[
{
"msg_contents": "> \n> Hi Bruce!\n> \n> > I have already asked for you to try a change to template/linux_m68k\n> > by changing the optimization -O2 to -O and see if you still need the\n> > postgres.h fmgr_ptr change you did. I assume you are using egcs, right?\n> \n> Can't remember that you asked me... But anyway, it wouldn't help. It's\n> defined in the SysV/m68k ABI that %d0 is used for scalar return values\n> and %a0 for pointer values. Both gcc and egcs do it like this, and\n> it's also independent from optimization level. (And, BTW, I didn't use\n> egcs.)\n> \n> This behaviour is one of the most prominent porting problems to m68k.\n> ANSI C says results are undefined if you call a function via pointer\n> and the pointer is declared to return another type than the function\n> actually returns. So m68k compilers conform to the standard here.\n> However, most programmers never expect such problems... also because\n> on most architectures it works without probs, because all values are\n> returned in the same register.\n\nYes, we admit that we break the standard with fmgr_ptr, because we\nreturn a variety of values depending on what function they call. It\nappears the egcs optimization on the powerpc or alpha cause a problem\nwhen optimization is -O2, but not -O. We may see more platforms with\nproblems as optimizers get smarter.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 14 Jun 1999 17:53:25 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PORTS] Patch for m68k architecture (fwd)"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> ANSI C says results are undefined if you call a function via pointer\n>> and the pointer is declared to return another type than the function\n>> actually returns. So m68k compilers conform to the standard here.\n\n> Yes, we admit that we break the standard with fmgr_ptr, because we\n> return a variety of values depending on what function they call. It\n> appears the egcs optimization on the powerpc or alpha cause a problem\n> when optimization is -O2, but not -O. We may see more platforms with\n> problems as optimizers get smarter.\n\nSeeing as how we also know that the function-call interface ought to be\nredesigned to handle NULLs better, maybe we should just bite the bullet\nand fix all of these problems at once by adopting a new standard\ninterface for everything that can be called via fmgr. It'd uglify the\ncode, no doubt, but I think we are starting to see an accumulation of\nproblems that justify doing something.\n\nHere is a straw-man proposal:\n\n Datum function (bool *resultnull,\n Datum *args,\n bool *argnull,\n int nargs)\n\nargs[i] is the i'th parameter, or undefined (perhaps always 0?)\nwhen argnull[i] is true. The function is responsible for setting\n*resultnull, and returns a Datum value if *resultnull is false.\nMost standard functions could ignore nargs since they'd know what it\nshould be, but we ought to pass it for flexibility.\n\nA useful addition to this scheme would be for fmgr to preset *resultnull\nto the OR of the input argnull[] array just before calling the function.\nIn the typical case where the function is \"strict\" (ie, result is NULL\nif any input is NULL), this would save the function from having to look\nat argnull[] at all; it'd just check *resultnull and immediately return\nif true.\n\nAs an example, int4 addition goes from\n\nint32\nint4pl(int32 arg1, int32 arg2)\n{\n return arg1 + arg2;\n}\n\nto\n\nDatum\nint4pl (bool *resultnull, Datum *args, bool *argnull, int nargs)\n{\n if (*resultnull)\n return (Datum) 0; /* value doesn't really matter ... */\n /* we can ignore argnull and nargs */\n\n return Int32GetDatum(DatumGetInt32(args[0]) + DatumGetInt32(args[1]));\n}\n\nThis is, of course, much uglier than the existing code, but we might be\nable to improve matters with some well-chosen macros for the boilerplate\nparts. What we actually end up writing might look something like\n\nDatum\nint4pl (PG_FUNCTION_ARGS)\n{\n PG_STRICT_FUNCTION(\t\t\t/* encapsulates null check */\n PG_ARG0_INT32;\n PG_ARG1_INT32;\n\n\tPG_RESULT_INT32( arg0 + arg1 );\n );\n}\n\nwhere the macros expand to things like \"int32 arg0 = DatumGetInt32(args[0])\"\nand \"return Int32GetDatum( x )\". It'd be worth a little thought to\ntry to set up a group of macros like that, I think.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 14 Jun 1999 20:51:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Cleaning up function interface (was Re: Patch for m68k architecture)"
},
{
"msg_contents": "\nHi Bruce!\n\n> Yes, we admit that we break the standard with fmgr_ptr, because we\n> return a variety of values depending on what function they call.\n\nYep... the correct thing would be to cast all such return values to a\ncommon type (e.g. long) and cast them back in the caller.\n\n> It appears the egcs optimization on the powerpc or alpha cause a\n> problem when optimization is -O2, but not -O.\n\nCan be like this on those archs. On m68k, however, the registers for\nfunction return values are the same independent of optimization level.\n\n> We may see more platforms with problems as optimizers get smarter.\n\nYep...\n\nRoman\n",
"msg_date": "Tue, 15 Jun 1999 09:05:41 +0200 (MET DST)",
"msg_from": "Roman Hodek <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PORTS] Patch for m68k architecture (fwd)"
},
{
"msg_contents": "\n> Seeing as how we also know that the function-call interface ought to\n> be redesigned to handle NULLs better, maybe we should just bite the\n> bullet and fix all of these problems at once by adopting a new\n> standard interface for everything that can be called via fmgr.\n[...]\n\nThis all looks fine. At least it would solve the current function\nreturn value problem on m68k.\n\nRoman\n",
"msg_date": "Tue, 15 Jun 1999 09:09:43 +0200 (MET DST)",
"msg_from": "Roman Hodek <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up function interface (was Re: Patch for m68k\n\tarchitecture)"
}
] |
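Tom Lane's straw-man interface in the thread above can be sketched as compilable code. This is only an illustration of the proposed calling convention: the `Datum` typedef, the conversion helpers, and the `call_strict` driver here are stand-ins invented for the sketch, not the real definitions from `postgres.h` or fmgr.

```cpp
#include <cstdint>

// Stand-ins for the Postgres types used in the proposal (assumptions for
// illustration; the real definitions live in postgres.h).
typedef std::uintptr_t Datum;
static inline Datum Int32GetDatum(int x) { return (Datum)(unsigned)x; }
static inline int DatumGetInt32(Datum d) { return (int)(unsigned)d; }

// The proposed uniform signature: every fmgr-callable function receives
// its arguments as an array of Datums plus per-argument null flags.
#define PG_FUNCTION_ARGS bool *resultnull, Datum *args, bool *argnull, int nargs

// int4pl rewritten to the proposed interface. It is "strict", so it only
// needs to check the preset *resultnull flag, not argnull[] itself.
Datum int4pl(PG_FUNCTION_ARGS)
{
    (void)argnull; (void)nargs;   // strict function: these can be ignored
    if (*resultnull)
        return (Datum) 0;         // value doesn't really matter
    return Int32GetDatum(DatumGetInt32(args[0]) + DatumGetInt32(args[1]));
}

// The role fmgr would play for a strict function: preset *resultnull to
// the OR of the argnull[] array, then call through the pointer.
Datum call_strict(Datum (*fn)(PG_FUNCTION_ARGS),
                  Datum *args, bool *argnull, int nargs, bool *resultnull)
{
    *resultnull = false;
    for (int i = 0; i < nargs; i++)
        if (argnull[i]) { *resultnull = true; break; }
    return fn(resultnull, args, argnull, nargs);
}
```

Because every function now shares one signature, the pointer-type mismatch that breaks m68k (and `-O2` builds elsewhere) disappears: fmgr always calls through a correctly typed pointer and always gets a `Datum` back.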
[
{
"msg_contents": "\nFirst of all, Great job on the release!\n\nOne little detail about the installation though:\n\n The main directory when one unpacks the postgres-6.5.tar.gz tarball is \n incorrectly named 'postgresl-6.5', missing the q. \n\n\n/Daniel\n_______________________________________________________________ /\\__ \n Daniel Lundin - MediaCenter \\/\n http://www.umc.se/~daniel/\n",
"msg_date": "15 Jun 1999 08:42:08 -0000",
"msg_from": "Daniel Lundin <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 6.5 Relase Typo"
},
{
"msg_contents": "\nFixed...thanks to everyone that beam'd me a message on this :(\n\n\nOn 15 Jun 1999, Daniel Lundin wrote:\n\n> \n> First of all, Great job on the release!\n> \n> One little detail about the installation though:\n> \n> The main directory when one unpacks the postgres-6.5.tar.gz tarball is \n> incorrectly named 'postgresl-6.5', missing the q. \n> \n> \n> /Daniel\n> _______________________________________________________________ /\\__ \n> Daniel Lundin - MediaCenter \\/\n> http://www.umc.se/~daniel/\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 15 Jun 1999 08:59:09 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 6.5 Relase Typo"
}
] |
[
{
"msg_contents": "Here are the changes to get a clean regression test run under AIX.\n\nIt appears that only minor changes in precision and messages occur for\nthe int* and geometry classes.\n\nAll the time related tests fail. I can't tell if this is really a bug\nor a problem. I tried two scenarios and both failed:\n\n 1) export TZ=PST8PDT\n gmake bigtest\n\n 2) kill postgres\n export TZ=PST8PDT\n start postgres\n gmake bigtest\n\nAttached are the results.out files that were created. I don't have access\nto the CVS tree, so someone will need to drop them for me.\n\nThanks.",
"msg_date": "Tue, 15 Jun 1999 06:05:53 -0500",
"msg_from": "\"David R. Favor\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "AIX 432 regression test expected results changes"
}
] |
[
{
"msg_contents": "I am having trouble with PQclear causing a segmentation fault, and I don't\nreally know where to look. I have 3 identical tables (only 2 come into play)\nall of the form\n\n+----------------------------------+----------------------------------+-------+\n| Field | Type | Length|\n+----------------------------------+----------------------------------+-------+\n| leaf1 | int4 not null | 4 |\n| leaf2 | int4 not null | 4 |\n+----------------------------------+----------------------------------+-------+\n\nand the last thing that happens before trouble is:\n\nSELECT leaf2 FROM pers_room WHERE leaf1=1\nSELECT leaf1 FROM pers_room WHERE leaf2=1\nSELECT leaf2 FROM pers_comp WHERE leaf1=1\nSegmentation fault (core dumped)\n\n#0 0x40112370 in free ()\n#1 0x400b5060 in _GLOBAL_OFFSET_TABLE_ () at pqsignal.c:42\n#2 0x400ad3be in PQclear (res=0x76280) at fe-exec.c:325\n#3 0x400a6181 in PgConnection::Exec (this=0xefbfd44c, \n query=0x76280 \"SELECT leaf2 FROM pers_comp WHERE leaf1=1\")\n at pgconnection.cc:98\n\n323 /* Free the top-level tuple pointer array */\n324 if (res->tuples)\n325 free(res->tuples);\n\nand I suspect from the manpage:\n\n Otherwise, if the argument does not match a pointer\n earlier returned by the calloc() malloc() or realloc() function, or if\n the space has been deallocated by a call to free() or realloc(), general\n havoc may occur.\n\nAs you see from the backtrace, I am using libpq++. Anyone have a suggestion\nwhere to look?\n\nCheers,\n\nPatrick\n",
"msg_date": "Tue, 15 Jun 1999 12:55:20 +0100 (BST)",
"msg_from": "\"Patrick Welche\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Memory problem?"
},
{
"msg_contents": "\"Patrick Welche\" <[email protected]> writes:\n> I am having trouble with PQclear causing a segmentation fault, and I don't\n> really know where to look.\n\nNot at PQclear(); almost surely, the bug lies elsewhere. The most\nlikely bets are (a) that PQclear is being called twice for the same\nPGresult (although this looks unlikely with the current libpq++,\nsince it doesn't give the calling app direct access to the PGresult),\nor (b) that some random other bit of code is clobbering memory that\ndoesn't belong to it. When you make a mistake like writing a little\nbit past the end of a malloc'd piece of memory, the usual symptom is\ncoredumps in later malloc or free operations, because what you've\nclobbered is malloc's memory management data structures.\n\nUnfortunately that means the bug might be almost anywhere else in\nyour app :-(. Good luck...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 15 Jun 1999 09:55:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Memory problem? "
},
{
"msg_contents": "On Tue, 15 Jun 1999, Patrick Welche wrote:\n\n> I am having trouble with PQclear causing a segmentation fault, and I don't\n> really know where to look. I have 3 identical tables (only 2 come into play)\n> all of the form\n> \n\n[snip]\n\n\n> As you see from the backtrace, I am using libpq++. Anyone have a suggestion\n> where to look?\n> \n> Cheers,\n> \n> Patrick\n> \n> \n\nHow recent is the libpq++ that you're using?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 15 Jun 1999 11:15:03 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Memory problem?"
},
{
"msg_contents": "Vince Vielhaber wrote:\n> \n> How recent is the libpq++ that you're using?\n\ncvs from yesterday.\n\nCheers,\n\nPatrick\n(Still hunting for the needle...)\n",
"msg_date": "Tue, 15 Jun 1999 16:43:49 +0100 (BST)",
"msg_from": "\"Patrick Welche\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Memory problem?"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> \"Patrick Welche\" <[email protected]> writes:\n> > I am having trouble with PQclear causing a segmentation fault, and I don't\n> > really know where to look.\n> \n> Not at PQclear(); almost surely, the bug lies elsewhere.\n...\n> Unfortunately that means the bug might be almost anywhere else in\n> your app :-(. Good luck...\n\nSure enough - pass the PgDatabase as a reference was the solution!\n\nCheers,\n\nPatrick\n",
"msg_date": "Fri, 18 Jun 1999 13:03:14 +0100 (BST)",
"msg_from": "\"Patrick Welche\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Memory problem?"
},
{
"msg_contents": "\"Patrick Welche\" <[email protected]> writes:\n>>>> I am having trouble with PQclear causing a segmentation fault, and I don't\n>>>> really know where to look.\n\n> Sure enough - pass the PgDatabase as a reference was the solution!\n\nHmm. If copying a PgDatabase object doesn't work, then the copy\nconstructor and assignment operators for it ought to be disabled\n(by declaring them private). Vince?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 18 Jun 1999 10:23:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Memory problem? "
},
{
"msg_contents": "On Fri, 18 Jun 1999, Tom Lane wrote:\n\n> \"Patrick Welche\" <[email protected]> writes:\n> >>>> I am having trouble with PQclear causing a segmentation fault, and I don't\n> >>>> really know where to look.\n> \n> > Sure enough - pass the PgDatabase as a reference was the solution!\n> \n> Hmm. If copying a PgDatabase object doesn't work, then the copy\n> constructor and assignment operators for it ought to be disabled\n> (by declaring them private). Vince?\n\nYep. It's at the top of the list for the next updates. I had noticed\na few things I wasn't all that thrilled with - some of 'em slipped by\nand I didn't see 'em until I sent in the patches. The docs are way\nout of date too. There's a couple odds and ends I need to tie up on\nthe web pages first, tho.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Fri, 18 Jun 1999 10:40:11 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Memory problem? "
}
] |
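The fix Tom Lane suggests at the end of this thread (declaring the copy constructor and assignment operator private, the standard pre-C++11 idiom) can be sketched with a hypothetical handle class. `ConnHandle` is an invented stand-in, not the real `PgDatabase`; it only models the property that matters here, namely owning a raw resource that must be freed exactly once.

```cpp
#include <cstdlib>

// Hypothetical stand-in for a libpq++-style object that owns a raw
// resource. If such an object is passed by value, the copy and the
// original both free the same pointer in their destructors -- the kind
// of double free behind the reported PQclear crash.
class ConnHandle {
public:
    ConnHandle() : res(std::malloc(16)) {}
    ~ConnHandle() { std::free(res); }
private:
    // The suggested fix: declare the copy operations private (and leave
    // them undefined), so accidental pass-by-value fails at compile time
    // instead of corrupting the allocator at run time.
    ConnHandle(const ConnHandle &);
    ConnHandle &operator=(const ConnHandle &);
    void *res;
};

// Callers are forced to take the handle by reference, which was exactly
// Patrick's workaround above.
inline bool use(ConnHandle &db) { (void)db; return true; }
```

With the copy operations private, `use(db)` compiles only because it takes a reference; a by-value signature would be rejected by the compiler.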
[
{
"msg_contents": "\nAnd over 500 copies downloaded from the main site so far...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 15 Jun 1999 09:45:26 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Less then 12hrs since release ..."
},
{
"msg_contents": "On Tue, 15 Jun 1999, The Hermit Hacker wrote:\n\n> \n> And over 500 copies downloaded from the main site so far...\n\nCool!\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 15 Jun 1999 10:42:38 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Less then 12hrs since release ..."
}
] |
[
{
"msg_contents": "That's not bad :-)\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n-----Original Message-----\nFrom: The Hermit Hacker [mailto:[email protected]]\nSent: Tuesday, June 15, 1999 1:45 PM\nTo: [email protected]\nSubject: [HACKERS] Less then 12hrs since release ...\n\n\n\nAnd over 500 copies downloaded from the main site so far...\n\nMarc G. Fournier ICQ#7615664 IRC Nick:\nScrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary:\nscrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 15 Jun 1999 14:05:54 +0100",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Less then 12hrs since release ..."
},
{
"msg_contents": "On Tue, 15 Jun 1999, Peter Mount wrote:\n\n> That's not bad :-)\n\nActually, miscounted...I'm having problems with the logging used in\nBero-FTPD, where I can't get the PostgreSQL logs seperated from the main\nlogs *sigh*\n\n321 since this morning...still not bad, considering that in alot of\nplaces, ppl aren't awake yet :)\n\n > \n> -- \n> Peter Mount\n> Enterprise Support\n> Maidstone Borough Council\n> Any views stated are my own, and not those of Maidstone Borough Council.\n> \n> \n> -----Original Message-----\n> From: The Hermit Hacker [mailto:[email protected]]\n> Sent: Tuesday, June 15, 1999 1:45 PM\n> To: [email protected]\n> Subject: [HACKERS] Less then 12hrs since release ...\n> \n> \n> \n> And over 500 copies downloaded from the main site so far...\n> \n> Marc G. Fournier ICQ#7615664 IRC Nick:\n> Scrappy\n> Systems Administrator @ hub.org \n> primary: [email protected] secondary:\n> scrappy@{freebsd|postgresql}.org \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 15 Jun 1999 10:16:16 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Less then 12hrs since release ..."
}
] |
[
{
"msg_contents": "It hasn't even been mentioned on Freshmeat, Linux today and so on.\n\nWho will take care of that?\n\n",
"msg_date": "Tue, 15 Jun 1999 15:19:50 +0200",
"msg_from": "Kaare Rasmussen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Less then 12hrs since release ..."
},
{
"msg_contents": "On Tue, 15 Jun 1999, Kaare Rasmussen wrote:\n\n> It hasn't even been mentioned on Freshmeat, Linux today and so on.\n> \n> Who will take care of that?\n\nits already been submitted to FreshMeat, but havent' got a clue how to\nsubmit to any of the others :(\n\nCan someone send us a list of URLs that we should be announcing this to?\nCC'd to [email protected], so that I can get \"the guy in the office\" working\non it? :)\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 15 Jun 1999 11:04:54 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Less then 12hrs since release ..."
},
{
"msg_contents": "On Tue, 15 Jun 1999, The Hermit Hacker wrote:\n> its already been submitted to FreshMeat, but havent' got a clue how to\n> submit to any of the others :(\n> \n> Can someone send us a list of URLs that we should be announcing this to?\n> CC'd to [email protected], so that I can get \"the guy in the office\" working\n> on it? :)\n\n http://lwn.net/daily/\n http://www.xshare.com/\n\n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n",
"msg_date": "Tue, 15 Jun 1999 18:14:29 +0400 (MSD)",
"msg_from": "Oleg Broytmann <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Less then 12hrs since release ..."
},
{
"msg_contents": "> On Tue, 15 Jun 1999, The Hermit Hacker wrote:\n> > its already been submitted to FreshMeat, but havent' got a clue how to\n> > submit to any of the others :(\n> > \n> > Can someone send us a list of URLs that we should be announcing this to?\n> > CC'd to [email protected], so that I can get \"the guy in the office\" working\n> > on it? :)\n> \n> http://lwn.net/daily/\n> http://www.xshare.com/\n\nI got the xshare guy to subscribe to the annouce mailing list. I\nrecommend we try and do the same with the others, so he know\nimmediately. I haven't see anything on xshare yet.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 15 Jun 1999 10:34:26 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Less then 12hrs since release ..."
},
{
"msg_contents": "\nSubmitted to those two, thanks...\n\nOn Tue, 15 Jun 1999, Oleg Broytmann wrote:\n\n> On Tue, 15 Jun 1999, The Hermit Hacker wrote:\n> > its already been submitted to FreshMeat, but havent' got a clue how to\n> > submit to any of the others :(\n> > \n> > Can someone send us a list of URLs that we should be announcing this to?\n> > CC'd to [email protected], so that I can get \"the guy in the office\" working\n> > on it? :)\n> \n> http://lwn.net/daily/\n> http://www.xshare.com/\n> \n> > Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> > Systems Administrator @ hub.org \n> > primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n> Oleg.\n> ---- \n> Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n> Programmers don't die, they just GOSUB without RETURN.\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 15 Jun 1999 11:37:06 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Less then 12hrs since release ..."
},
{
"msg_contents": "I will\n\nOn Tue, 15 Jun 1999, Kaare Rasmussen wrote:\n\n> It hasn't even been mentioned on Freshmeat, Linux today and so on.\n> \n> Who will take care of that?\n> \n> \n\n=========================================================================\nJeff MacDonald // Hub.org Networking Services // PostgreSQL INC\[email protected] // [email protected] // [email protected]\nhttp://hub.org/~jeff // http://hub.org // http://pgsql.com\n=========================================================================\n\n",
"msg_date": "Tue, 15 Jun 1999 18:55:21 -0300 (ADT)",
"msg_from": "Jeff MacDonald <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Less then 12hrs since release ..."
}
] |
[
{
"msg_contents": "I've just tried it and it worked (although it was a little slow).\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n-----Original Message-----\nFrom: Wayne [mailto:[email protected]]\nSent: Monday, June 14, 1999 12:33 PM\nTo: [email protected]\nSubject: [HACKERS] off-topic: pgaccess?\n\n\nHi,\nDose anyone know if www.flex.ro is still around? I've tried several time\nto gain access\nto the server with no luck. Sorry for the off topic traffic.\nWayne\n\n",
"msg_date": "Tue, 15 Jun 1999 14:57:54 +0100",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] off-topic: pgaccess?"
}
] |
[
{
"msg_contents": "It would seem to be appropriate for the Postgres team to assume\nmanagement of the LDP PostgreSQL-HOWTO as a group effort. Does anyone\nknow the particulars of LDP: where they are located, how doc ownership\nis assigned, etc.?\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 15 Jun 1999 14:01:20 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Linux Documentation Project"
},
{
"msg_contents": "On Tue, 15 Jun 1999, Thomas Lockhart wrote:\n\n> It would seem to be appropriate for the Postgres team to assume\n> management of the LDP PostgreSQL-HOWTO as a group effort. Does anyone\n> know the particulars of LDP: where they are located, how doc ownership\n> is assigned, etc.?\n> \n> - Thomas\n\nGreg Hankins is the coordinator...I have three addresses listed:\n\[email protected]\n\[email protected]\[email protected]\n\nURL: metalab.unc.edu/LDP\n\nHe is very helpful...maybe we could also get sunsite (now\nmetalab) to update the release of pg they have available for\ndownload.\n\nIn http://metalab.unc.edu/pub/Linux/apps/database/postgresSQL/\n ^^^^ (sic)\nThey offer only postgresql-6.2.tar.gz\n\n\n------- North Richmond Community Mental Health Center -------\n\nThomas Good MIS Coordinator\nVital Signs: tomg@ { admin | q8 } .nrnet.org\n Phone: 718-354-5528 \n Fax: 718-354-5056 \n \n/* Member: Computer Professionals For Social Responsibility */ \n\n",
"msg_date": "Wed, 16 Jun 1999 07:37:05 -0400 (EDT)",
"msg_from": "Thomas Good <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Linux Documentation Project"
}
] |
[
{
"msg_contents": ">It hasn't even been mentioned on Freshmeat, Linux today and so on.\n>\n>Who will take care of that?\n\nPost has been made to slashdot.\n\n- Brandon\n\n\n\n------------------------------------------------------\nSmith Computer Lab Administrator,\nCase Western Reserve University\n [email protected]\n 216 - 368 - 5066\n http://cwrulug.cwru.edu\n------------------------------------------------------\n\nPGP Public Key Fingerprint: 1477 2DCF 8A4F CA2C 8B1F 6DFE 3B7C FDFB\n\n\n",
"msg_date": "Tue, 15 Jun 1999 10:51:25 -0400",
"msg_from": "\"Brandon Palmer\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS} Less than 12hrs since release"
}
] |
[
{
"msg_contents": "Hi,\nI'm trying to take a look at pgaccess by going to the wep site\nwww.flex.ro and seeing the\nscreen layouts. But, I'm still unable to get thru. Dose anyone know of a\nmirror for\nwww.flex.ro?\nTIA.\nWayne\n\n",
"msg_date": "Tue, 15 Jun 1999 11:57:50 -0400",
"msg_from": "Wayne <[email protected]>",
"msg_from_op": true,
"msg_subject": "Evaluating Front ends to PG."
}
] |
[
{
"msg_contents": "\nWe're published...kinda :)\n\n\t\thttp://lwn.net/daily\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 15 Jun 1999 16:07:07 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Linux Weekly News ..."
}
] |
[
{
"msg_contents": "I'm attempting to create a custom image object for my database. At this point the \nobject contains only two variable length text fields and two size specifiers. Attatched below \nis my code for the object and the image_in function. Whenever I attempt to insert something \nusing the image_in function I get one of two errors. \n\t\n\ttest2=> insert into images2 values ('ad, fg, 32, 64');\nNOTICE: a\nNOTICE: ad, fg, 32, 64\nNOTICE: ad\nNOTICE: fg\nNOTICE: 32\nNOTICE: 64\nNOTICE: f\nNOTICE: g\nNOTICE: r\nNOTICE: c\nNOTICE: d\nNOTICE: ad\nNOTICE: fg\nERROR: Tuple is too big: size 52240\n\nor\n\ntest2=> insert into images2 values ('ad, fg, 32, 64');\nNOTICE: a\nNOTICE: ad, fg, 32, 64\nNOTICE: ad\nNOTICE: fg\nNOTICE: 32\nNOTICE: 64\nNOTICE: f\nPQexec() -- Request was sent to backend, but backend closed the channel before responding.\n This probably means the backend terminated abnormally before or while processing the \nrequest.\n \n I am at a loss to explain either of these errors can anyone give me a hand?\n \n \n\t\n/*////////////////////////////////////////////////////////////////\n// ImageObject.c\n// Image Object and input/output definitions for the image type\n// in the PostgresSQl database. Defined for use by the Computer\n// Vision Laboratory at Umass Amherst.\n// \n// Collin Lynch\n// 6/13/1999\n////////////////////////////////////////////////////////////////*/\n\n/* Includes*/\n#include <stdio.h>\n#include <string.h>\n#include \"postgres.h\"\n#include \"libpq-fe.h\"\n#include \"utils/elog.h\"\n#include \"utils/palloc.h\"\n\n\n/*Struct definition.*/\ntypedef struct variable_text {\n\n int4 length;\n char data[1];\n} variable_text;\n\ntypedef struct image {\n\n int4 size; /* The memory size of the image object.*/\n variable_text* name; /* The Image Name.*/\n variable_text* type; /* The Image Encoding Type.*/\n int4 width; /* Thw width of the image.*/\n int4 height; /* The height of the image.*/\n} image;\n\n\n/*Definition of the input function.*/\nimage* image_in(char* imagestring) {\n\n image* result;\n char* name;\n char* encoding;\n int height, width, size;\n char* temp;\n\n name = (char *) palloc(sizeof(char) * 40);\n encoding = (char *)palloc(sizeof(char) * 30);\n temp = (char *)palloc(sizeof(char) * 2);\n\n elog(0, \"a\");\n\n elog(0, imagestring);\n\n if(sscanf(imagestring, \"%[^,]%*[, ]%[^,]%*[, ]%i%*[, ]%i)\", name, encoding, &width, &height) != 4) {\n\n elog(0, temp);\n elog(1, \"image_in: Parse Error.\");\n return NULL;\n }\n\n elog(0, name);\n elog(0, encoding);\n itoa(width, temp);\n elog(0, temp);\n itoa(height, temp);\n elog(0, temp);\n\n result = (image *)palloc(sizeof(image));\n\n elog(0, \"f\");\n\n result->name = (variable_text *)palloc(VARHDRSZ + VARSIZE(name));\n memmove(result->name->data, name, VARSIZE(name));\n result->name->length = VARHDRSZ + VARSIZE(name);\n\n elog(0, \"g\");\n\n result->type = (variable_text *)palloc(VARHDRSZ + VARSIZE(name));\n memmove(result->type->data, encoding, VARSIZE(encoding));\n result->type->length = VARHDRSZ + VARSIZE(encoding);\n\n elog(0, \"r\");\n\n result->width = width;\n result->height = height;\n\n elog(0, \"c\");\n\n /*Set the size of the image object.*/\n result->size = (VARHDRSZ * 3) + (sizeof(int4) * 2) + VARSIZE(result->name) + VARSIZE(result->type);\n \n elog(0, \"d\");\n\n elog(0, result->name->data);\n elog(0, result->type->data);\n \n pfree(name);\n pfree(encoding);\n\n\n return(result);\n}\n\n\n\n/*Definition of the output function.*/\nchar* image_out(image* object) {\n\n char* result;\n if (object == NULL) {\n return NULL;\n }\n result = (char *)palloc(sizeof(char) * 60);\n sprintf(result, \"(%c.%c %f, %f)\", object->name, object->type, object->width, object->height);\n return(result);\n}",
"msg_date": "Tue, 15 Jun 1999 15:16:18 -0400 (EDT)",
"msg_from": "\"Collin F. Lynch\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "tuples"
}
] |
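The "Tuple is too big: size 52240" error in the message above is consistent with two problems in the posted code: the struct stores raw `variable_text*` pointers (a variable-length Postgres value must be one contiguous block of bytes, since the backend copies it verbatim into the tuple using the leading size word), and `VARSIZE()` is applied to plain `char*` buffers whose first bytes are text, not a length header, yielding garbage sizes. (`itoa()` is also not a standard C function.) The following sketch shows only the general idea of a contiguous, pointer-free layout; the field order and helpers are invented for illustration and do not follow the server's real varlena macros.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdlib>
#include <cstring>

// One contiguous allocation whose first int32 is the total byte size.
// No embedded pointers: the name and encoding-type strings are packed
// back-to-back in data[]. (Illustrative layout, not real varlena rules.)
struct Image {
    int32_t size;     // total bytes, including this header
    int32_t width;
    int32_t height;
    int32_t name_len; // bytes of name; the rest of data[] is the type
    char    data[1];  // name bytes followed by encoding-type bytes
};

Image *image_make(const char *name, const char *type, int w, int h)
{
    size_t nlen = std::strlen(name), tlen = std::strlen(type);
    size_t total = offsetof(Image, data) + nlen + tlen;
    Image *img = (Image *)std::malloc(total);   // backend code would palloc
    img->size = (int32_t)total;
    img->width = w;
    img->height = h;
    img->name_len = (int32_t)nlen;
    std::memcpy(img->data, name, nlen);         // pack strings contiguously
    std::memcpy(img->data + nlen, type, tlen);
    return img;
}
```

Because the size word now reflects the real packed length, the tuple size stays small and well-defined instead of depending on whatever bytes `VARSIZE()` happened to read.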
[
{
"msg_contents": "[Charset ISO-8859-1 unsupported, filtering to ASCII...]\n> \tHi alls\n> \n> \tI'm working on a port of postgres on BeOS (www.be.com). BeOS is not \n> a real UNIX, but it provide a subset of the posix API. At this stage \n> I've a working version ofit. But since 6.4.2, I've a lot of problems \n> (dynamic loading doesn't work any more...) with the fact that \n> postgresmain is call directly instead of the old exec method. BeOS \n> really don't like to do a lot of thing after a fork and before an exec \n> :=(. \n> \tI would like to know how hard it would be to add the exec call. As \n> I understand it, I have to get back all global variables and shared \n> memory and perhaps doing something with sockets/file descriptors ? I've \n> a ready solution for shared memory but I need some help regarding the \n> others points.\n\nYou can put back the exec fairly easily. You just need to pass the\nproper parameters, and change the fork to an exec. You can look at the\nolder code that did the exec for an example, and #ifdef the exec() back\ninto the code.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 15 Jun 1999 18:09:16 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] BeOS port"
},
{
"msg_contents": "\tHi alls\n\n\tI'm working on a port of postgres on BeOS (www.be.com). BeOS is not \na real UNIX, but it provide a subset of the posix API. At this stage \nI've a working version ofit. But since 6.4.2, I've a lot of problems \n(dynamic loading doesn't work any more...) with the fact that \npostgresmain is call directly instead of the old exec method. BeOS \nreally don't like to do a lot of thing after a fork and before an exec \n:=(. \n\tI would like to know how hard it would be to add the exec call. As \nI understand it, I have to get back all global variables and shared \nmemory and perhaps doing something with sockets/file descriptors ? I've \na ready solution for shared memory but I need some help regarding the \nothers points.\n\n\tAny will really help me.\n\n\t\tcyril\n",
"msg_date": "Tue, 15 Jun 1999 23:43:31 CEST",
"msg_from": "\"Cyril VELTER\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "BeOS port"
}
] |
[
{
"msg_contents": "Standard (draft) says:\n\n Regardless of the isolation level of the SQL-transaction, phenomena\n P1, P2, and P3 shall not occur during the implied reading of schema\n definitions performed on behalf of executing an SQL-statement, the\n checking of integrity constraints, and the execution of referen-\n tial actions associated with referential constraints. \n\nI'm not sure what they exactly mean. Could someone run two tests\nfor me (in Oracle and Informix/Sybase)?\n\ncreate table p (k integer primary key);\ncreate table f (k integer references p(k));\n\nsession-1:\nset transaction isolation mode serializable;\nselect * from f; -- just to ensure that xaction began -:)\n\nsession-2:\ninsert into p values (1);\ncommit;\n\nsession-1:\ninsert into f values (1);\n--\n-- Results? Abort?\n--\n\nWhat's the result in the case of read committed isolevel in\nsession-1? Is insert succeeded?\n\nTIA!\n\nVadim\n",
"msg_date": "Wed, 16 Jun 1999 11:55:33 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Q: RI and isolevels"
}
] |
[
{
"msg_contents": "% psql test1\nWelcome to the POSTGRESQL interactive sql monitor:\n Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n[PostgreSQL 6.5.0 on i386-unknown-freebsd3.2, compiled by gcc 2.7.2.1]\n\n type \\? for help on slash commands\n type \\q to quit\n type \\g or terminate with semicolon to execute query\n You are currently connected to the database: test1\n\ntest1=> select count(*), max(\"ID\"), min(\"ID\"), avg(\"ID\") from \"ItemsBars\";\n count| max| min| avg\n------+-------+-----+----\n677719|3075717|61854|-251\n(1 row)\n\nOverflow, perhaps?\n\nGene Sokolov.\n\n\n",
"msg_date": "Wed, 16 Jun 1999 10:41:39 +0400",
"msg_from": "\"Gene Sokolov\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "6.5.0 - Overflow bug in AVG( )"
},
{
"msg_contents": "> [PostgreSQL 6.5.0 on i386-unknown-freebsd3.2, compiled by gcc 2.7.2.1]\n> test1=> select count(*), max(\"ID\"), min(\"ID\"), avg(\"ID\") from \"ItemsBars\";\n> count| max| min| avg\n> ------+-------+-----+----\n> 677719|3075717|61854|-251\n> (1 row)\n> Overflow, perhaps?\n\nOf course. These are integer fields? I've been considering changing\nall accumulators (and results) for integer aggregate functions to\nfloat8, but have not done so yet. I was sort of waiting for a v7.0\nrelease, but am not sure why...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 16 Jun 1999 13:03:46 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.5.0 - Overflow bug in AVG( )"
},
{
"msg_contents": "> > [PostgreSQL 6.5.0 on i386-unknown-freebsd3.2, compiled by gcc 2.7.2.1]\n> > test1=> select count(*), max(\"ID\"), min(\"ID\"), avg(\"ID\") from\n\"ItemsBars\";\n> > count| max| min| avg\n> > ------+-------+-----+----\n> > 677719|3075717|61854|-251\n> > (1 row)\n> > Overflow, perhaps?\n>\n> Of course. These are integer fields? I've been considering changing\n\nYes, the fields are int4\n\n> all accumulators (and results) for integer aggregate functions to\n> float8, but have not done so yet. I was sort of waiting for a v7.0\n> release, but am not sure why...\n\nFloat8 accumulator seems to be a good solution if AVG is limited to\nint/float types. I wonder if it could produce system dependency in AVG due\nto rounding errors. Some broader solution should be considered though if you\nwant AVG to work on numeric/decimal as well.\n\nGene Sokolov.\n\n\n",
"msg_date": "Wed, 16 Jun 1999 17:27:00 +0400",
"msg_from": "\"Gene Sokolov\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] 6.5.0 - Overflow bug in AVG( )"
},
{
"msg_contents": "> Float8 accumulator seems to be a good solution if AVG is limited to\n> int/float types. I wonder if it could produce system dependency in AVG due\n> to rounding errors. Some broader solution should be considered though if you\n> want AVG to work on numeric/decimal as well.\n\nThe implementation can be specified for each datatype individually, so\nthat's not a problem. afaik the way numeric/decimal work it would be\nfine to use those types as their own accumulators. It's mostly the\nint2/int4/int8 types which are the problem, since they silently\noverflow (on most machines?).\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 16 Jun 1999 14:03:38 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.5.0 - Overflow bug in AVG( )"
},
{
"msg_contents": "Thomas Lockhart wrote:\n\n>\n> > [PostgreSQL 6.5.0 on i386-unknown-freebsd3.2, compiled by gcc 2.7.2.1]\n> > test1=> select count(*), max(\"ID\"), min(\"ID\"), avg(\"ID\") from \"ItemsBars\";\n> > count| max| min| avg\n> > ------+-------+-----+----\n> > 677719|3075717|61854|-251\n> > (1 row)\n> > Overflow, perhaps?\n>\n> Of course. These are integer fields? I've been considering changing\n> all accumulators (and results) for integer aggregate functions to\n> float8, but have not done so yet. I was sort of waiting for a v7.0\n> release, but am not sure why...\n\n Wouldn't it be better to use NUMERIC for the avg(int) state\n values? It will never loose any significant digit.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Wed, 16 Jun 1999 16:06:58 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.5.0 - Overflow bug in AVG( )"
},
{
"msg_contents": "> > Of course. These are integer fields? I've been considering changing\n> > all accumulators (and results) for integer aggregate functions to\n> > float8, but have not done so yet. I was sort of waiting for a v7.0\n> > release, but am not sure why...\n> \n> Wouldn't it be better to use NUMERIC for the avg(int) state\n> values? It will never loose any significant digit.\n\nSure. It would be fast, right? avg(int) is likely to be used a lot,\nand should be as fast as possible.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 16 Jun 1999 14:20:58 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.5.0 - Overflow bug in AVG( )"
},
{
"msg_contents": "\"Gene Sokolov\" <[email protected]> writes:\n> test1=> select count(*), max(\"ID\"), min(\"ID\"), avg(\"ID\") from \"ItemsBars\";\n> count| max| min| avg\n> ------+-------+-----+----\n> 677719|3075717|61854|-251\n\n> Overflow, perhaps?\n\nsum() and avg() for int fields use int accumulators. You might want\nto use avg(float8(field)) to get a less-likely-to-overflow result.\n\nSomeday it'd be a good idea to revise the sum() and avg() aggregates\nto use float or numeric accumulators in all cases. This'd require\ninventing a few more cross-data-type operators...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 16 Jun 1999 10:37:53 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.5.0 - Overflow bug in AVG( ) "
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> Some broader solution should be considered though if you\n>> want AVG to work on numeric/decimal as well.\n\n> The implementation can be specified for each datatype individually,\n\nIn the current implementation, each datatype does use its own type as\nthe accumulator --- and also as the counter. float8 and numeric are\nfine, float4 is sort of OK (a float8 accumulator would be better for\naccuracy reasons), int4 loses, int2 loses *bad*.\n\nTo fix it we'd need to invent operators that do the appropriate cross-\ndata-type operations. For example, int4 avg using float8 accumulator\nwould need \"float8 + int4 yielding float8\" and \"float8 / int4 yielding\nint4\", neither of which are to be found in pg_proc at the moment. But\nit's a straightforward thing to do.\n\nint8 is the only integer type that I wouldn't want to use a float8\naccumulator for. Maybe numeric would be the appropriate thing here,\nslow though it be.\n\nNote that switching over to float accumulation would *not* be real\npalatable until we have fixed the memory-leak issue. avg() on int4\ndoesn't leak memory currently, but it would with a float accumulator...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 16 Jun 1999 10:52:12 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.5.0 - Overflow bug in AVG( ) "
},
{
"msg_contents": "Thomas Lockhart wrote:\n\n>\n> > > Of course. These are integer fields? I've been considering changing\n> > > all accumulators (and results) for integer aggregate functions to\n> > > float8, but have not done so yet. I was sort of waiting for a v7.0\n> > > release, but am not sure why...\n> >\n> > Wouldn't it be better to use NUMERIC for the avg(int) state\n> > values? It will never loose any significant digit.\n>\n> Sure. It would be fast, right? avg(int) is likely to be used a lot,\n> and should be as fast as possible.\n\n I think it would be fast enough, even if I have things in\n mind how to speed it up. But that would result in a total\n rewrite of NUMERIC from scratch.\n\n The only math function of NUMERIC which is time critical for\n AVG() is ADD. And even for int8 the number of digits it has\n to perform is relatively small. I expect the time spent on\n that is negligible compared to the heap scanning required to\n get all the values.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 17 Jun 1999 01:20:23 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.5.0 - Overflow bug in AVG( )"
}
] |
[
{
"msg_contents": "\nIn Informix:\n\n> Standard (draft) says:\n> \n> Regardless of the isolation level of the SQL-transaction, phenomena\n> P1, P2, and P3 shall not occur during the implied reading of schema\n> definitions performed on behalf of executing an SQL-statement, the\n> \nan alter table, create index or the like (tx must see new schema)\n\n> checking of integrity constraints, and the execution of referen-\n> tial actions associated with referential constraints. \n> \n> I'm not sure what they exactly mean. Could someone run two tests\n> for me (in Oracle and Informix/Sybase)?\n> \n> create table p (k integer primary key);\n> create table f (k integer references p(k));\n> \n> session-1:\nbegin work;\n> set transaction isolation level serializable;\n Informix needs: ^^^^^ level not mode\n> select * from f; -- just to ensure that xaction began -:)\n> \n> session-2:\nbegin work;\n> insert into p values (1);\n> commit work;\n> \n> session-1:\n> insert into f values (1);\n> --\n> -- Results? Abort?\n> --\n> \nGoes ok in both isolation levels. Only if session-2 insert is not committed,\nthe session-1 insert fails with:\n 691: Missing key in referenced table for referential constraint\n(zeu.r155_262).\n 144: ISAM error: key value locked\n\n> What's the result in the case of read committed isolevel in\n> session-1? Is insert succeeded?\n> \nYes.\n\nAndreas\n",
"msg_date": "Wed, 16 Jun 1999 10:09:44 +0200",
"msg_from": "ZEUGSWETTER Andreas IZ5 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Q: RI and isolevels"
},
{
"msg_contents": "ZEUGSWETTER Andreas IZ5 wrote:\n> \n> In Informix:\n> > set transaction isolation level serializable;\n> Informix needs: ^^^^^ level not mode\n\nThis was my fault...\n\n> > session-1:\n> > insert into f values (1);\n> > --\n> > -- Results? Abort?\n> > --\n> >\n> Goes ok in both isolation levels. Only if session-2 insert is not committed,\n\nWell... Thanks!\nThe problem for us and Oracle: subsequent selects from p in\nsession-1 will not return key 1... So, I would like to know\nwhat Oracle does...\n\nVadim\n",
"msg_date": "Wed, 16 Jun 1999 16:38:42 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Q: RI and isolevels"
}
] |
[
{
"msg_contents": "To have them I need to add tuple id (6 bytes) to heap tuple\nheader. Are there objections? Though it's not good to increase \ntuple header size, subj is, imho, very nice feature...\n\nImplementation is , hm, \"easy\":\n\n- heap_insert/heap_delete/heap_replace/heap_mark4update will\n remember updated tid (and current command id) in relation cache\n and store previously updated tid (remembered in relation cache)\n in additional heap header tid;\n- lmgr will remember command id when lock was acquired;\n- for a savepoint we will just store command id when\n the savepoint was setted;\n- when going to sleep due to concurrent the-same-row update,\n backend will store MyProc and tuple id in shmem hash table.\n\nWhen rolling back to a savepoint, backend will:\n\n- release locks acquired after savepoint;\n- for a relation updated after savepoint, get last updated tid \n from relation cache, walk through relation, set \n HEAP_XMIN_INVALID/HEAP_XMAX_INVALID in all tuples updated \n after savepoint and wake up concurrent writers blocked\n on these tuples (using shmem hash table mentioned above).\n\nThe last feature (waking up of concurrent writers) is most hard\npart to implement. AFAIK, Oracle 7.3 was not able to do it.\nCan someone comment is this feature implemented in Oracle 8.X,\nother DBMSes?\n\nNow about implicit savepoints. Backend will place them before\nuser statements execution. In the case of failure, transaction\nstate will be rolled back to the one before execution of query.\nAs side-effect, this means that we'll get rid of complaints\nabout entire transaction abort in the case of mistyping\ncausing abort due to parser errors...\n\nComments?\n\nVadim\n",
"msg_date": "Wed, 16 Jun 1999 21:12:47 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Savepoints..."
},
{
"msg_contents": "> To have them I need to add tuple id (6 bytes) to heap tuple\n> header. Are there objections? Though it's not good to increase \n> tuple header size, subj is, imho, very nice feature...\n\nGee, that's a lot of overhead. We would go from 40 bytes ->46 bytes.\n\nHow is this different from the tid or oid? Reading your description, I\nsee there probably isn't another way to do it.\n\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 16 Jun 1999 10:00:47 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Savepoints..."
},
{
"msg_contents": "\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Vadim Mikheev\n> Sent: Wednesday, June 16, 1999 10:13 PM\n> To: PostgreSQL Developers List\n> Subject: [HACKERS] Savepoints...\n> \n> \n> To have them I need to add tuple id (6 bytes) to heap tuple\n> header. Are there objections? Though it's not good to increase \n> tuple header size, subj is, imho, very nice feature...\n> \n> Implementation is , hm, \"easy\":\n> \n> - heap_insert/heap_delete/heap_replace/heap_mark4update will\n> remember updated tid (and current command id) in relation cache\n> and store previously updated tid (remembered in relation cache)\n> in additional heap header tid;\n\n> - lmgr will remember command id when lock was acquired;\n\nDoes this mean that many writing commands in a transaction \nrequire many command id-s to remember ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n\n",
"msg_date": "Thu, 17 Jun 1999 12:20:31 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Savepoints..."
},
{
"msg_contents": "Hiroshi Inoue wrote:\n> \n> > - lmgr will remember command id when lock was acquired;\n> \n> Does this mean that many writing commands in a transaction\n> require many command id-s to remember ?\n\nDid you mean such cases:\n\nbegin;\n...\nupdate t set...;\n...\nupdate t set...;\n...\nend;\n\n?\n\nWe'll remember command id for the first \"update t\" only\n(i.e. for the first ROW EXCLUSIVE mode lock over table t).\n\nVadim\n",
"msg_date": "Thu, 17 Jun 1999 11:58:02 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Savepoints..."
},
{
"msg_contents": "\n\n> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On Behalf Of Vadim\n> Mikheev\n> Sent: Thursday, June 17, 1999 12:58 PM\n> To: Hiroshi Inoue\n> Cc: PostgreSQL Developers List\n> Subject: Re: [HACKERS] Savepoints...\n> \n> \n> Hiroshi Inoue wrote:\n> > \n> > > - lmgr will remember command id when lock was acquired;\n> > \n> > Does this mean that many writing commands in a transaction\n> > require many command id-s to remember ?\n> \n> Did you mean such cases:\n>\n\nYes.\n \n> begin;\n> ...\n> update t set...;\n> ...\n> update t set...;\n> ...\n> end;\n> \n> ?\n> \n> We'll remember command id for the first \"update t\" only\n> (i.e. for the first ROW EXCLUSIVE mode lock over table t).\n>\n\nHow to reduce lock counter for ROW EXCLUSIVE mode lock \nover table t?\n\n\nAnd more questions.\n\nHEAP_MARKED_FOR_UPDATE state could be rollbacked ?\n\nFor example\n\n..\n[savepoint 1]\nselect .. from t1 where key=1 for update;\n[savepoint 2]\nselect .. from t1 where key=1 for update;\n[savepoint 3]\nupdate t1 set .. where key=1;\n\nRollback to savepoint 3 OK ?\nRollback to savepoint 2 OK ?\nRollback to savepoint 1 OK ?\n\nRegards.\n\nHiroshi Inoue\[email protected] \n",
"msg_date": "Thu, 17 Jun 1999 13:12:34 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Savepoints..."
},
{
"msg_contents": "Hiroshi Inoue wrote:\n> \n> >\n> > We'll remember command id for the first \"update t\" only\n> > (i.e. for the first ROW EXCLUSIVE mode lock over table t).\n> >\n> \n> How to reduce lock counter for ROW EXCLUSIVE mode lock\n> over table t?\n\nNo reasons to do it for ROW EXCLUSIVE mode lock (backend releases\nsuch locks only when commit/rollback[to savepoint]), but we have to\ndo it in some other cases - when we explicitly release acquired locks \nafter scan/statement is done. And so, you're right: in these cases\nwe have to track lock acquisitions. Well, we'll add new arg to\nLockAcquire (and other funcs; we have to do it anyway to implement \nNO WAIT, WAIT XXX secs locks) to flag lmgr that if the lock counter\nis not 0 (for 0s - i.e. first lock acquisition - command id will be \nremembered by lmgr anyway) than this counter must be preserved in \nimplicit savepoint. In the case of abort lock counters will be restored.\nSpace allocated in implicit savepoint will released.\n\nAll the above will work till there is no UNLOCK statement.\n\nThanks!\n\n> \n> And more questions.\n> \n> HEAP_MARKED_FOR_UPDATE state could be rollbacked ?\n\nYes. FOR UPDATE changes t_xmax and t_cmax.\n\nVadim\n",
"msg_date": "Thu, 17 Jun 1999 13:38:21 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Savepoints..."
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > To have them I need to add tuple id (6 bytes) to heap tuple\n> > header. Are there objections? Though it's not good to increase\n> > tuple header size, subj is, imho, very nice feature...\n> \n> Gee, that's a lot of overhead. We would go from 40 bytes ->46 bytes.\n\n40? offsetof(HeapTupleHeaderData, t_bits) is 31...\n\nWell, seems that we can remove 5 bytes from tuple header.\n\n1. t_hoff (1 byte) may be computed - no reason to store it.\n2. we need in both t_cmin and t_cmax only when tuple is updated\n by the same xaction as it was inserted - in such cases we \n can put delete command id (t_cmax) to t_xmax and set\n flag HEAP_XMAX_THE_SAME (as t_xmin), in all other cases\n we will overwrite insert command id with delete command id\n (no one is interested in t_cmin of committed insert xaction)\n -> yet another 4 bytes (sizeof command id).\n\nIf now we'll add 6 bytes to header then \noffsetof(HeapTupleHeaderData, t_bits) will be 32 and for\nno-nulls tuples there will be no difference at all\n(with/without additional 6 bytes), due to double alignment\nof header. So, the choice is: new feature or more compact\n(than current) header for tuples with nulls.\n\n> \n> How is this different from the tid or oid? Reading your description, I\n\nt_ctid could be used but would require additional disk write.\n\n> see there probably isn't another way to do it.\n\nThere is one - WAL. I'm thinking about it, but it's too long story -:)\n\nBTW, additional tid in header would allow us to implement\nRI/U constraints without rules: knowing what tuples were changed\nwe could just read these tuples and perform checks. This would be\nfaster and don't require to store deffered rule plans in memory.\n\nI'm still like the idea of deffered rules, Jan - they allow\nto implement much more complex constraints than RI/U ones.\nThough, did you think about [deffered] statement level triggers \nimplementation, Jan? 
You are the best one who could make it, \nbecause of they are children of overwrite system and PL.\n\nVadim\n",
"msg_date": "Thu, 17 Jun 1999 15:50:42 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Savepoints..."
},
{
"msg_contents": "> Bruce Momjian wrote:\n> > \n> > > To have them I need to add tuple id (6 bytes) to heap tuple\n> > > header. Are there objections? Though it's not good to increase\n> > > tuple header size, subj is, imho, very nice feature...\n> > \n> > Gee, that's a lot of overhead. We would go from 40 bytes ->46 bytes.\n> \n> 40? offsetof(HeapTupleHeaderData, t_bits) is 31...\n\nYes, I saw this. I even updated the FAQ to show a 32-byte overhead.\n\n> Well, seems that we can remove 5 bytes from tuple header.\n\nI was hoping you could do something like this.\n\n> 1. t_hoff (1 byte) may be computed - no reason to store it.\n\nYes.\n\n> 2. we need in both t_cmin and t_cmax only when tuple is updated\n> by the same xaction as it was inserted - in such cases we \n> can put delete command id (t_cmax) to t_xmax and set\n> flag HEAP_XMAX_THE_SAME (as t_xmin), in all other cases\n> we will overwrite insert command id with delete command id\n> (no one is interested in t_cmin of committed insert xaction)\n> -> yet another 4 bytes (sizeof command id).\n\nGood.\n\n> \n> If now we'll add 6 bytes to header then \n> offsetof(HeapTupleHeaderData, t_bits) will be 32 and for\n> no-nulls tuples there will be no difference at all\n> (with/without additional 6 bytes), due to double alignment\n> of header. So, the choice is: new feature or more compact\n> (than current) header for tuples with nulls.\n\nThat's a tough one. What do other DB's have for row overhead?\n\n> > How is this different from the tid or oid? Reading your description, I\n> \n> t_ctid could be used but would require additional disk write.\n\nOK, I understand.\n\n> \n> > see there probably isn't another way to do it.\n> \n> There is one - WAL. I'm thinking about it, but it's too long story -:)\n\nOK.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 17 Jun 1999 09:41:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Savepoints..."
},
{
"msg_contents": "istm that this discussion and the one on the 1GB limit on table\nsegments could form the basis for a missing chapter on \"Data Storage\"\nin the Admin Guide. Would someone (other than Vadim, who we need to\nkeep coding! :) please keep following this and related threads and\nextract the info for the Admin Guide chapter? It doesn't need to be\nvery long, perhaps just suggesting how to calculate table storage\nsize, discussing upper limits (e.g. 32-bit OID), and describing the\ntable segmentation scheme. There is already a chapter (with more\ndetail than the AG needs) in the Developer's Guide which should be\nupdated too.\n\nAnyway, both chapters are enclosed; the originals are also in\n doc/src/sgml/{storage,page}.sgml)\nAll we really need is the info, and I can do the markup if whoever\npicks this up doesn't feel comfortable with trying the SGML markup.\n\nVolunteers appreciated...\n\n - Thomas\n\n> > > To have them I need to add tuple id (6 bytes) to heap tuple\n> > > header. Are there objections? Though it's not good to increase\n> > > tuple header size, subj is, imho, very nice feature...\n> > Gee, that's a lot of overhead. We would go from 40 bytes ->46 bytes.\n> 40? offsetof(HeapTupleHeaderData, t_bits) is 31...\n> Well, seems that we can remove 5 bytes from tuple header.\n> 1. t_hoff (1 byte) may be computed - no reason to store it.\n> 2. 
we need in both t_cmin and t_cmax only when tuple is updated\n> by the same xaction as it was inserted - in such cases we\n> can put delete command id (t_cmax) to t_xmax and set\n> flag HEAP_XMAX_THE_SAME (as t_xmin), in all other cases\n> we will overwrite insert command id with delete command id\n> (no one is interested in t_cmin of committed insert xaction)\n> -> yet another 4 bytes (sizeof command id).\n> If now we'll add 6 bytes to header then\n> offsetof(HeapTupleHeaderData, t_bits) will be 32 and for\n> no-nulls tuples there will be no difference at all\n> (with/without additional 6 bytes), due to double alignment\n> of header. So, the choice is: new feature or more compact\n> (than current) header for tuples with nulls.\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California",
"msg_date": "Sun, 20 Jun 1999 00:45:15 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Info on Data Storage"
}
] |
[
{
"msg_contents": "DECIMA/NUMERCI Report bug:\n\nCREATE TABLE Test (num DEC(7,2), flt8 FLOAT(15));\nERROR: Unable to locate type name 'dec' in catalog\n\n The required syntax for DECIMAL is:\n DEC[IMAL] [ ( precision [ ,scale ] ) ]\n\n from SQL/92 draft:\n <exact numeric type> ::=\n NUMERIC [ <left paren> <precision> [ <comma> <scale> ]\n<right pa\n | DECIMAL [ <left paren> <precision> [ <comma> <scale> ]\n<right pa\n | DEC [ <left paren> <precision> [ <comma> <scale> ]\n<right paren>\n\n Remarks: DECIMAL can be abbreviated as DEC.\n\n\nCREATE TABLE Test (num DECimal(7,2), flt8 FLOAT(15));\nCREATE\nINSERT INTO Test VALUES (1,1);\nINSERT 207242 1\nINSERT INTO Test VALUES (2.343,2.343);\nINSERT 207243 1\nINSERT INTO Test VALUES (-3.0,-3.0);\nINSERT 207244 1\nselect * from test;\n num| flt8\n-----+-----\n 1.00| 1\n 2.34|2.343\n-3.00| -3\n(3 rows)\n\n\n--numeric and decimal doesn't support arithmetic operations with\nfloats...\n\nSELECT num-flt8 FROM Test;\nERROR: Unable to identify an operator '-' for types 'numeric' and\n'float8'\n You will have to retype this query using an explicit cast\nSELECT num+flt8 FROM Test;\nERROR: Unable to identify an operator '+' for types 'numeric' and\n'float8'\n You will have to retype this query using an explicit cast\nSELECT num*flt8 FROM Test;\nERROR: Unable to identify an operator '*' for types 'numeric' and\n'float8'\n You will have to retype this query using an explicit cast\nSELECT num/flt8 FROM Test;\nERROR: Unable to identify an operator '/' for types 'numeric' and\n'float8'\n You will have to retype this query using an explicit cast\nSELECT * FROM Test WHERE num < flt8;\nERROR: Unable to identify an operator '<' for types 'numeric' and\n'float8'\n You will have to retype this query using an explicit cast\n\n\n--create operator doesn't know numeric/decimal type:\n\ncreate operator < (\n leftarg=numeric,\n rightarg=float8,\n procedure=dec_float8_lt\n );\nERROR: parser: parse error at or near 
\"numeric\"\n--\n___________________________________________________________________\nPostgreSQL 6.5.0 on i586-pc-linux-gnulibc1, compiled by gcc 2.7.2.1\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nJose'\n\n\n\nDECIMA/NUMERCI Report bug:\nCREATE TABLE Test (num DEC(7,2), flt8 FLOAT(15));\nERROR: Unable to locate type name 'dec' in catalog\n The required syntax for DECIMAL is:\n DEC[IMAL] [ ( precision [ ,scale ] ) ]\n from SQL/92 draft:\n <exact numeric type>\n::=\n \nNUMERIC [ <left paren> <precision> [ <comma> <scale> ] <right\npa\n \n| DECIMAL [ <left paren> <precision> [ <comma> <scale> ] <right\npa\n \n| DEC [ <left paren> <precision> [ <comma> <scale> ] <right\nparen>\n Remarks: DECIMAL\ncan be abbreviated as DEC.\n \nCREATE TABLE Test (num DECimal(7,2), flt8 FLOAT(15));\nCREATE\nINSERT INTO Test VALUES (1,1);\nINSERT 207242 1\nINSERT INTO Test VALUES (2.343,2.343);\nINSERT 207243 1\nINSERT INTO Test VALUES (-3.0,-3.0);\nINSERT 207244 1\nselect * from test;\n num| flt8\n-----+-----\n 1.00| 1\n 2.34|2.343\n-3.00| -3\n(3 rows)\n \n--numeric and decimal doesn't support arithmetic operations with\nfloats...\nSELECT num-flt8 FROM Test;\nERROR: Unable to identify an operator '-' for types 'numeric'\nand 'float8'\n You will have to retype\nthis query using an explicit cast\nSELECT num+flt8 FROM Test;\nERROR: Unable to identify an operator '+' for types 'numeric'\nand 'float8'\n You will have to retype\nthis query using an explicit cast\nSELECT num*flt8 FROM Test;\nERROR: Unable to identify an operator '*' for types 'numeric'\nand 'float8'\n You will have to retype\nthis query using an explicit cast\nSELECT num/flt8 FROM Test;\nERROR: Unable to identify an operator '/' for types 'numeric'\nand 'float8'\n You will have to retype\nthis query using an explicit cast\nSELECT * FROM Test WHERE num < flt8;\nERROR: Unable to identify an operator '<' for types 'numeric'\nand 'float8'\n You will have to retype\nthis query using an explicit cast\n 
\n--create operator doesn't know numeric/decimal type:\ncreate operator < (\n leftarg=numeric,\n rightarg=float8,\n procedure=dec_float8_lt\n );\nERROR: parser: parse error at or near \"numeric\"\n--\n___________________________________________________________________\nPostgreSQL 6.5.0 on i586-pc-linux-gnulibc1, compiled by gcc 2.7.2.1\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nJose'",
"msg_date": "Wed, 16 Jun 1999 15:37:17 +0200",
"msg_from": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]>",
"msg_from_op": true,
"msg_subject": "decimal & numeric report bug"
},
{
"msg_contents": "Thanks for the report Jose'. Jan, can I help with this? A few of the\nitems could be fixed for v6.5.1, while others which touch system\ntables should wait until v6.6...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 16 Jun 1999 14:25:16 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] decimal & numeric report bug"
}
] |
[
{
"msg_contents": "* subqueries containing HAVING return incorrect results\n\nselect istat from comuni where istat in (\nselect istat from comuni group by istat having count(istat) > 1\n);\nERROR: rewrite: aggregate column of view must be at rigth side in qual\n\nselect istat from comuni where istat in (\nselect istat from comuni group by istat having 1 < count(istat)\n);\nERROR: pull_var_clause: Cannot handle node type 108\n\n______________________________________________________________\nPostgreSQL 6.5.0 on i586-pc-linux-gnu, compiled by gcc 2.7.2.3\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nJose'\n\n\n",
"msg_date": "Wed, 16 Jun 1999 15:54:44 +0200",
"msg_from": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]>",
"msg_from_op": true,
"msg_subject": "having bug report"
},
{
"msg_contents": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]> writes:\n> * subqueries containing HAVING return incorrect results\n> select istat from comuni where istat in (\n> select istat from comuni group by istat having count(istat) > 1\n> );\n> ERROR: rewrite: aggregate column of view must be at rigth side in qual\n> select istat from comuni where istat in (\n> select istat from comuni group by istat having 1 < count(istat)\n> );\n> ERROR: pull_var_clause: Cannot handle node type 108\n\nThese are both known problems (at least, I had both in my todo list).\n\nThe first one appears to be a rewriter bug --- it seems to want to\nimplement count(istat) as a second nested sublink, and then it falls\nover because it doesn't handle \"subselect op something\" as opposed to\n\"something op subselect\". But pushing count(istat) into a subselect\nis not merely inefficient, it's *wrong* in this case because then the\ngroup by won't affect it.\n\nThe second one is a problem in the planner/optimizer; it falls over on\nsublinks in HAVING clauses (of course, this particular example wouldn't\ntrigger the problem were it not for the upstream rewriter bug, but it's\nstill a planner bug). I think union_planner's handling of sublinks\nneeds considerable work, but was putting it off till after 6.5.\n\nI will work on the second problem; I think the first one is in Jan's\nturf...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 16 Jun 1999 11:06:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] having bug report "
},
{
"msg_contents": "Sorry. I re-sent this message because I don't see it in TODO file and I\nthougth it was fixed.\n\nTom Lane ha scritto:\n\n> =?iso-8859-1?Q?Jos=E9?= Soares <[email protected]> writes:\n> > * subqueries containing HAVING return incorrect results\n> > select istat from comuni where istat in (\n> > select istat from comuni group by istat having count(istat) > 1\n> > );\n> > ERROR: rewrite: aggregate column of view must be at rigth side in qual\n> > select istat from comuni where istat in (\n> > select istat from comuni group by istat having 1 < count(istat)\n> > );\n> > ERROR: pull_var_clause: Cannot handle node type 108\n>\n> These are both known problems (at least, I had both in my todo list).\n>\n> The first one appears to be a rewriter bug --- it seems to want to\n> implement count(istat) as a second nested sublink, and then it falls\n> over because it doesn't handle \"subselect op something\" as opposed to\n> \"something op subselect\". But pushing count(istat) into a subselect\n> is not merely inefficient, it's *wrong* in this case because then the\n> group by won't affect it.\n>\n> The second one is a problem in the planner/optimizer; it falls over on\n> sublinks in HAVING clauses (of course, this particular example wouldn't\n> trigger the problem were it not for the upstream rewriter bug, but it's\n> still a planner bug). I think union_planner's handling of sublinks\n> needs considerable work, but was putting it off till after 6.5.\n>\n> I will work on the second problem; I think the first one is in Jan's\n> turf...\n>\n> regards, tom lane\n\nJose'",
"msg_date": "Wed, 16 Jun 1999 17:44:48 +0200",
"msg_from": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] having bug report"
},
{
"msg_contents": "\nAny comments on that status of this one?\n\n> * subqueries containing HAVING return incorrect results\n> \n> select istat from comuni where istat in (\n> select istat from comuni group by istat having count(istat) > 1\n> );\n> ERROR: rewrite: aggregate column of view must be at rigth side in qual\n> \n> select istat from comuni where istat in (\n> select istat from comuni group by istat having 1 < count(istat)\n> );\n> ERROR: pull_var_clause: Cannot handle node type 108\n> \n> ______________________________________________________________\n> PostgreSQL 6.5.0 on i586-pc-linux-gnu, compiled by gcc 2.7.2.3\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> Jose'\n> \n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 29 Nov 1999 17:27:30 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] having bug report"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Any comments on that status of this one?\n\nThose particular cases are fixed, I think, but there are still severe\nproblems with VIEWs that use grouping or aggregates. I doubt we can\nimprove the VIEW situation much more without subselects-in-FROM.\n\n\t\t\tregards, tom lane\n\n\n>> * subqueries containing HAVING return incorrect results\n>> \n>> select istat from comuni where istat in (\n>> select istat from comuni group by istat having count(istat) > 1\n>> );\n>> ERROR: rewrite: aggregate column of view must be at rigth side in qual\n>> \n>> select istat from comuni where istat in (\n>> select istat from comuni group by istat having 1 < count(istat)\n>> );\n>> ERROR: pull_var_clause: Cannot handle node type 108\n",
"msg_date": "Mon, 29 Nov 1999 21:15:22 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] having bug report "
}
] |
[
{
"msg_contents": "What does the spec have to say? It bothers me somewhat that an AVG is\nexpected to return an integer result at all. Isn't the Average of 1 and 2,\n1.5 not 1?\n\njust my $0.02,\n\t-DEJ\n\n> -----Original Message-----\n> From:\tTom Lane [SMTP:[email protected]]\n> Sent:\tWednesday, June 16, 1999 9:52 AM\n> To:\tThomas Lockhart\n> Cc:\tGene Sokolov; [email protected]\n> Subject:\tRe: [HACKERS] 6.5.0 - Overflow bug in AVG( ) \n> \n> Thomas Lockhart <[email protected]> writes:\n> >> Some broader solution should be considered though if you\n> >> want AVG to work on numeric/decimal as well.\n> \n> > The implementation can be specified for each datatype individually,\n> \n> In the current implementation, each datatype does use its own type as\n> the accumulator --- and also as the counter. float8 and numeric are\n> fine, float4 is sort of OK (a float8 accumulator would be better for\n> accuracy reasons), int4 loses, int2 loses *bad*.\n> \n> To fix it we'd need to invent operators that do the appropriate cross-\n> data-type operations. For example, int4 avg using float8 accumulator\n> would need \"float8 + int4 yielding float8\" and \"float8 / int4 yielding\n> int4\", neither of which are to be found in pg_proc at the moment. But\n> it's a straightforward thing to do.\n> \n> int8 is the only integer type that I wouldn't want to use a float8\n> accumulator for. Maybe numeric would be the appropriate thing here,\n> slow though it be.\n> \n> Note that switching over to float accumulation would *not* be real\n> palatable until we have fixed the memory-leak issue. avg() on int4\n> doesn't leak memory currently, but it would with a float accumulator...\n> \n> \t\t\tregards, tom lane\n",
"msg_date": "Wed, 16 Jun 1999 10:21:28 -0500",
"msg_from": "\"Jackson, DeJuan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] 6.5.0 - Overflow bug in AVG( ) "
},
{
"msg_contents": "> What does the spec have to say? It bothers me somewhat that an AVG is\n> expected to return an integer result at all. Isn't the Average of 1 and 2,\n> 1.5 not 1?\n\nYeah, well, it's a holdover from the original Postgres code. We just\nhaven't made an effort to change it yet, but it seems a good candidate\nfor a makeover, no?\n\nI'm pretty sure that the spec would suggest a float8 return value for\navg(int), but I haven't looked recently to refresh my memory.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 16 Jun 1999 15:29:58 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.5.0 - Overflow bug in AVG( )"
},
{
"msg_contents": "\"Jackson, DeJuan\" <[email protected]> writes:\n> What does the spec have to say? It bothers me somewhat that an AVG is\n> expected to return an integer result at all. Isn't the Average of 1\n> and 2, 1.5 not 1?\n\nThat bothered me too. The draft spec that I have sez:\n\n b) If SUM is specified and DT is exact numeric with scale\n S, then the data type of the result is exact numeric with\n implementation-defined precision and scale S.\n\n c) If AVG is specified and DT is exact numeric, then the data\n type of the result is exact numeric with implementation-\n defined precision not less than the precision of DT and\n implementation-defined scale not less than the scale of DT.\n\n d) If DT is approximate numeric, then the data type of the\n result is approximate numeric with implementation-defined\n precision not less than the precision of DT.\n\n 65)Subclause 6.5, \"<set function specification>\": The precision of\n the value derived from application of the SUM function to a data\n type of exact numeric is implementation-defined.\n\n 66)Subclause 6.5, \"<set function specification>\": The precision and\n scale of the value derived from application of the AVG function\n to a data type of exact numeric is implementation-defined.\n\n 67)Subclause 6.5, \"<set function specification>\": The preci-\n sion of the value derived from application of the SUM func-\n tion or AVG function to a data type of approximate numeric is\n implementation-defined.\n\n\nThis would seem to give license for the result of AVG() on an int4 field\nto be NUMERIC with a fraction part, but not FLOAT. But I suspect we\ncould get away with making it be FLOAT anyway. Anyone know what other\ndatabases do?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 16 Jun 1999 11:30:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.5.0 - Overflow bug in AVG( ) "
},
{
"msg_contents": "> This would seem to give license for the result of AVG() on an int4 field\n> to be NUMERIC with a fraction part, but not FLOAT. But I suspect we\n> could get away with making it be FLOAT anyway.\n\nSure, that can't be worse in practice than what we do now. But it is\ninteresting that we are currently SQL92 conforming (except for that\nnasty overflow business; they probably don't mention that ;).\n\nFor int2/int4, we could bump the accumulator to int8 (certainly faster\nthan our numeric implementation?), but there are a very few platforms\nwhich don't support int8 and we shouldn't break the aggregates for\nthem. We could get around that by defining explicit routines to be\nused in the aggregates, and then having some #ifdef alternate code if\nint8 is not available...\n\nTom, do you think that a hack in the aggregate support code which\ncompares the pointer returned to the pointer input, then pfree'ing the\ninput area if they differ, would fix the major leakage? We could even\nhave a backend global variable which enables/disables the feature to\nallow performance tuning.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 16 Jun 1999 15:47:04 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.5.0 - Overflow bug in AVG( )"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> For int2/int4, we could bump the accumulator to int8 (certainly faster\n> than our numeric implementation?), but there are a very few platforms\n> which don't support int8 and we shouldn't break the aggregates for\n> them.\n\nRight, that's why I preferred the idea of using float8.\n\nNote that any reasonable floating-point implementation will deliver an\nexact result for the sum of integer inputs, up to the point at which the\nsum exceeds the number of mantissa bits in a float (2^52 or so in IEEE\nfloat8). After that you start to lose accuracy. Using int8 would give\nan exact sum up to 2^63, but if we want to start delivering a fractional\naverage then float still looks like a better deal...\n\n> Tom, do you think that a hack in the aggregate support code which\n> compares the pointer returned to the pointer input, then pfree'ing the\n> input area if they differ, would fix the major leakage?\n\nYeah, that would probably work OK, although you'd have to be careful of\nthe initial condition --- is the initial value always safely pfreeable?\n\n> We could even have a backend global variable which enables/disables\n> the feature to allow performance tuning.\n\nSeems unnecessary.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 16 Jun 1999 12:08:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.5.0 - Overflow bug in AVG( ) "
},
{
"msg_contents": "PostgreSQL:\n^^^^^^^^^^^\nprova=> select min(a), max(a), avg(a) from aa;\nmin|max|avg\n---+---+---\n 1| 2| 1\n(1 row)\n\n\ninformix:----------- hygea@hygea ------------ Press CTRL-W for Help --------\n\n^^^^^^^^^\n (min) (max) (avg)\n\n 1 2 1.50000000000000\n\noracle:\n^^^^^^^\nSQL> select min(a), max(a), avg(a) from aa;\n\n MIN(A) MAX(A) AVG(A)\n---------- ---------- ----------\n 1 2 1.5\n\n\n\n\nTom Lane ha scritto:\n\n> \"Jackson, DeJuan\" <[email protected]> writes:\n> > What does the spec have to say? It bothers me somewhat that an AVG is\n> > expected to return an integer result at all. Isn't the Average of 1\n> > and 2, 1.5 not 1?\n>\n> That bothered me too. The draft spec that I have sez:\n>\n> b) If SUM is specified and DT is exact numeric with scale\n> S, then the data type of the result is exact numeric with\n> implementation-defined precision and scale S.\n>\n> c) If AVG is specified and DT is exact numeric, then the data\n> type of the result is exact numeric with implementation-\n> defined precision not less than the precision of DT and\n> implementation-defined scale not less than the scale of DT.\n>\n> d) If DT is approximate numeric, then the data type of the\n> result is approximate numeric with implementation-defined\n> precision not less than the precision of DT.\n>\n> 65)Subclause 6.5, \"<set function specification>\": The precision of\n> the value derived from application of the SUM function to a data\n> type of exact numeric is implementation-defined.\n>\n> 66)Subclause 6.5, \"<set function specification>\": The precision and\n> scale of the value derived from application of the AVG function\n> to a data type of exact numeric is implementation-defined.\n>\n> 67)Subclause 6.5, \"<set function specification>\": The preci-\n> sion of the value derived from application of the SUM func-\n> tion or AVG function to a data type of approximate numeric is\n> implementation-defined.\n>\n> This would seem to give license for the result of AVG() on an int4 field\n> to be NUMERIC with a fraction part, but not FLOAT. But I suspect we\n> could get away with making it be FLOAT anyway. Anyone know what other\n> databases do?\n>\n> regards, tom lane\n\n______________________________________________________________\nPostgreSQL 6.5.0 on i586-pc-linux-gnu, compiled by gcc 2.7.2.3\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nJose'",
"msg_date": "Wed, 16 Jun 1999 18:12:11 +0200",
"msg_from": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.5.0 - Overflow bug in AVG( )"
},
{
"msg_contents": "On Wed, 16 Jun 1999, [iso-8859-1] Jos� Soares wrote:\n\n> PostgreSQL:\n> ^^^^^^^^^^^\n> prova=> select min(a), max(a), avg(a) from aa;\n> min|max|avg\n> ---+---+---\n> 1| 2| 1\n> (1 row)\n> \n> \n\nSybase - I'm guessing/ass-u-me ing it's around version 4.9\n\n1> select min(a), max(a), avg(a) from aa\n2> go\n \n ----------- ----------- ----------- \n 1 2 1 \n\n(1 row affected)\n1> \n\n> > This would seem to give license for the result of AVG() on an int4 field\n> > to be NUMERIC with a fraction part, but not FLOAT. But I suspect we\n> > could get away with making it be FLOAT anyway. Anyone know what other\n> > databases do?\n\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 17 Jun 1999 06:47:11 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.5.0 - Overflow bug in AVG( )"
}
] |
[
{
"msg_contents": "I see there's still the command:\n\nSET QUERY_LIMIT (even in the docs)\nbut it doesn't work.\nprova=> select * from test limit 1;\n num|flt8\n----+----\n1.00| 1\n(1 row)\n\nprova=>\nprova=> set query_limit = '1';\nSET VARIABLE\nprova=> select * from test;\n num| flt8\n-----+-----\n 1.00| 1\n 2.34|2.343\n-3.00| -3\n(3 rows)\n\nprova=> show query_limit;\nNOTICE: query limit is 1\nSHOW VARIABLE\n\n--\n______________________________________________________________\nPostgreSQL 6.5.0 on i586-pc-linux-gnu, compiled by gcc 2.7.2.3\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nJose'\n\n\n",
"msg_date": "Wed, 16 Jun 1999 17:28:39 +0200",
"msg_from": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]>",
"msg_from_op": true,
"msg_subject": "SET QUERY_LIMIT bug report"
},
{
"msg_contents": "> I see there's still the command:\n> \n> SET QUERY_LIMIT (even in the docs)\n> but it doesn't work.\n> prova=> select * from test limit 1;\n> num|flt8\n> ----+----\n> 1.00| 1\n> (1 row)\n> \n\nYep, broken:\n\t\n\ttest=> set query_limit = '1';\n\tSET VARIABLE\n\ttest=> select * from pg_language;\n\tlanname |lanispl|lanpltrusted|lanplcallfoid|lancompiler \n\t--------+-------+------------+-------------+--------------\n\tinternal|f |f | 0|n/a \n\tlisp |f |f | 0|/usr/ucb/liszt\n\tC |f |f | 0|/bin/cc \n\tsql |f |f | 0|postgres \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 16 Jun 1999 11:40:24 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SET QUERY_LIMIT bug report"
},
{
"msg_contents": "I had the idea we were going to remove QUERY_LIMIT now that we have\nthe LIMIT clause? There were good arguments advanced that QUERY_LIMIT\nis actually dangerous, since it could (for example) prevent trigger\nrules from operating as intended.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 16 Jun 1999 11:56:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SET QUERY_LIMIT bug report "
},
{
"msg_contents": "> I had the idea we were going to remove QUERY_LIMIT now that we have\n> the LIMIT clause? There were good arguments advanced that QUERY_LIMIT\n> is actually dangerous, since it could (for example) prevent trigger\n> rules from operating as intended.\n> \n> \t\t\tregards, tom lane\n> \n\n\nOK. Easily removed, especially since it doesn't work.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 16 Jun 1999 11:58:57 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SET QUERY_LIMIT bug report"
},
{
"msg_contents": ">I had the idea we were going to remove QUERY_LIMIT now that we have\n>the LIMIT clause? There were good arguments advanced that QUERY_LIMIT\n>is actually dangerous, since it could (for example) prevent trigger\n>rules from operating as intended.\n\nYes. We should remove set query_limit before 6.5.1...\n--\nTatsuo Ishii\n",
"msg_date": "Thu, 17 Jun 1999 07:01:04 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SET QUERY_LIMIT bug report "
},
{
"msg_contents": "> >I had the idea we were going to remove QUERY_LIMIT now that we have\n> >the LIMIT clause? There were good arguments advanced that QUERY_LIMIT\n> >is actually dangerous, since it could (for example) prevent trigger\n> >rules from operating as intended.\n> \n> Yes. We should remove set query_limit before 6.5.1...\n> --\n> Tatsuo Ishii\n> \n\nAgreed.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 16 Jun 1999 18:03:17 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SET QUERY_LIMIT bug report"
},
{
"msg_contents": "Bruce Momjian ha scritto:\n\n> > I had the idea we were going to remove QUERY_LIMIT now that we have\n> > the LIMIT clause? There were good arguments advanced that QUERY_LIMIT\n> > is actually dangerous, since it could (for example) prevent trigger\n> > rules from operating as intended.\n> >\n> > regards, tom lane\n> >\n>\n> OK. Easily removed, especially since it doesn't work.\n>\n\nWe need to remove it even from docs.\n(psql help, user guide and man pages)\n\nJose'\n\n\n",
"msg_date": "Thu, 17 Jun 1999 15:14:48 +0200",
"msg_from": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] SET QUERY_LIMIT bug report"
},
{
"msg_contents": "> Bruce Momjian ha scritto:\n> \n> > > I had the idea we were going to remove QUERY_LIMIT now that we have\n> > > the LIMIT clause? There were good arguments advanced that QUERY_LIMIT\n> > > is actually dangerous, since it could (for example) prevent trigger\n> > > rules from operating as intended.\n> > >\n> > > regards, tom lane\n> > >\n> >\n> > OK. Easily removed, especially since it doesn't work.\n> >\n> \n> We need to remove it even from docs.\n> (psql help, user guide and man pages)\n\nYes, that will be done too.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 17 Jun 1999 09:48:34 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SET QUERY_LIMIT bug report"
}
] |
[
{
"msg_contents": "Hi,\n\n\there is a new version of the bitmask type. It supports hash-indices as\nwell now, and fixes a bug in the definition of the <> operator.\n\nI would appreciate it if somebody more knowledgable than myself would\nlook over the index definitions. They seem to work and are used by\npostgres, so I guess they can't be all wrong. The hashing function is\nthe same as that for char's and comes straight out of the postgres\nsource code.\n\nBTW, chapter 36 of the documentation could do with some additions, but I\ndon't feel knowledgable enough to attempt it. E.g. it shows how to put\nan entry for the hashing into pg_amop, but never explains how to define\nthe entry in pg_amproc and doesn't tell you that you need to define a\nseparate hashing function. It took me a while of looking through the\nother definitions and digging through the source code to come up with a\nbest guess.\n\nPerhaps this could go into the contrib area if it passes muster, as it\nis an example of a user-defined type with indices.\n\nCheers,\n\nAdriaan",
"msg_date": "Wed, 16 Jun 1999 18:38:02 +0300",
"msg_from": "Adriaan Joubert <[email protected]>",
"msg_from_op": true,
"msg_subject": "Update of bitmask type"
},
{
"msg_contents": "Can I get comments on this? Is a bit type something we want installed\nby default, or in contrib? Seems to me it should be in the main tree.\n\n\n> Hi,\n> \n> \there is a new version of the bitmask type. It supports hash-indices as\n> well now, and fixes a bug in the definition of the <> operator.\n> \n> I would appreciate it if somebody more knowledgable than myself would\n> look over the index definitions. They seem to work and are used by\n> postgres, so I guess they can't be all wrong. The hashing function is\n> the same as that for char's and comes straight out of the postgres\n> source code.\n> \n> BTW, chapter 36 of the documentation could do with some additions, but I\n> don't feel knowledgable enough to attempt it. E.g. it shows how to put\n> an entry for the hashing into pg_amop, but never explains how to define\n> the entry in pg_amproc and doesn't tell you that you need to define a\n> separate hashing function. It took me a while of looking through the\n> other definitions and digging through the source code to come up with a\n> best guess.\n> \n> Perhaps this could go into the contrib area if it passes muster, as it\n> is an example of a user-defined type with indices.\n> \n> Cheers,\n> \n> Adriaan\n\n[application/x-gzip is not supported, skipping...]\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 21 Sep 1999 17:00:13 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Update of bitmask type"
},
{
"msg_contents": "On Tue, 21 Sep 1999, Bruce Momjian wrote:\n\n> Can I get comments on this? Is a bit type something we want installed\n> by default, or in contrib? Seems to me it should be in the main tree.\n\nfirst...what is a bitmask type? :)\n\n> > Hi,\n> > \n> > \there is a new version of the bitmask type. It supports hash-indices as\n> > well now, and fixes a bug in the definition of the <> operator.\n> > \n> > I would appreciate it if somebody more knowledgable than myself would\n> > look over the index definitions. They seem to work and are used by\n> > postgres, so I guess they can't be all wrong. The hashing function is\n> > the same as that for char's and comes straight out of the postgres\n> > source code.\n> > \n> > BTW, chapter 36 of the documentation could do with some additions, but I\n> > don't feel knowledgable enough to attempt it. E.g. it shows how to put\n> > an entry for the hashing into pg_amop, but never explains how to define\n> > the entry in pg_amproc and doesn't tell you that you need to define a\n> > separate hashing function. It took me a while of looking through the\n> > other definitions and digging through the source code to come up with a\n> > best guess.\n> > \n> > Perhaps this could go into the contrib area if it passes muster, as it\n> > is an example of a user-defined type with indices.\n> > \n> > Cheers,\n> > \n> > Adriaan\n> \n> [application/x-gzip is not supported, skipping...]\n> \n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ************\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Wed, 22 Sep 1999 02:46:46 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] Update of bitmask type"
},
{
"msg_contents": "The Hermit Hacker wrote:\n> \n> On Tue, 21 Sep 1999, Bruce Momjian wrote:\n> \n> > Can I get comments on this? Is a bit type something we want installed\n> > by default, or in contrib? Seems to me it should be in the main tree.\n> \n> first...what is a bitmask type? :)\n\nIn this case just a single byte in which you can store states quite\neasily. It supports the C-style bit operations & (and) ,| (or, couldn't\nget it defined as a single bar though, because the parser didn't like\nit) ,^ (xor),! (not). For some applications it is just easier to check\nwhether certain bits are set/not set. \n\nIf somebody tells me what needs doing, I could try to get it all into a\nmore usable format. And I have no clue what SQL3 says about bit-types\n(varying bits or something or other?) At the moment it is just a single\nbyte, and perhaps it needs extension to 2 byte, 4-byte types. \n\nAdriaan\n",
"msg_date": "Wed, 22 Sep 1999 09:20:50 +0300",
"msg_from": "Adriaan Joubert <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] Update of bitmask type"
},
{
"msg_contents": "> > > Can I get comments on this? Is a bit type something we want installed\n> > > by default, or in contrib? Seems to me it should be in the main tree.\n> In this case just a single byte in which you can store states quite\n> easily. It supports the C-style bit operations & (and) ,| (or, couldn't\n> get it defined as a single bar though, because the parser didn't like\n> it) ,^ (xor),! (not). For some applications it is just easier to check\n> whether certain bits are set/not set.\n\nAs long as it is limited to a single byte, perhaps it should prove\nitself in contrib. However, SQL92 has bit types, and it would be nice\nto get full support for them (and beyond, as this already is doing :)\n\n> If somebody tells me what needs doing, I could try to get it all into a\n> more usable format. And I have no clue what SQL3 says about bit-types\n> (varying bits or something or other?) At the moment it is just a single\n> byte, and perhaps it needs extension to 2 byte, 4-byte types.\n\nI don't have time right now to type up a short summary, but can do\nthat later if you like. But the data entry for an SQL92 bit type looks\nlike\n\n B'10111'\n X'17'\n\nThe underlying data type is BIT(n), a fixed-length type where n is the\nexact number of bits. BIT VARYING (n) allows a variable number of bits\n(duh!) up to n bits. We can support these SQL92 constructs in the\nparser, folding them into an internal type as we do for character\nstrings currently.\n\nIt could be implemented just like the character types, having a header\non the internal representation which holds the length. It can't re-use\nthe character type support functions as-is, since they currently\nconsider a zero byte in the string as end-of-string.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 22 Sep 1999 15:32:02 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] Update of bitmask type"
},
{
"msg_contents": "> \n> I don't have time right now to type up a short summary, but can do\n> that later if you like. But the data entry for an SQL92 bit type looks\n> like\n> \n> B'10111'\n> X'17'\n> \n> The underlying data type is BIT(n), a fixed-length type where n is the\n> exact number of bits. BIT VARYING (n) allows a variable number of bits\n> (duh!) up to n bits. We can support these SQL92 constructs in the\n> parser, folding them into an internal type as we do for character\n> strings currently.\n> \n> It could be implemented just like the character types, having a header\n> on the internal representation which holds the length. It can't re-use\n> the character type support functions as-is, since they currently\n> consider a zero byte in the string as end-of-string.\n\n\nOK, I'll have a go at this as I get a chance. If somebody has the SQL\nstandard on line and could send me the appropriate sections I would\nappreciate it.\n\nAs I know very little about the postgres internals I would also\nappreciate a short roadmap as to what needs to be done where, i.e. does\nthe parser need to be changed, and where the files are /new files hsould\ngo that I need to update. If this is somewhere in the docs please point\nme to it.\n\nWhat I've found upto now is\n\nbackend/utils/adt/varlena.c\nbackend/utils/adt/varchar.c\n\nwhich I will use as starting point?\n\nI found the file src/backend/lib/bit.c (Bruce's according to the log\nmessage). Has that got anything to do with bit arrays?\n\nCheers,\n\nAdriaan\n",
"msg_date": "Thu, 23 Sep 1999 11:28:00 +0300",
"msg_from": "Adriaan Joubert <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] Update of bitmask type"
},
{
"msg_contents": "> OK, I'll have a go at this as I get a chance. If somebody has the SQL\n> standard on line and could send me the appropriate sections I would\n> appreciate it.\n\nI have a text version of the SQL92 draft standard. Let me know if you\nwant the whole thing.\n\n> As I know very little about the postgres internals I would also\n> appreciate a short roadmap as to what needs to be done where, i.e. does\n> the parser need to be changed, and where the files are /new files hsould\n> go that I need to update. If this is somewhere in the docs please point\n> me to it.\n> What I've found upto now is\n> backend/utils/adt/varlena.c\n> backend/utils/adt/varchar.c\n> which I will use as starting point?\n\nThat's probably the right place to look. I'll help with the parser\nissues; the first thing to do is to figure out the appropriate\nbehavior and implement the underlying types. Then we can modify the\nparser (backend/parser/gram.y) to support SQL92->Postgres internal\ntype syntax, just as is done for char and numeric types.\n\n> I found the file src/backend/lib/bit.c (Bruce's according to the log\n> message). Has that got anything to do with bit arrays?\n\nYes it does, but not as a user-accessible type. btw, if you go by the\ncvs logs, Bruce owns *every* file in the tree since he does wholesale\nreformatting on files; in this case the code has been there since the\nbeginning.\n\nLooks like it might be a good start at some underlying utilities for\nwhat you want though, and it is OK to reuse them.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 23 Sep 1999 13:57:16 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] Update of bitmask type"
},
{
"msg_contents": "Thomas asked how I was going to implement bitstring comparisons.\n> \n> How do you handle the length and ordering issues? Is x'01' greater\n> than x'1' since it is longer? And if you have a 16-bit bit column, how\n> does it look internally if you assign x'01' rather than x'0001'?\n\nI had a look in my freshly down-loaded draft standard. On page 336 it\nsays:\n\n 7) The comparison of two bit string values, X and Y, is\ndetermined\n by comparison of their bits with the same ordinal position.\n If Xi and Yi are the values of the i-th bits of X and Y,\n respectively, and if LX is the length in bits of X and LY is\n the length in bits of Y, then:\n\n a) X is equal to Y if and only if X = LY and Xi = Yi for all\ni.\n\n? I presume this should be 'LX=LY' ?? Anyway, this means that b'01' <>\nb'0010'.\n\n b) X is less than Y if and only if:\n\n i) LX < LY and Xi = Yi for all i less than or equal to LX;\nor\n\n ii) Xi = Yi for all i < n and Xn = 0 and Yn = 1 for some n\nless\n than or equal to the minimum of LX and LY.\n\nb) seems to imply, rather bizarrely in my opinion, that\n\n\tB'001100' < B'10'\n\n as the second bit in B'10' is 1 and in B'001100' it is 0.\n\n Surely I must be reading this wrong? \n\n On the other hand, this would be a type of lexicographical ordering,\nso \n perhaps it is not so dumb. Comments?\n\nAdriaan\n",
"msg_date": "Thu, 23 Sep 1999 19:20:49 +0300",
"msg_from": "Adriaan Joubert <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] Update of bitmask type"
},
{
"msg_contents": "Adriaan Joubert wrote:\n> \n> b) seems to imply, rather bizarrely in my opinion, that\n> \n> B'001100' < B'10'\n>\nMaybe you start counting from the wrong end ?\n\nJust use them as you use char()\n\n'AABBAA' < 'BA'\n\nDoes it say something in the standard about direction,\nis it left-> right or right->left ?\n\n------------\nHannu\n",
"msg_date": "Fri, 24 Sep 1999 00:04:59 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] Update of bitmask type"
},
{
"msg_contents": "Hannu Krosing wrote:\n> \n> Adriaan Joubert wrote:\n> >\n> > b) seems to imply, rather bizarrely in my opinion, that\n> >\n> > B'001100' < B'10'\n> >\n> Maybe you start counting from the wrong end ?\n> \n> Just use them as you use char()\n> \n> 'AABBAA' < 'BA'\n> \n> Does it say something in the standard about direction,\n> is it left-> right or right->left ?\n\n\nNo, not that I could find. But in the above example B'001100' < B'10'\nwhichever end you start counting from, as 1>0. I have no particularly\nstrong opinion on which way round it should be done -- perhaps we should\njust try to be consistent with other databases? Could somebody who has\naccess to Oracle or Sybase please do a few tests and let me know?\n\nA second problem I encountered last night is that the postgres variable\nlength types only allow for the length of an array to be stored in\nbytes. This means that the number of bits will automatically always be\nrounded up to the nearest factor of 8, i.e. you want tp store 3 bits and\nyou get 8. For ordering and output this is not always going to produce\nthe correct output, as the bitstrings will get zero-padded. Is there\nanywhere else where one could store the exact length of a bit string?\n\n I haven't quite understood what the variable attypmod is. In varchar.c\nit looks as if it is the length of the record, but if it is just an\ninteger identifier, then I could store the exact length in there. In\nthat case I could handle the difference between 3 and 5 bit strings\ncorrectly. My main worry was that this might be used in other routines\nto determine the length of a record.\n\nAdriaan\n",
"msg_date": "Fri, 24 Sep 1999 08:23:18 +0300",
"msg_from": "Adriaan Joubert <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] Update of bitmask type"
},
{
"msg_contents": "Adriaan Joubert ha scritto:\n\n> Hannu Krosing wrote:\n> >\n> > Adriaan Joubert wrote:\n> > >\n> > > b) seems to imply, rather bizarrely in my opinion, that\n> > >\n> > > B'001100' < B'10'\n> > >\n> > Maybe you start counting from the wrong end ?\n> >\n> > Just use them as you use char()\n> >\n> > 'AABBAA' < 'BA'\n> >\n> > Does it say something in the standard about direction,\n> > is it left-> right or right->left ?\n>\n> No, not that I could find. But in the above example B'001100' < B'10'\n> whichever end you start counting from, as 1>0. I have no particularly\n> strong opinion on which way round it should be done -- perhaps we should\n> just try to be consistent with other databases? Could somebody who has\n> access to Oracle or Sybase please do a few tests and let me know?\n>\n\nOracle doesn't have this data type neither Informix. I think it is hard to\nfind this data type in any database.\nI found this feature in the OCELOT database\nYou can download it from:\nhttp://ourworld.compuserve.com/homepages/OCELOTSQL/\nAs they say:\n\"Ocelot makes the only Database Management System (DBMS) that supports the\nfull ANSI / ISO\nSQL Standard (1992), and an always-growing checklist of SQL3 features (also\nknown as SQL-99).\"\n\n\n\nA second problem I encountered last night is that the postgres variable\n\n> length types only allow for the length of an array to be stored in\n> bytes. This means that the number of bits will automatically always be\n> rounded up to the nearest factor of 8, i.e. you want tp store 3 bits and\n> you get 8. For ordering and output this is not always going to produce\n> the correct output, as the bitstrings will get zero-padded. Is there\n> anywhere else where one could store the exact length of a bit string?\n>\n> I haven't quite understood what the variable attypmod is. In varchar.c\n> it looks as if it is the length of the record, but if it is just an\n> integer identifier, then I could store the exact length in there. 
In\n> that case I could handle the difference between 3 and 5 bit strings\n> correctly. My main worry was that this might be used in other routines\n> to determine the length of a record.\n>\n> Adriaan\n>\n> ************\n\n",
"msg_date": "Fri, 24 Sep 1999 15:01:19 +0200",
"msg_from": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] Update of bitmask type"
},
{
"msg_contents": "> A second problem I encountered last night is that the postgres variable\n> length types only allow for the length of an array to be stored in\n> bytes. This means that the number of bits will automatically always be\n> rounded up to the nearest factor of 8, i.e. you want tp store 3 bits and\n> you get 8. For ordering and output this is not always going to produce\n> the correct output, as the bitstrings will get zero-padded. Is there\n> anywhere else where one could store the exact length of a bit string?\n\nattypmod has been modified recently to contain two fields (each of 16\nbits) in a backward-compatible way. It can hold the size *and*\nprecision of the numeric data types, and presumably should be used in\na similar manner for your bit type.\n\nThe problem is that you need another field which contains a length in\nbit units. Assuming that the second field in attypmod can't be used\nfor this purpose, then istm that you will want to add a field to the\ndata type itself. The character types have:\n\n length - total size of data, in bytes (4 bytes)\n data - body\n\nand you might have\n\n length - total size of data, in bytes (4 bytes)\n blen - total size of data, in bits (4 bytes)\n data - body\n\n - Thomas\n \n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 24 Sep 1999 14:21:20 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] Update of bitmask type"
},
{
"msg_contents": "Adriaan Joubert <[email protected]> writes:\n> A second problem I encountered last night is that the postgres variable\n> length types only allow for the length of an array to be stored in\n> bytes. This means that the number of bits will automatically always be\n> rounded up to the nearest factor of 8, i.e. you want tp store 3 bits and\n> you get 8. For ordering and output this is not always going to produce\n> the correct output, as the bitstrings will get zero-padded. Is there\n> anywhere else where one could store the exact length of a bit string?\n\nYou will need to put it right in the string, I think. You could\ndedicate the first byte of the value of a bitstring (after the required\nvarlena length word) to indicating how many bits in the last byte are\nwasted padding (0-7). That would leave a few spare bits in this header\nbyte that might or might not have any good use.\n\n> I haven't quite understood what the variable attypmod is. In varchar.c\n> it looks as if it is the length of the record, but if it is just an\n> integer identifier, then I could store the exact length in there. In\n> that case I could handle the difference between 3 and 5 bit strings\n> correctly. My main worry was that this might be used in other routines\n> to determine the length of a record.\n\natttypmod is a type-specific modifier: if you are developing a new data\ntype then you can define it any way you darn please. However, it's not\nquite as useful as it first appears, because it is only stored in\nconnection with a column of a table --- there is no atttypmod associated\nwith the result of a function, for example. It is primarily useful if\nyou want to be able to coerce values into a common subformat when they\nare stored into a column. For example, fixed-length char(n) types use\natttypmod as the column width so that they can pad or truncate a\nsupplied string to the right length just before storing. 
But a\nfree-standing string value does not have an atttypmod, only a length.\nSimilar remarks apply to NUMERIC, which uses atttypmod to store the\ndesired precision for a column, but not to figure out the actual\nprecision of a value in memory. In short, your datatype representation\nneeds to be self-identifying without help from atttypmod.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 24 Sep 1999 11:06:15 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] Update of bitmask type "
},
{
"msg_contents": "Thomas Lockhart wrote:\n> \n> attypmod has been modified recently to contain two fields (each of 16\n> bits) in a backward-compatible way. It can hold the size *and*\n> precision of the numeric data types, and presumably should be used in\n> a similar manner for your bit type.\n> \n> The problem is that you need another field which contains a length in\n> bit units. Assuming that the second field in attypmod can't be used\n> for this purpose, then istm that you will want to add a field to the\n> data type itself. The character types have:\n> \n> length - total size of data, in bytes (4 bytes)\n> data - body\n> \n> and you might have\n> \n> length - total size of data, in bytes (4 bytes)\n> blen - total size of data, in bits (4 bytes)\n> data - body\n\nOK, I just saw th email from Tom Lane as well. So I will use attypmod as\nthe length of the bit string in bits, and use an additional byte, as\nsuggested here, for the actual length.\n\nJose recommended looking at the Ocelot database and I got it down. Turns\nout they have a real big problem with the output of bit strings, but at\nleast I could figure out how they do the ordering. Looks as if it is\nlexicographically from the least significant bit, i.e.\n\n B'1' > B'10' > B'1100'\n\nthe only surprising thing was that they then have B'1000' > B'01000',\nand my reading of the SQL standard says that it should be the other way\nround. So I will just do the comparison from the least significant bit.\n\nCheers,\n\nAdriaan\n",
"msg_date": "Fri, 24 Sep 1999 18:27:36 +0300",
"msg_from": "Adriaan Joubert <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] Update of bitmask type"
},
{
"msg_contents": "> attypmod has been modified recently to contain two fields (each of 16\n> bits) in a backward-compatible way. It can hold the size *and*\n> precision of the numeric data types, and presumably should be used in\n> a similar manner for your bit type.\n\nYou can use a union to split atttypmod up into two 8-bit fields and on\n16-bit field. Let me know if you need help.\n\n\n> \n> The problem is that you need another field which contains a length in\n> bit units. Assuming that the second field in attypmod can't be used\n> for this purpose, then istm that you will want to add a field to the\n> data type itself. The character types have:\n> \n> length - total size of data, in bytes (4 bytes)\n> data - body\n> \n> and you might have\n> \n> length - total size of data, in bytes (4 bytes)\n> blen - total size of data, in bits (4 bytes)\n> data - body\n> \n> - Thomas\n> \n> -- \n> Thomas Lockhart\t\t\t\[email protected]\n> South Pasadena, California\n> \n> ************\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 26 Sep 1999 20:44:42 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] Update of bitmask type"
},
{
"msg_contents": "Hi,\n\n\tOK, I've finally gotten round to coding most of the functions for bit\nstrings. I've got I/O, concatenation, substring and all bit operations\n(and, or, xor, not, shift) working on the bitstring data structures. A\nfew (probably pretty daft) questions (if there is documentation on this\nsomewhere, please point me to it):\n\n1. In the varchar file there are some functions which I believe are for\nthe conversion of char(n) to char(m). They take as argument a pointer to\na char() and a len which is the length of the total data structure. I\nhaven't figured out how conversions are implemented within postgres, but\nI would need to transfer the equivalent of an atttypmod value, which\nwould contain the length of the bit string to do the conversions. Does\nthis fit in with the way the rest of the system works?\n\n char * zpbit (char * arg, int32 bitlen)\n\n2. there is a function _bpchar, which has something to do with arrays,\nbut I can't see how it fits in with everything else.\n\n3. I need to write a hash function for bitstrings. I know nothing about\nhash functions, except that they are hard to do well. I looked at the\nfunction for text hashes and that is some weird code (i.e. it took me a\nwhile to figure out what it did). Does anybody have any suggestions\noff-hand for a decent hash function for bit strings? Could I just use\nthe text hash function? (Seems to me text should be different as it\nusually draws from a more restricted range than a bit string).\n\n4. Now that I've got the functionality, can somebody give me a rough\nroadmap to what I need to change to turn this into a proper postgres\ntype? As far as I can see I need to assign oid's to it in pg_type.h and\nI'll have to have a look at the parser to get it to recognise the types.\nIt would be a big help though if somebody could tell me what else needs\nto change.\n\nThanks,\n\nAdriaan\n",
"msg_date": "Sat, 09 Oct 1999 12:22:34 +0300",
"msg_from": "Adriaan Joubert <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] Update of bitmask type"
},
{
"msg_contents": "> 4. Now that I've got the functionality, can somebody give me a rough\n> roadmap to what I need to change to turn this into a proper postgres\n> type? As far as I can see I need to assign oid's to it in pg_type.h and\n> I'll have to have a look at the parser to get it to recognise the types.\n> It would be a big help though if somebody could tell me what else needs\n> to change.\n\nI can integrate the type for you into the include/catalog files if\neveryone agrees they want it as a standard type and not an contrib type.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 9 Oct 1999 06:20:59 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] Update of bitmask type"
},
{
"msg_contents": "Adriaan Joubert <[email protected]> writes:\n> 1. In the varchar file there are some functions which I believe are for\n> the conversion of char(n) to char(m). They take as argument a pointer to\n> a char() and a len which is the length of the total data structure. I\n> haven't figured out how conversions are implemented within postgres, but\n> I would need to transfer the equivalent of an atttypmod value, which\n> would contain the length of the bit string to do the conversions.\n\nbpchar(), for example, is actually a user-callable SQL function; it\ntakes a char(n) value and an atttypmod value and coerces the string\nto the right length for that atttypmod. Although there are no *direct*\nreferences to bpchar() anywhere except in pg_proc, the parser's\nSizeTargetExpr routine nonetheless generates calls to it as part of\nINSERT and UPDATE queries:\n\n/*\n * SizeTargetExpr()\n *\n * If the target column type possesses a function named for the type\n * and having parameter signature (columntype, int4), we assume that\n * the type requires coercion to its own length and that the said\n * function should be invoked to do that.\n *\n * Currently, \"bpchar\" (ie, char(N)) is the only such type, but try\n * to be more general than a hard-wired test...\n */\n\nSo, if you want to implement a fixed-length BIT(N) type, the only\nreal difference between that and an any-width bitstring is the existence\nof a coercion function matching SizeTargetExpr's criteria.\n\nBTW, the last line of that comment is in error --- \"varchar\" also has a\nfunction matching SizeTargetExpr's criteria. Its function behaves\na little differently, since it only truncates and never pads, but\nthe interface to the system is the same.\n\n> 2. there is a function _bpchar, which has something to do with arrays,\n> but I can't see how it fits in with everything else.\n\nLooks like it is the equivalent of bpchar() for arrays of char(N).\n\n> 3. I need to write a hash function for bitstrings. 
I know nothing about\n> hash functions, except that they are hard to do well. I looked at the\n> function for text hashes and that is some weird code (i.e. it took me a\n> while to figure out what it did).\n\nIf you're looking at the type-specific hash functions in hashfunc.c,\nI think they are mostly junk. They could all be replaced by two\nfunctions, one for pass-by-val types and one for pass-by-ref types, a la\nthe type-independent hashFunc() in nodeHash.c.\n\nThe only situation where you really need a type-specific hasher is with\ndatatypes that have garbage bits in them (such as padding between struct\nelements that might contain uninitialized bits). If you're careful to\nmake sure that all unused bits are zeroes, so that logically equivalent\nvalues of your type will always have the same bit contents, then you\nshould be able to just use hashtext().\n\nActually, unless you feel a compelling need to support hash indexes\non your datatype, you don't need a hash routine at all. Certainly\ngetting btree index support should be a higher-priority item.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 09 Oct 1999 11:49:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] Update of bitmask type "
},
{
"msg_contents": "\nSounds great to me...\n\nOn Sat, 9 Oct 1999, Bruce Momjian wrote:\n\n> > 4. Now that I've got the functionality, can somebody give me a rough\n> > roadmap to what I need to change to turn this into a proper postgres\n> > type? As far as I can see I need to assign oid's to it in pg_type.h and\n> > I'll have to have a look at the parser to get it to recognise the types.\n> > It would be a big help though if somebody could tell me what else needs\n> > to change.\n> \n> I can integrate the type for you into the include/catalog files if\n> everyone agrees they want it as a standard type and not an contrib type.\n> \n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ************\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 12 Oct 1999 01:02:49 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] Update of bitmask type"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> \n> I can integrate the type for you into the include/catalog files if\n> everyone agrees they want it as a standard type and not an contrib type.\n\nHi,\n\n\tAttached are the C-routines that implement a BIT and BIT VARYING type.\nI know Bruce said he would integrate them, but he is writing a book at\nthe moment as well, so if somebody can explain to me how to go about\nintegrating it, or would like to have a go, go ahead. \n\nIf any functions are missing, let me know and I will add them. This\nshould implement concatenation and substr as defined in the SQL\nstandard, as well as comparison operators. I've also added all the\nnormal bit operators.\n\nI developed the C routines outside the postgres source tree, only using\npostgres.h and copying bits from ctype.h. I hope it will be fairly easy\nto integrate.\n\nAny comments welcome.\n\nAdriaan",
"msg_date": "Fri, 26 Nov 1999 09:12:09 +0200",
"msg_from": "Adriaan Joubert <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] Update of bitmask type"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> > \n> > \n> > I can integrate the type for you into the include/catalog files if\n> > everyone agrees they want it as a standard type and not an contrib type.\n> \n> Hi,\n> \n> \tAttached are the C-routines that implement a BIT and BIT VARYING type.\n> I know Bruce said he would integrate them, but he is writing a book at\n> the moment as well, so if somebody can explain to me how to go about\n> integrating it, or would like to have a go, go ahead. \n\nApplied. I am�embarrassed to say I had a copy from June still in my\nmailbox.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 29 Nov 1999 17:35:42 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] Update of bitmask type"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > Bruce Momjian wrote:\n> > >\n> > >\n> > > I can integrate the type for you into the include/catalog files if\n> > > everyone agrees they want it as a standard type and not an contrib type.\n> >\n> > Hi,\n> >\n> > Attached are the C-routines that implement a BIT and BIT VARYING type.\n> > I know Bruce said he would integrate them, but he is writing a book at\n> > the moment as well, so if somebody can explain to me how to go about\n> > integrating it, or would like to have a go, go ahead.\n> \n> Applied. I am embarrassed to say I had a copy from June still in my\n> mailbox.\n\nDon't be: they've been ready for a while, but I had to recheck them.\nWhen BIT and BIT VARYING are properly integrated, do I need to do\nsomething about regression tests?\n\nAdriaan\n",
"msg_date": "Tue, 30 Nov 1999 08:23:05 +0200",
"msg_from": "Adriaan Joubert <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] Update of bitmask type"
},
{
"msg_contents": "> > Applied. I am embarrassed to say I had a copy from June still in my\n> > mailbox.\n> \n> Don't be: they've been ready for a while, but I had to recheck them.\n> When BIT and BIT VARYING are properly integrated, do I need to do\n> something about regression tests?\n\nYes, we will need them to be added to the regression tests. We can use\nyour test/ directory as a source for that.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 30 Nov 1999 01:33:48 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] Update of bitmask type"
},
{
"msg_contents": "Adriaan Joubert <[email protected]> writes:\n> When BIT and BIT VARYING are properly integrated, do I need to do\n> something about regression tests?\n\nPlease do contribute a regression test for them. We always need\nmore regression tests ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 30 Nov 1999 01:40:19 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] Update of bitmask type "
}
] |
[
{
"msg_contents": "I've seen several posts about this and contacted the developers\nat Yellow Dog Linux. Turns out they have the same problem and the\nonly way they have gotten PostgreSQL to work is by compiling with\na -O0 flag.\n\nMy guess is there is some problem in gcc specific to the PowerPC\nplatform. I can get PostgreSQL to work either with the native xlc\ncompiler + -O2 or gcc + -O0.\n\nSuggestions?\n",
"msg_date": "Wed, 16 Jun 1999 14:08:57 -0500",
"msg_from": "\"David R. Favor\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres error - typeidTypeRelid (AIX, PPC and Alpha)"
},
{
"msg_contents": "> I've seen several posts about this and contacted the developers\n> at Yellow Dog Linux. Turns out they have the same problem and the\n> only way they have gotten PostgreSQL to work is by compiling with\n> a -O0 flag.\n> \n> My guess is there is some problem in gcc specific to the PowerPC\n> platform. I can get PostgreSQL to work either with the native xlc\n> compiler + -O2 or gcc + -O0.\n> \n> Suggestions?\n> \n\n\nYes, we have turned down optimization on PPC and Alpha until we can fix\nthis in 6.6.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 7 Jul 1999 20:30:00 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Postgres error - typeidTypeRelid (AIX, PPC and Alpha)"
}
] |
[
{
"msg_contents": "I've already tried to put the exec back. But then I hit a problem with \n\"MyProcPort\", which is not initialised in the backend and makes the \nbackend crash. I've also found that \"MyCancelKey\" is set in the postmaster. \nAre there any others? \n\nRegarding the old code (6.3.2), there have been a lot of changes in \nDoBackend/DoExec. I really need some expert advice on what to do.\n\n\n\tcyril\n\n\n>> \tHi all\n>> \n>> \tI'm working on a port of postgres to BeOS (www.be.com). BeOS is not \n>> a real UNIX, but it provides a subset of the POSIX API. At this stage \n>> I've a working version of it. But since 6.4.2, I've had a lot of problems \n>> (dynamic loading doesn't work any more...) with the fact that \n>> postgresmain is called directly instead of via the old exec method. BeOS \n>> really doesn't like doing a lot of things after a fork and before an exec \n>> :=(. \n>> \tI would like to know how hard it would be to add the exec call back. As \n>> I understand it, I have to get back all global variables and shared \n>> memory and perhaps do something with sockets/file descriptors? I've \n>> a ready solution for shared memory but I need some help regarding the \n>> other points.\n>\n>You can put back the exec fairly easily. You just need to pass the\n>proper parameters, and change the fork to an exec. You can look at the\n>older code that did the exec for an example, and #ifdef the exec() back\n>into the code.\n",
"msg_date": "Wed, 16 Jun 1999 22:22:04 CEST",
"msg_from": "\"Cyril VELTER\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] BeOS port"
},
{
"msg_contents": "[Charset ISO-8859-1 unsupported, filtering to ASCII...]\n> I've already tried to put the exec back. But then I hit a problem with \n> \"MyProcPort\" which is not initialised in the backend and make the \n> backend crash. I've also found that \"MyCancelKey\" is set in postmaster. \n> Are there any others ? \n> \n> Regarding the old code (6.3.2), there have been a lot of change in \n> DoBackend/DoExec. I really need some expert advice on what to do.\n> \n\nI recommend you get anonymous cvs access(see cvs faq on web site) do a\nlog to show changes to postgres.c and postmaster.c, and you will find\nthe exec was removed in one or two big patches. Then do a cvs diff and\nsee the changes made, and try and merge them into the current code with\nifdef's.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 17 Jun 1999 18:30:51 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] BeOS port"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Regarding the old code (6.3.2), there have been a lot of change in \n>> DoBackend/DoExec. I really need some expert advice on what to do.\n\n> I recommend you get anonymous cvs access(see cvs faq on web site) do a\n> log to show changes to postgres.c and postmaster.c, and you will find\n> the exec was removed in one or two big patches. Then do a cvs diff and\n> see the changes made, and try and merge them into the current code with\n> ifdef's.\n\nHe's right though: there have been subsequent changes that depend on\nnot doing an exec(). Offhand I only recall MyCancelKey --- that is set\nin the postmaster process just before fork(), and the backend simply\nassumes that it's got the right value.\n\nThe straightforward solution (invent another backend command line switch\nto pass the cancel key) would not be a very good idea, since that would\nexpose the cancel key to prying eyes.\n\nIf BeOS does not have the ability to support fork without exec, does it\nhave some other way of achieving the same result? Threads maybe?\n(But Postgres is hardly the only common daemon that uses fork without\nexec; sendmail comes to mind, for example. So it seems like the real\nanswer is to beat up the BeOS folks about fixing their inadequate Unix\nsupport...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 18 Jun 1999 10:35:18 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] BeOS port "
},
{
"msg_contents": "On Fri, Jun 18, 1999 at 10:35:18AM -0400, Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> >> Regarding the old code (6.3.2), there have been a lot of change in \n> >> DoBackend/DoExec. I really need some expert advice on what to do.\n> \n> > I recommend you get anonymous cvs access(see cvs faq on web site) do a\n> > log to show changes to postgres.c and postmaster.c, and you will find\n> > the exec was removed in one or two big patches. Then do a cvs diff and\n> > see the changes made, and try and merge them into the current code with\n> > ifdef's.\n> \n> He's right though: there have been subsequent changes that depend on\n> not doing an exec(). Offhand I only recall MyCancelKey --- that is set\n> in the postmaster process just before fork(), and the backend simply\n> assumes that it's got the right value.\n> \n> The straightforward solution (invent another backend command line switch\n> to pass the cancel key) would not be a very good idea, since that would\n> expose the cancel key to prying eyes.\n> \n> If BeOS does not have the ability to support fork without exec, does it\n> have some other way of achieving the same result? Threads maybe?\n> (But Postgres is hardly the only common daemon that uses fork without\n> exec; sendmail comes to mind, for example. So it seems like the real\n> answer is to beat up the BeOS folks about fixing their inadequate Unix\n> support...)\n\n\tI heard that! I work in Be's QA department. In fact, our bug\ndatabase got transferred to a Postgres/PHP/Apache system a few months\nago, running on Linux. Although I'm pretty much of the mind that BeOS\nisn't a server OS, it would be interesting to see BeOS running postgres\nas a server.\n\tIf you can tell me specifically what the problem is, I can pass\nit along to the Kernel team.\n\n",
"msg_date": "Sat, 19 Jun 1999 11:49:03 -0700",
"msg_from": "Adam Haberlach <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] BeOS port"
}
] |
[
{
"msg_contents": "I have been looking at the planner's handling of subplans, and I see\nsomething that I think is wrong, but I'm not quite certain. In\n_make_subplan() in backend/optimizer/plan/subselect.c, there is the\ncode\n\n /* make parParam list */\n foreach(lst, plan->extParam)\n {\n Var *var = nth(lfirsti(lst), PlannerParamVar);\n\n if (var->varlevelsup == PlannerQueryLevel)\n node->parParam = lappendi(node->parParam, lfirsti(lst));\n }\n\nIt looks to me like this code is supposed to find parameters that\nreference the immediate parent plan level, as opposed to higher levels.\nSo, shouldn't it be looking for varlevelsup == 1, not PlannerQueryLevel?\n\nFor a first-level subplan, PlannerQueryLevel will be 1 at the time\nthis code runs, so the result is the same anyway. But I think it\ndoes the wrong thing for more deeply nested subplans. Am I right?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 16 Jun 1999 19:02:05 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Apparent bug in _make_subplan"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> /* make parParam list */\n> foreach(lst, plan->extParam)\n> {\n> Var *var = nth(lfirsti(lst), PlannerParamVar);\n> \n> if (var->varlevelsup == PlannerQueryLevel)\n> node->parParam = lappendi(node->parParam, lfirsti(lst));\n> }\n> \n> It looks to me like this code is supposed to find parameters that\n> reference the immediate parent plan level, as opposed to higher levels.\n> So, shouldn't it be looking for varlevelsup == 1, not PlannerQueryLevel?\n> \n> For a first-level subplan, PlannerQueryLevel will be 1 at the time\n> this code runs, so the result is the same anyway. But I think it\n\nPlannerQueryLevel will be 0 here - subselect.c:140\n\n /* and now we are parent again */\n PlannerInitPlan = saved_ip;\n PlannerQueryLevel--;\n\n> does the wrong thing for more deeply nested subplans. Am I right?\n\nI'm not sure. Seems that I made assumption here that \nvarlevelsup is _absolute_ level number and seems that\n_replace_var() and _new_param() replace parser' varlevelsup\nwith absolute level value.\n\nVadim\n",
"msg_date": "Thu, 17 Jun 1999 09:49:50 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Apparent bug in _make_subplan"
},
{
"msg_contents": "Vadim Mikheev <[email protected]> writes:\n>> For a first-level subplan, PlannerQueryLevel will be 1 at the time\n>> this code runs, so the result is the same anyway. But I think it\n\n> PlannerQueryLevel will be 0 here - subselect.c:140\n\nNo, it's never 0. It starts out 1 in planner(), and _make_subplan\nincrements it at line 116 before recursing, then decrements again at\nline 142. So it's at least one when we arrive at the parParam code.\n\n> I'm not sure. Seems that I made assumption here that \n> varlevelsup is _absolute_ level number and seems that\n> _replace_var() and _new_param() replace parser' varlevelsup\n> with absolute level value.\n\nAfter looking through all the references to varlevelsup, it's clear\nthat all pieces of the system *except* subselect.c treat varlevelsup\nas a relative level number, so-many-levels-out-from-current-subplan.\nsubselect.c has a couple of places that think nonzero varlevelsup\nis an absolute level number, with 1 as the top plan. This is certainly\na source of bugs --- it happens to work for two-level plans, but will\nfail for anything more deeply nested. I will work on fixing subselect.c\nto bring it in line with the rest of the world...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 17 Jun 1999 10:43:44 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: Apparent bug in _make_subplan "
},
{
"msg_contents": "> After looking through all the references to varlevelsup, it's clear\n> that all pieces of the system *except* subselect.c treat varlevelsup\n> as a relative level number, so-many-levels-out-from-current-subplan.\n> subselect.c has a couple of places that think nonzero varlevelsup\n> is an absolute level number, with 1 as the top plan. This is certainly\n> a source of bugs --- it happens to work for two-level plans, but will\n> fail for anything more deeply nested. I will work on fixing subselect.c\n> to bring it in line with the rest of the world...\n\nvarlevelsup was always intended to be relative.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 17 Jun 1999 10:56:17 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Apparent bug in _make_subplan"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> > I'm not sure. Seems that I made assumption here that\n> > varlevelsup is _absolute_ level number and seems that\n> > _replace_var() and _new_param() replace parser' varlevelsup\n> > with absolute level value.\n> \n> After looking through all the references to varlevelsup, it's clear\n> that all pieces of the system *except* subselect.c treat varlevelsup\n> as a relative level number, so-many-levels-out-from-current-subplan.\n> subselect.c has a couple of places that think nonzero varlevelsup\n> is an absolute level number, with 1 as the top plan. This is certainly\n> a source of bugs --- it happens to work for two-level plans, but will\n> fail for anything more deeply nested. I will work on fixing subselect.c\n> to bring it in line with the rest of the world...\n\nsubselect.c uses varlevelsup as absolute level number only\nfor correlation vars <--> params mapping, so why should it be\nsource of bugs? SS_replace_correlation_vars replaces all\ncorrelation vars with parameters. Vars with absolute varlevelsup\nare in PlannerParamVar only. To identify correlation vars and\nto know is parameter already assigned to a var we obviously\nneed in absolute level number.\n\nVadim\n",
"msg_date": "Fri, 18 Jun 1999 12:10:57 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Apparent bug in _make_subplan"
},
{
"msg_contents": "> subselect.c uses varlevelsup as absolute level number only\n> for correlation vars <--> params mapping, so why should it be\n> source of bugs? SS_replace_correlation_vars replaces all\n> correlation vars with parameters. Vars with absolute varlevelsup\n> are in PlannerParamVar only. To identify correlation vars and\n> to know is parameter already assigned to a var we obviously\n> need in absolute level number.\n\nBut the varlevelsup I pass in from the parser are relative to the\ncurrent level, not absolute. \n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 18 Jun 1999 00:43:07 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Apparent bug in _make_subplan"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > subselect.c uses varlevelsup as absolute level number only\n> > for correlation vars <--> params mapping, so why should it be\n> > source of bugs? SS_replace_correlation_vars replaces all\n> > correlation vars with parameters. Vars with absolute varlevelsup\n> > are in PlannerParamVar only. To identify correlation vars and\n> > to know is parameter already assigned to a var we obviously\n> > need in absolute level number.\n> \n> But the varlevelsup I pass in from the parser are relative to the\n> current level, not absolute.\n\nsubselect.c takes it into account, computes absolute numbers\nand stores them in PlannerParamVar only...\n\nVadim\n",
"msg_date": "Fri, 18 Jun 1999 14:28:30 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Apparent bug in _make_subplan"
},
{
"msg_contents": "Vadim Mikheev <[email protected]> writes:\n>> But the varlevelsup I pass in from the parser are relative to the\n>> current level, not absolute.\n\n> subselect.c takes it into account, computes absolute numbers\n> and stores them in PlannerParamVar only...\n\nRight, I eventually figured that out, and I see that it's probably the\nbest way. I have added the following documentation to subselect.c:\n\n/*--------------------\n * PlannerParamVar is a list of Var nodes, wherein the n'th entry\n * (n counts from 0) corresponds to Param->paramid = n. The Var nodes\n * are ordinary except for one thing: their varlevelsup field does NOT\n * have the usual interpretation of \"subplan levels out from current\".\n * Instead, it contains the absolute plan level, with the outermost\n * plan being level 1 and nested plans having higher level numbers.\n * This nonstandardness is useful because we don't have to run around\n * and update the list elements when we enter or exit a subplan\n * recursion level. But we must pay attention not to confuse this\n * meaning with the normal meaning of varlevelsup.\n *--------------------\n */\n\nalong with other changes that I will commit once I get subselects in\nHAVING working right ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 18 Jun 1999 10:19:44 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: Apparent bug in _make_subplan "
},
{
"msg_contents": "> Bruce Momjian wrote:\n> > \n> > > subselect.c uses varlevelsup as absolute level number only\n> > > for correlation vars <--> params mapping, so why should it be\n> > > source of bugs? SS_replace_correlation_vars replaces all\n> > > correlation vars with parameters. Vars with absolute varlevelsup\n> > > are in PlannerParamVar only. To identify correlation vars and\n> > > to know is parameter already assigned to a var we obviously\n> > > need in absolute level number.\n> > \n> > But the varlevelsup I pass in from the parser are relative to the\n> > current level, not absolute.\n> \n> subselect.c takes it into account, computes absolute numbers\n> and stores them in PlannerParamVar only...\n\nOh.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 18 Jun 1999 12:47:35 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Apparent bug in _make_subplan"
}
] |
[
{
"msg_contents": "It looks like you are using SCO UnixWare 2.1.2. PostgreSQL 6.4.X and 6.5.X do\ncompile and run on SCO UnixWare 7, the current version. \n\nI have not been able to compile PostgreSQL 6.5 (or any other version) on \nUnixWare 2. The main problem seems to be that PostgreSQL uses alloca,\nand I can't find the alloca.h header on my UnixWare 2 system.\n\nSCO has released a newer compiler, often called the Universal Development\nKit (UDK) which will run on UnixWare 7, UnixWare 2, and OpenServer 5, and\nwhich will produce a single binary for all three platforms. PostgreSQL 6.5\nwill compile on both UnixWare 7 and OpenServer 5 using SCO's UDK compiler,\nso I assume it would compile on UnixWare 2 using it as well. However,\nI don't have a UnixWare 2 system with the UDK installed to test this.\n\nI hope this is helpful.\n\nAndrew Merrill\nThe Computer Classroom, Inc., a SCO Authorized Education Center\n\n> Does anybody use postgres on this animal?\n>\n> UNIX_SV its-sp 4.2MP 2.1.2 i386 x86at SCO UNIX_SVR4\n> UX:cc: INFO: Optimizing C Compilation System (CCS) 3.0 09/26/96 (u211mr1)\n> Postgres 6.5 current CVS\n",
"msg_date": "Wed, 16 Jun 1999 23:36:00 -0500 (CDT)",
"msg_from": "Andrew Merrill <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: UnixWare"
},
{
"msg_contents": "\nI don't know about this specific problem, but GNU has an alloca\nimplementation that is portable enough to run on any platform (through\nclever trickery). So that should be solvable.\n\nAndrew Merrill wrote:\n> \n> It looks like you are using SCO UnixWare 2.1.2. PostgreSQL 6.4.X and 6.5.X do\n> compile and run on SCO UnixWare 7, the current version.\n> \n> I have not been able to compile PostgreSQL 6.5 (or any other version) on\n> UnixWare 2. The main problem seems to be that PostgreSQL uses alloca,\n> and I can't find the alloca.h header on my UnixWare 2 system.\n> \n> SCO has released a newer compiler, often called the Universal Development\n> Kit (UDK) which will run on UnixWare 7, UnixWare 2, and OpenServer 5, and\n> which will produce a single binary for all three platforms. PostgreSQL 6.5\n> will compile on both UnixWare 7 and OpenServer 5 using SCO's UDK compiler,\n> so I assume it would compile on UnixWare 2 using it as well. However,\n> I don't have a UnixWare 2 system with the UDK installed to test this.\n> \n> I hope this is helpful.\n> \n> Andrew Merrill\n> The Computer Classroom, Inc., a SCO Authorized Education Center\n> \n> > Does anybody use postgres on this animal?\n> >\n> > UNIX_SV its-sp 4.2MP 2.1.2 i386 x86at SCO UNIX_SVR4\n> > UX:cc: INFO: Optimizing C Compilation System (CCS) 3.0 09/26/96 (u211mr1)\n> > Postgres 6.5 current CVS\n\n-- \nChris Bitmead\nmailto:[email protected]\nhttp://www.techphoto.org - Photography News, Stuff that Matters\n",
"msg_date": "Thu, 17 Jun 1999 15:56:17 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: UnixWare"
},
{
"msg_contents": "On 17-Jun-99 Andrew Merrill wrote:\n> It looks like you are using SCO UnixWare 2.1.2. PostgreSQL 6.4.X and 6.5.X\n> do\n> compile and run on SCO UnixWare 7, the current version. \n> \n> I have not been able to compile PostgreSQL 6.5 (or any other version) on \n> UnixWare 2. The main problem seems to be that PostgreSQL uses alloca,\n> and I can't find the alloca.h header on my UnixWare 2 system.\n> \n> SCO has released a newer compiler, often called the Universal Development\n> Kit (UDK) which will run on UnixWare 7, UnixWare 2, and OpenServer 5, and\n> which will produce a single binary for all three platforms. PostgreSQL 6.5\n> will compile on both UnixWare 7 and OpenServer 5 using SCO's UDK compiler,\n> so I assume it would compile on UnixWare 2 using it as well. However,\n> I don't have a UnixWare 2 system with the UDK installed to test this.\n\nThank you!\n \nI found another compiler under /udk/usr/ccs/bin and tried to use it,\nbut I still have lots of configure problems.\n\nPS:\n I'm completely new to UnixWare, sorry for any stupidity in my\nquestions ;-))\n \n\n---\nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n* There will come soft rains ...\n",
"msg_date": "Thu, 17 Jun 1999 11:13:27 +0400 (MSD)",
"msg_from": "Dmitry Samersoff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UnixWare"
},
{
"msg_contents": "On Thu, 17 Jun 1999, Dmitry Samersoff wrote:\n\n> On 17-Jun-99 Andrew Merrill wrote:\n> > compile and run on SCO UnixWare 7, the current version. \n\n> > I have not been able to compile PostgreSQL 6.5 (or any other version) on \n> > UnixWare 2. The main problem seems to be that PostgreSQL uses alloca,\n> > and I can't find the alloca.h header on my UnixWare 2 system.\n\nOuch! Wrong! I am running UnixWare 2.1.2 and PostgreSQL 6.3.2\nwith good effect. In my shop we are saddled with a legacy SCO box that\nI will not be sinking any additional monies into. (We don't plan to \nbuy any other SCO products having already purchased enough to draw\na conclusion or two. ;-)\n\nAnyway, rhetoric aside, here is my log on how I compiled using the\nstock compiler. Many thanks to Bruce Momjian who walked me through this.\n\nAndrew, you do have alloca.h on your UW 2.1.2. It's in /usr/ucblib.\nYou needed to tell the linker where to find it (see below).\n\nKak dela Dmitri, and good luck with this. Remember Bruce M is a valuable\nresource and I will also try to help you as much as I am able before you\nfeel a need to throw money at SCO.\n\nDobri den (forgive fonetic Rooski! No cyrillic char set...),\nTom\n\n/**** PostgreSQL Installation ***/\n\n1) # useradd postgres\n2) # mkdir /usr/local/pgsql\n3) # mkdir /usr/src/pgsql\n4) # chown postgres /usr/local/pgsql\n5) # chown postgres /usr/src/pgsql\n6) # chgrp other /usr/local/pgsql\n7) # chgrp other /usr/src/pgsql\n8) # /usr/local/bin/gunzip /usr/spool/uucppublic/postgres-6.3.2.tar.gz\n9) # su - postgres \n10) $ cd /usr/src/pgsql\n11) $ tar xvf /usr/spool/uucppublic/postgres-6.3.2.tar\n12) $ cd ./post*\n13) $ cd ./src\n14) $ cp configure configure.orig\n15) $ vi configure... /TEMPLATE\n change TEMPLATE=template/`uname -s | tr A-Z a-z`\n to TEMPLATE=template/univel\n16) $ gmake all >&make.log &\n flex barfs: unable to locate /home/local/lib/flex.skel\n FIX: download flex-2.3pl7.tar.Z\n # /usr/local/bin/gunzip /var/spool/uucppublic/flex-2.3pl7.pkg.tar.Z\n # tar xvf flex-2.3pl7.pkg.tar\n # pkgadd -d `pwd`\n # ln -sf /opt/bin/flex /usr/local/bin/flex\n17) $ gmake all >&make.log &\n bison barfs...\n FIX: download bison-1.14.pkg.tar.Z\n # /usr/local/bin/gunzip /var/spool/uucppublic/bison-1.14.pkg.tar.Z\n # tar xvf bison-1.14.pkg.tar\n # pkgadd -d `pwd`\n # ln -sf /opt/bin/bison /usr/local/bin/bison\n18) After a multitude of warnings, gmake barfs with:\n Undefined symbol alloca in file bootstrap/SUBSYS.o\n Searched libraries: # nm /usr/lib/*.so* | grep alloca\n # nm /usr/lib/*.a* | grep alloca\n # nm /usr/ccs/lib/*.so* | grep alloca\n # nm /usr/ccs/lib/*.a* | grep alloca\n # nm /usr/ucblib/*.so* | grep alloca\n # nm /usr/ucblib/*.a* | grep alloca\n FIX: tail make.log > fixer...edit to add calls to Berkeley libraries\n ( -L/usr/ucblib -lucb )\n\n cd /usr/src/pgsql/postgresql-6.3.2/src/backend\n cc -o postgres access/SUBSYS.o bootstrap/SUBSYS.o \n catalog/SUBSYS.o commands/SUBSYS.o executor/SUBSYS.o \n lib/SUBSYS.o libpq/SUBSYS.o main/SUBSYS.o nodes/SUBSYS.o \n optimizer/SUBSYS.o parser/SUBSYS.o port/SUBSYS.o \n postmaster/SUBSYS.o regex/SUBSYS.o rewrite/SUBSYS.o storage/SUBSYS.o tcop/SUBSYS.o utils/SUBSYS.o \n ../utils/version.o -L/usr/ucblib -lucb -lgen -lcrypt -lld \n -lnsl -lsocket -ldl -lm -ltermcap -lcurses -lc89 -lc89 \n -Wl,-Bexport \n19) After compiling the backend, gmake all barfs when compiling ecpg:\n Undefined\t\t\tfirst referenced\n symbol \t\t\t in file\n yyout y.tab.o\n yylex y.tab.o\n yyin ecpg.o\n yytext y.tab.o\n alloca y.tab.o\n yylineno y.tab.o\n lex_init ecpg.o\n yyleng y.tab.o\n UX:ld: ERROR: ecpg: fatal error: Symbol referencing errors. \n No output written to ecpg\n gmake[3]: *** [ecpg] Error 1\n gmake[2]: *** [all] Error 2\n gmake[1]: *** [all] Error 2\n gmake: *** [all] Error 2\n Searching all the library files (/usr/lib, /usr/ccs/lib and /usr/ucblib)\n with # nm *.so* | grep $file (yy---, alloca and lex_init)\n and # nm *.a* | grep $file (yy---, alloca and lex_init)\n revealed that lex_init does not exist on this box.\n FIX: comment ecpg out of the ./interfaces/Makefile\n20) postgres user was unable to run initdb...\n FIX: edit /etc/profile:\n LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/pgsql/lib\n export LD_LIBRARY_PATH\n then: $ initdb --username=postgres\n succeeded.\n21) Attempting to fire up the server produced IpcMemoryCreate errors.\n FIX: enlarge shared memory avail in the kernel...\n # /etc/conf/bin/idtune SHMMAX 778240\n # /etc/conf/bin/idbuild -B\n22) Installed DBI::DBD in usual manner but perl is too old (5.003).\n Downloading 5.004_04 from www.sco.com/skunkware/uw2/interp/perl/\n23) Installed Perl 5.004_04 and reinstalled DBI::DBD\n Error msg when trying to run a dbi script: \n dynamic linker: /usr/bin/perl: symbol not found: strncasecmp\n FIX: edit /usr/src/perl5/DBD*/Pg.xs, replacing strncasecmp with\n strncmp --- rerun `make' & `make install'. \n\n if (!strncmp(statement, \"begin\", 5) ||\n !strncmp(statement, \"end\", 4) ||\n !strncmp(statement, \"commit\", 6) ||\n !strncmp(statement, \"abort\", 5) ||\n !strncmp(statement, \"rollback\", 8) ) {\n\nEOF\n\n\n> I found another compiler under /udk/usr/ccs/bin, and try to use it,\n> but I have lots of configure problems yet.\n> \n> PS:\n> I'm completly new in UNIXWARE, sorry for some kind of stupidity in my\n> questions ;-))\n> \n> \n> ---\n> Dmitry Samersoff, [email protected], ICQ:3161705\n> http://devnull.wplus.net\n> * There will come soft rains ...\n> \n",
"msg_date": "Thu, 17 Jun 1999 07:23:34 -0400 (EDT)",
"msg_from": "Thomas Good <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: UnixWare"
},
{
"msg_contents": "> It looks like you are using SCO UnixWare 2.1.2. PostgreSQL 6.4.X and 6.5.X do\n> compile and run on SCO UnixWare 7, the current version. \n> \n> I have not been able to compile PostgreSQL 6.5 (or any other version) on \n> UnixWare 2. The main problem seems to be that PostgreSQL uses alloca,\n> and I can't find the alloca.h header on my UnixWare 2 system.\n\nAlloca source should be available somewhere for your platform.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 17 Jun 1999 09:32:55 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: UnixWare"
},
{
"msg_contents": "Hi Thomas G. and Dmitry. Could you think about melding these\ninstructions onto doc/FAQ_SCO in the v6.5 distribution? It currently\nomits mention of UnixWare2.0, but istm that this cookbook could be\nadded to the end with great effect. Perhaps Andrew could do the actual\ngraft...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 17 Jun 1999 13:48:29 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: UnixWare"
},
{
"msg_contents": "Andrew Merrill <[email protected]> writes:\n> It looks like you are using SCO UnixWare 2.1.2. PostgreSQL 6.4.X and 6.5.X do\n> compile and run on SCO UnixWare 7, the current version. \n\n> I have not been able to compile PostgreSQL 6.5 (or any other version) on \n> UnixWare 2. The main problem seems to be that PostgreSQL uses alloca,\n> and I can't find the alloca.h header on my UnixWare 2 system.\n\nHere is another line of attack besides the ones already given. I've\nbeen around on this problem with HPUX 9, and find that the only places\nin PostgreSQL that use alloca are the parser files, and those do so only\nif they were generated with GNU bison. But the prebuilt copies of\ngram.c and preproc.c are made with bison. So, one solution is to\nrebuild the parser files with your local yacc --- which, presumably,\nwill not generate code that relies on alloca.\n\nVendor yaccs tend to spit up on the Postgres grammar files because\nthey're so large, so this may be easier said than done. See\ndoc/FAQ_HPUX for a set of yacc switches that worked for me.\n\nOf course, the other approach is to install and use gcc, which supports\nalloca as inline code on all platforms...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 17 Jun 1999 10:33:40 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: UnixWare "
},
{
"msg_contents": "\nOn 17-Jun-99 Thomas Good wrote:\n> On Thu, 17 Jun 1999, Dmitry Samersoff wrote:\n> \n>> On 17-Jun-99 Andrew Merrill wrote:\n>> > compile and run on SCO UnixWare 7, the current version. \n> \n>> > I have not been able to compile PostgreSQL 6.5 (or any other version) on \n>> > UnixWare 2. The main problem seems to be that PostgreSQL uses alloca,\n>> > and I can't find the alloca.h header on my UnixWare 2 system.\n> \n> Ouch! Wrong! I am running UnixWare 2.1.2 and PostgreSQL 6.3.2\n> with good effect. In my shop we are saddled with a legacy SCO box that\n> I will not be sinking any additional monies into. (We don't plan to \n> buy any other SCO products having already purchased enough to draw\n> a conclusion or two. ;-)\n> \n> Anyway, rhetoric aside, here is my log on how I compiled using the\n> stock compiler. Many thanks to Bruce Momjian who walked me through this.\n\nI keep it simple (for 6.5 release)!\nI patch configure.in\n\n sysv4.2*)\n case \"$host_vendor\" in\n univel) os=univel need_tas=no ;;\n+ pc) os=uw2 need_tas=no ;;\n *) os=unknown need_tas=no ;;\n esac ;;\n\nand setup apropriate template and port/* files\n\nTemplate really could be set to \"unixware\", but I have strange directory \n/udk/usr/ .... where all libraries and includes reside.\n \ngmake ofcause should be installed, but I use gmake for my-own \ndevelopment so it don't case trouble. \n\nI receive strange lex error\nex scan.l\n\"scan.l\":line 55: Error: Invalid request %x IN_STRING IN_COMMENT\ngmake: *** [pl_scan.c] Error 1\n\nscan.l:55\n%x IN_STRING IN_COMMENT\n\nWhat it means?\n\nI'm going to install flex. \n\n> Kak dela Dmitri, and good luck with this. 
Remember Bruce M is a valuable\n> resource and I will also try to help you as much as I am able before you\n> feel a need to throw money at SCO.\n\nThanks a lot for the good wishes!\n We don't plan to spend much on SCO, but old UW is part\nof a Lucent Voice-over-IP solution and I need to \ndo some magic with UW to make it manageable by our support dept. \n\n\n---\nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n* There will come soft rains ...\n",
"msg_date": "Thu, 17 Jun 1999 18:47:26 +0400 (MSD)",
"msg_from": "Dmitry Samersoff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: UnixWare"
},
{
"msg_contents": "Andrew Merrill wrote:\n> It looks like you are using SCO UnixWare 2.1.2. PostgreSQL 6.4.X and 6.5.X d\n> o\n> compile and run on SCO UnixWare 7, the current version. \n> \n> I have not been able to compile PostgreSQL 6.5 (or any other version) on \n> UnixWare 2. The main problem seems to be that PostgreSQL uses alloca,\n> and I can't find the alloca.h header on my UnixWare 2 system.\n> \n> SCO has released a newer compiler, often called the Universal Development\n> Kit (UDK) which will run on UnixWare 7, UnixWare 2, and OpenServer 5, and\n> which will produce a single binary for all three platforms. PostgreSQL 6.5\n> will compile on both UnixWare 7 and OpenServer 5 using SCO's UDK compiler,\n> so I assume it would compile on UnixWare 2 using it as well. However,\n> I don't have a UnixWare 2 system with the UDK installed to test this.\n> \n> I hope this is helpful.\n> \n> Andrew Merrill\n> The Computer Classroom, Inc., a SCO Authorized Education Center\n> \n> > Does anybody use postgres on this animal?\n> >\n> > UNIX_SV its-sp 4.2MP 2.1.2 i386 x86at SCO UNIX_SVR4\n> > UX:cc: INFO: Optimizing C Compilation System (CCS) 3.0 09/26/96 (u211mr1\n> )\n> > Postgres 6.5 current CVS\n> \n\nUnder UnixWare 2.x, alloca is in the UCB library (libucb.a ?). The easiest way \nto link in alloca is to extract it from libucb.a and either link in the \nalloca.o file or put it into the libgen.a library and then link in via -lgen. \nWhy not just link in libucb.a, you ask? There are routines in libucb.a that you \ndo not want to get linked into the object in place of the regular routines \nwith the same name. I have run my UnixWare 2.x system (when I had it) with \nalloca in libgen.a without any problems.\n\nThis was the setup I used to compile Postgres 6.3.x when I had UnixWare 2.x. \n[Hmmm.... It looks like I should put together a 486 box to run Unixware 2.x \non so that I can test new versions of PostgreSQL on it].\n\n-- \n____ | Billy G. 
Allie | Domain....: [email protected]\n| /| | 7436 Hartwell | Compuserve: 76337,2061\n|-/-|----- | Dearborn, MI 48126| MSN.......: [email protected]\n|/ |LLIE | (313) 582-1540 |",
"msg_date": "Thu, 17 Jun 1999 22:55:32 -0400",
"msg_from": "\"Billy G. Allie\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: UnixWare "
},
{
"msg_contents": "On Thu, 17 Jun 1999, Billy G. Allie wrote:\n\nBilly,\n\nI have such a box (running 2.1.2 and pg 6.3.2) and linked no problem (see\nearlier post) by telling the linker to use /usr/ucblib.\n\nThomas Lockhart would like me to document this along with Dmitri S and\nAndrew...but I am underqualified (generally ;-)\n\nCare to assist?\n\nCheers,\nTom Good\n\n> Under UnixWare 2.x, alloca is in the UCB library (libucb.a ?). The easist way \n> to link in alloca is to extract it from libucb.a and either link in the \n> alloca.o file or put it into the libgen.a library and the link in via -lgen. \n> Why now just link in lubucb.a you ask? The are routines in libucb.a that you \n> do not want to get linked into the object in place of the regular routines \n> with the same name. I have ran my UnixWare 2.x system (when I had it) with \n> alloca in libgen.a with out any problems.\n> \n> This was the setup I used to compile Postgres 6.3.x when I had UnixWare 2.x. \n> [Hmmm.... It looks like I should put together a 486 box to run Unixware 2.x \n> on so that I can test new versions of PostgreSQL on it].\n> \n> -- \n> ____ | Billy G. Allie | Domain....: [email protected]\n> | /| | 7436 Hartwell | Compuserve: 76337,2061\n> |-/-|----- | Dearborn, MI 48126| MSN.......: [email protected]\n> |/ |LLIE | (313) 582-1540 | \n> \n> \n> \n\n\n------- North Richmond Community Mental Health Center -------\n\nThomas Good MIS Coordinator\nVital Signs: tomg@ { admin | q8 } .nrnet.org\n Phone: 718-354-5528 \n Fax: 718-354-5056 \n \n/* Member: Computer Professionals For Social Responsibility */ \n\n",
"msg_date": "Fri, 18 Jun 1999 07:04:55 -0400 (EDT)",
"msg_from": "Thomas Good <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: UnixWare "
},
{
"msg_contents": "\nOn 18-Jun-99 Thomas Good wrote:\n> On Thu, 17 Jun 1999, Billy G. Allie wrote:\n> \n> Billy,\n> \n> I have such a box (running 2.1.2 and pg 6.3.2) and linked no problem (see\n> earlier post) by telling the linker to use /usr/ucblib.\n\nI cleanly built 6.5 on my UW using the compiler located in /udk\nwith one small problem in plpgsql:\ncompilation of pl_scan.c failed due to undeclared\nK_ALIAS ...\n\nI moved the line\n#include \"pl_scan.c\" \nin pl_gram.c to after the constant declarations, but I don't know \nhow it can be solved permanently in gram.y\n\nI can't connect to postgres using UDS, \nprobably because its on-disk name has an additional symbol ^F\n\nroot@its-sp:~/postgresql-6.5/src/pl/plpgsql/src>ls -l /tmp/.s* \np-w--w--w- 1 postgres 0 Jun 17 22:39 /tmp/.s.PGSQL.5432^F\n\n---\nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n* There will come soft rains ...\n",
"msg_date": "Mon, 21 Jun 1999 13:20:21 +0400 (MSD)",
"msg_from": "Dmitry Samersoff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: UnixWare"
},
{
"msg_contents": "Dmitry Samersoff wrote:\n> \n> On 18-Jun-99 Thomas Good wrote:\n> > On Thu, 17 Jun 1999, Billy G. Allie wrote:\n> > \n> > Billy,\n> > \n> > I have such a box (running 2.1.2 and pg 6.3.2) and linked no problem (see\n> > earlier post) by telling the linker to use /usr/ucblib.\n> \n> I clearly build 6.5 on my UW using compiler located in /udk\n> with one fiew problem in plpgsql:\n> compilation of pl_scan.c filed due undeclared\n> K_ALIAS ...\n> \n\t[...]\n> \n> I can't connect to postgres using UDS, \n> probably, because its on disk name has additional simbol ^F\n> \n> root@its-sp:~/postgresql-6.5/src/pl/plpgsql/src>ls -l /tmp/.s* \n> p-w--w--w- 1 postgres 0 Jun 17 22:39 /tmp/.s.PGSQL.5432^F\n\nI had a similar problem on UnixWare 7.x, except mine had a control-D (most of \nthe time). The problem went away after I applied the current patches to the \nsystem (surprise, surprise).\n\nI would suggest bringing your system up to 2.1.3 with all the patches applied. \n It is a problem with Unixware. The other solution is to only use TCP/IP to \nconnect to the database, even on the local machine. You can do this with the \n-i option to postmaster.\n\nI hope this helps.\n\n> ---\n> Dmitry Samersoff, [email protected], ICQ:3161705\n> http://devnull.wplus.net\n> * There will come soft rains ...\n> \n\n-- \n____ | Billy G. Allie | Domain....: [email protected]\n| /| | 7436 Hartwell | Compuserve: 76337,2061\n|-/-|----- | Dearborn, MI 48126| MSN.......: [email protected]\n|/ |LLIE | (313) 582-1540 |",
"msg_date": "Mon, 21 Jun 1999 23:57:00 -0400",
"msg_from": "\"Billy G. Allie\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: UnixWare "
}
] |
[
{
"msg_contents": "Hello.\n\nI am going to implement a large object manager \nfor a certain small education-purposed DBMS.\n\nSo I've read the paper\n \"Large Object Support in POSTGRES\" \n - Michael Stonebraker, Michael Olson\nBut it contains nothing about the details. \nThat is, how it's implemented, what the problems are, \nespecially how it recovers when a failure occurs and \nhow it handles objects between memory and disk or \nbetween client and server.\n\nCould you explain the details, or\nrecommend some resources I should read?\n\nSorry for the unexpected mail and my poor English.\n\n\n",
"msg_date": "Thu, 17 Jun 1999 15:30:04 +0900 (KST)",
"msg_from": "Young-Woo Cho <[email protected]>",
"msg_from_op": true,
"msg_subject": "Could you help me ?"
}
] |
[
{
"msg_contents": "Actually, I think a lot of the cases where rollback to savepoint\nwould be done implicitly could be avoided by adding a fourth\nbehavior of elog.\n\nThis elog, let's e.g. call it elog(WARN,...) would actually behave\nlike an elog(NOTICE,..) in the backend, but would return ERROR\nto the client. I think at least all elogs that happen in the parser\ncould be handled like this, and a lot of the others.\nOf course the client would need an error code, but that is your \n2. item anyway :-)\n\nThe following example is IMHO not necessary, \nwith or without savepoints:\n\nregression=> begin work;\nBEGIN\nregression=> insert into t2 values (1, 'one');\nINSERT 151498 1\nregression=> blabla;\nERROR: parser: parse error at or near \"blabla\"\nregression=> commit work;\t-- actually this is currently a bug,\n \t\t\t\t\t-- it must ERROR, since only\nrollback work\nEND\t\t\t\t\t-- is allowed in txn abort state\nregression=> select * from t2;\na|b\n-+-\n(0 rows)\n\nAndreas\n",
"msg_date": "Thu, 17 Jun 1999 08:55:27 +0200",
"msg_from": "Zeugswetter Andreas IZ5 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Savepoints..."
}
] |
[
{
"msg_contents": "Someone posted this link to a Case tool:\nhttp://www.ccas.ru/~gurov/ftp/Editors/CASE/Vinsent/\n\nUnfortunately it's in Russian, which I know nothing about, and it\ndoesn't seem to have been worked upon since 1997.\n\nI don't know the strength of this tool, as I can't understand the\ndescription, but can this be the best OSS-contender with no work being\ndone for two years?\n\n",
"msg_date": "Thu, 17 Jun 1999 09:29:07 +0200",
"msg_from": "Kaare Rasmussen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Case tool"
},
{
"msg_contents": "I have contacted the author and he says he just doesn't have time\nand he'd like to pass development on to somebody. As I understand it, most\nthings work OK but need to be synchronized with new versions of\nTCL/TK, Postgres and Python. I haven't seen any free CASE tool like\nVinsent which supports Postgres.\n\n\tRegards,\n\n\t\tOleg\n\nOn Thu, 17 Jun 1999, Kaare Rasmussen wrote:\n\n> Date: Thu, 17 Jun 1999 09:29:07 +0200\n> From: Kaare Rasmussen <[email protected]>\n> To: [email protected]\n> Subject: [HACKERS] Case tool\n> \n> Someone posted this link to a Case tool:\n> http://www.ccas.ru/~gurov/ftp/Editors/CASE/Vinsent/\n> \n> Unfortunately it's in Russian, which I know nothing about, and it\n> doesn't seem to have been worked upon since 1997.\n> \n> I don't know the strength of this tool, as I can't understand the\n> description, but can this be the best OSS-contender with no work being\n> done for two years?\n> \n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Thu, 17 Jun 1999 11:43:55 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Case tool"
},
{
"msg_contents": "From: Kaare Rasmussen <[email protected]>\n> Someone posted this link to a Case tool:\n> http://www.ccas.ru/~gurov/ftp/Editors/CASE/Vinsent/\n> \n> Unfortunately it's in Russian, which I know nothing about, and it\n> doesn't seem to have been worked upon since 1997.\n> \n> I don't know the strength of this tool, as I can't understand the\n> description, but can this be the best OSS-contender with no work being\n> done for two years?\n\n******* Translation begin ***********\nThe current version allows for:\n 1 draw tables.\n including\n - restoring previous state\n - copying to the local application buffer\n - pasting from the buffer\n - editing the table field types\n - and also some more stuff\n\n 2 link fields (constraints)\n - FOREIGN KEY |--> PRIMARY KEY\n\n 3 generate SQL file from the given structure\n - for Postgres95 (w/out constraints)\n - for Informix ( with constraints)\n\n 4 there is a request broker using a DB-independent protocol\n 1 for Informix (tested with HP-UX and Informix 7.20)\n 2 for PostgreSQL \n 5 generates a visual DB representation from a DB. \n (except references (takes time))\n\nPlans:\n 3 create new DBs through the broker and fill them with tables\n 4 alter existing tables through broker\n 5 generate a CGI script according to the structure\n 6 create additional SQL generators for other DBMSs\n 7 fix the interface\n\nsend all requests & complaints to: mailto:[email protected]\n/no abuse please (literally \"please don't spit\")/\n******** translation end ***********\n\nPlease no questions to me and no \"courtesy copies\".\n\n",
"msg_date": "Thu, 17 Jun 1999 11:51:13 +0400",
"msg_from": "\"Gene Sokolov\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Case tool"
},
{
"msg_contents": "On Thu, Jun 17, 1999 at 09:29:07AM +0200, Kaare Rasmussen wrote:\n> Someone posted this link to a Case tool:\n> http://www.ccas.ru/~gurov/ftp/Editors/CASE/Vinsent/\n> \n> Unfortunately it's in Russian, which I know nothing about, and it\n> doesn't seem to have been worked upon since 1997.\n> \n> I don't know the strength of this tool, as I can't understand the\n> description, but can this be the best OSS-contender with no work being\n> done for two years?\n\n\nI've spent time looking for such tools in the Free Software community\non and off over the last couple of years. I agree that there's a gap in\nwhat's available.\n\nSometimes, I think the problem is that, although there's a number of\nacademic projects working on the general problem of graph layout, they all\nseem to want to commercialize their code, rather than contribute it to\nthe community. A consequence, I think, of how expensive good commercial\nCASE tools are: makes it easier to imagine turning your pet project into\nsome money, I suppose.\n\nPerhaps a deeper reason seems to be the suspicion among hacker types that\nCASE isn't all it's cracked up to be. Heck, there's no OSS graphical\nIDE for software development, but that hasn't stopped the development\nof some pretty large projects (the Linux kernel and PostgreSQL as two\nexamples.) A public CVS repository, text editor of your choice, and\ncommand line compilation tools (e.g. gcc driven by make) seem to be all\nthe developers need to get the work done. CASE diagraming tools seem to\nbe more critical for generating pretty pictures for management. That's\nbeen true for me, so I added simple schema diagramming to the pgaccess\ntool. This allows me to document the relationships in an existing DB,\nrather than the other way around. (Hmm, that reminds me, did I send that\nlast version off to Constantin? I better check) I will admit that the\ndiagramming has also eased collaborating with co-developers at remote\nsites. 
So, I see it as filling part of the documentation problem, rather\nthan the design problem.\n\nNow, it seems that your experience has been that DB CASE is critical for\nlarge DB projects. Perhaps these types of projects scale differently than the\ncode development projects I mentioned above. If so, the free software\ncommunity hasn't had the chance to fill that niche yet: heck, it's only\nbeen the last 12 to 18 months that PostgreSQL has matured enough in the\neyes of many to tackle really big DB implementations.\n\nI haven't had the opportunity to use commercial CASE tools (or commercial\nDBs, for that matter!) What benefits do you see in using them? It seems\nyou're incredulous that anyone could maintain a DB with more than 60\ntables without them. Why? It seems to me that maintainability of any\ncomplex system comes from a well-factored underlying design, rather\nthan from complex maintenance tools. I'd really like to hear your take\non this, but I'm pretty sure the HACKERS mailing list is the wrong one\nfor this discussion. Ah, I think the INTERFACES list looks about right.\n(I've posted there, and CCed you on this mail, since I'm not sure you're\nsubscribed there)\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Thu, 17 Jun 1999 11:14:55 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Case tool"
},
{
"msg_contents": "On Thu, 17 Jun 1999, Ross J. Reedstrom wrote:\n\n> Date: Thu, 17 Jun 1999 11:14:55 -0500\n> From: \"Ross J. Reedstrom\" <[email protected]>\n> To: [email protected]\n> Cc: Kaare Rasmussen <[email protected]>\n> Subject: [INTERFACES] Re: [HACKERS] Case tool\n> \n> On Thu, Jun 17, 1999 at 09:29:07AM +0200, Kaare Rasmussen wrote:\n> > Someone posted this link to a Case tool:\n> > http://www.ccas.ru/~gurov/ftp/Editors/CASE/Vinsent/\n> > \n> > Unfortunately it's in Russian, which I know nothing about, and it\n> > doesn't seem to have been worked upon since 1997.\n> > \n> > I don't know the strength of this tool, as I can't understand the\n> > description, but can this be the best OSS-contender with no work being\n> > done for two years?\n> \n> \n> I've spent time looking for such tools in the Free Software community\n> on and off over the last couple of years. I agree that there's a gap in\n> what's available.\n> \n> Sometimes, I think the problem is that, although there's a number of\n> academic projects working on the general problem of graph layout, they all\n> seem to want to commercialize their code, rather than contribute it to\n> the community. A consequence, I think, of how expensive good commercial\n> CASE tools are: makes it easier to imagine turning your pet project into\n> some money, I suppose.\n> \n> Perhaps a deeper reason seems to be the suspicion among hacker types that\n> CASE isn't all it's cracked up to be. Heck, there's no OSS graphical\n> IDE for software development, but that hasn't stopped the development\n> of some pretty large projects (the Linux kernel and PostgreSQL as two\n> examples.) A public CVS repository, text editor of your choice, and\n> command line compilation tools (e.g. gcc driven by make) seem to be all\n> the developers need to get the work done. CASE diagraming tools seem to\n> be more critical for generating pretty pictures for management. 
That's\n> been true for me, so I added simple schema diagramming to the pgaccess\n> tool. This allows me to document the relationships in an existing DB,\n> rather than the other way around. (Hmm, that reminds me, did I send that\n> last version off to Constantin? I better check) I will admit that the\n> diagramming has also eased collaborating with co-developers at remote\n> sites. So, I see it as filling part of the documentation problem, rather\n> than the design problem.\n\nYes, documentation is one reason I'm using a CASE tool like ERwin, but\nreverse/forward engineering is also a useful feature. It's very important\nif you're doing a joint project with many developers involved.\nI found that it is possible to configure Erwin to work with PostgreSQL\nand it really helps me. \n\nIt would be nice to see Vinsent integrated into pgaccess. But this is\na big project - Python/Tcl/Tk programmers are required. As I said, the author of \nVinsent is looking for somebody to continue his project.\n\n> \n> Now, it seems that your experience has been that DB CASE is critical for\n> large DB projects. Perhaps these types of projects scale differently the\n> code development projects I mentioned above. If so, the free software\n> community hasn't had the chance to fill that niche yet: heck, it's only\n> been the last 12 to 18 months that PostgreSQL has matured enough in the\n> eyes of many to tackle really big DB implementations.\n> \n> I haven't had the opportunity to use commercial CASE tools (or commercial\n> DB,s for that matter!) What benefits do you see in using them? It seems\n> you're incredulous that anyone could maintain a DB with more than 60\n> tables without them. Why? It seems to me that maintainability of any\n> complex system comes from a well factored underlying design, rather\n> than from complex maintenance tools. I'd really like to hear your take\n> on this, but I'm pretty sure the HACKERS mailing list is the wrong one\n> for this discussion. 
Ah, I think the INTERFACES list looks about right.\n> (I've posted there, and CCed you on this mail, since I'm not sure your\n> subscribed there)\n> \n> Ross\n> -- \n> Ross J. Reedstrom, Ph.D., <[email protected]> \n> NSBRI Research Scientist/Programmer\n> Computer and Information Technology Institute\n> Rice University, 6100 S. Main St., Houston, TX 77005\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Thu, 17 Jun 1999 20:37:10 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [INTERFACES] Re: [HACKERS] Case tool"
},
{
"msg_contents": "\"Ross J. Reedstrom\" <[email protected]> writes:\n\n> Sometimes, I think the problem is that, although there's a number of\n> academic projects working on the general problem of graph layout,\n> they all seem to want to commercialize their code, rather than\n> contribute it to the community. A consequence, I think, of how\n> expensive good commercial CASE tools are: makes it easier to imagine\n> turning your pet project into some money, I suppose.\n\nFor graph layout, there's vcg, which is GPL (but the algorithms\nthemselves are obfuscated, which sort of reduces its value).\n\nAT&T have published pretty detailed descriptions of the algorithms\nthey use in dot and whatever the other ones are called. You can\ndownload papers on this from their web site. Someone sufficiently\ninterested ought to be able to knock something sane up fairly quickly,\nI'd have thought. I think it would be a great thing to have---I'd be\nwilling to put some effort into it.\n\nI got email from Stephen North (one of the authors) in November saying\nhe was asking for permission to release graphviz as open source, which\nwould be amazingly cool---I could imagine graphical tools sweeping the\nLinux and *BSD worlds. Obviously, that hasn't happened. Maybe it\nwould be worth prodding him, just to see if there's any chance?\n",
"msg_date": "17 Jun 1999 20:19:26 +0100",
"msg_from": "Bruce Stephens <[email protected]>",
"msg_from_op": false,
"msg_subject": "Off topic-graph layout tools (was Re: [INTERFACES] Re: [HACKERS] Case\n\ttool)"
}
] |
[
{
"msg_contents": "Hi,\n\nIt seems something is wrong with rsync mirroring.\nI use \nrsync -avz --delete hub.org::postgresql-www /d4/Web/mirrors/pgsql/\nand it runs fine, but no images from the /images directory are copied.\nSo my mirror http://www.sai.msu.su:8000/ looks corrupt\n\n\tRegards,\n\n\t\tOleg\n\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Thu, 17 Jun 1999 11:39:30 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "mirroring problem (www.postgresql.org)"
},
{
"msg_contents": "On Thu, 17 Jun 1999, Oleg Bartunov wrote:\n\n> Hi,\n> \n> seems something wrong with rsync mirroring.\n> I use \n> rsync -avz --delete hub.org::postgresql-www /d4/Web/mirrors/pgsql/\n> and it does well but no images from /images directory are copied.\n> So my mirror http://www.sai.msu.su:8000/ looks corrupt\n\nAre you on the mirrors mailing list? We're trying to get all of the\nmirror admins on it to contact them easier. Anyway, I dropped your\nsite from automatic redirect while we figure out what happened. There\nare two other directories that also don't appear to be transferring:\ncss and js. I've already asked Marc to look into it - dunno if he\ncame up with anything yet. It also appears to only be happening to\nthe sites that have an extra directory in their URL; eg:\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 17 Jun 1999 06:57:37 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] mirroring problem (www.postgresql.org)"
},
{
"msg_contents": "On Thu, 17 Jun 1999, Vince Vielhaber wrote:\n\n> On Thu, 17 Jun 1999, Oleg Bartunov wrote:\n> \n> > Hi,\n> > \n> > seems something wrong with rsync mirroring.\n> > I use \n> > rsync -avz --delete hub.org::postgresql-www /d4/Web/mirrors/pgsql/\n> > and it does well but no images from /images directory are copied.\n> > So my mirror http://www.sai.msu.su:8000/ looks corrupt\n> \n> Are you on the mirrors mailing list? We're trying to get all of the\n> mirror admins on it to contact them easier. Anyway, I dropped your\n> site from automatic redirect while we figure out what happened. There\n> are two other directories that also don't appear to be transferring:\n> css and js. I've already asked Marc to look into it - dunno if he\n> came up with anything yet. It also appears to only be happening to\n> the sites that have an extra directory in their URL; eg:\n\nJust tested things out here, and it created the images directory and\nall...is rsync generating any error messages? *raised eyebrow*\n\nThere is nothing configured on hub.org to exclude the images directory...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 17 Jun 1999 09:21:59 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] mirroring problem (www.postgresql.org)"
},
{
"msg_contents": "> There is nothing configured on hub.org to exclude the \n> images directory...\n\nSorry in advance for the re/misdirection, but would CVSup be a\nsuitable alternative for this task? It is working *great* for me to\nreplicate the CVS tree, but it also handles normal directory trees. It\nincludes lots of compression optimizations and so is very fast...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 17 Jun 1999 13:54:33 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] mirroring problem (www.postgresql.org)"
},
{
"msg_contents": "On Thu, 17 Jun 1999, Thomas Lockhart wrote:\n\n> > There is nothing configured on hub.org to exclude the \n> > images directory...\n> \n> Sorry in advance for the re/misdirection, but would CVSup be a\n> suitable alternative for this task? It is working *great* for me to\n> replicate the CVS tree, but it also handles normal directory trees. It\n> includes lots of compression optimizations and so is very fast...\n\nWhat was used before rsync that was the bandwidth hog? Wasn't cvs, was\nit?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 17 Jun 1999 09:58:22 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] mirroring problem (www.postgresql.org)"
},
{
"msg_contents": "On Thu, Jun 17, 1999 at 01:54:33PM +0000, Thomas Lockhart wrote:\n> > There is nothing configured on hub.org to exclude the \n> > images directory...\n> \n> Sorry in advance for the re/misdirection, but would CVSup be a\n> suitable alternative for this task? It is working *great* for me to\n> replicate the CVS tree, but it also handles normal directory trees. It\n> includes lots of compression optimizations and so is very fast...\n\nPlease don't force us to use CVSup - you have to have Modula-3 to\ncompile it, and some organisations (mine mostly included) have to start\nfrom source for all this stuff...\n\nRegards,\n-- \nPeter Galbavy\nKnowledge Matters Ltd\nhttp://www.knowledge.com/\n",
"msg_date": "Thu, 17 Jun 1999 15:07:29 +0100",
"msg_from": "Peter Galbavy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] mirroring problem (www.postgresql.org)"
},
{
"msg_contents": "> > Sorry in advance for the re/misdirection, but would CVSup be a\n> > suitable alternative for this task? It is working *great* for me to\n> > replicate the CVS tree, but it also handles normal directory trees. It\n> > includes lots of compression optimizations and so is very fast...\n> Please don;t force us to use CVSup - you have to have Modula-3 to\n> compile it, and some organisations (mine mostly included) has to start\n> from source for all this stuff...\n\nAnother enthusiast... ;)\n\nI'm not sure what platform you are on, but there is a *very* nice\nModula-3 rpm package from Polytechnic University in Montreal for\nlinux.\n\nAnyway, for some mirrors CVSup might be a good alternative since Marc\nis already running a server. Also, CVSup has a \"mirror sync\" mode\nwhich makes it even faster; if the mirror is run as a slave server\nthen the server leaves all of the sync info in cache and does not need\nto traverse the directory tree to deduce what should be updated.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 17 Jun 1999 14:39:14 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] mirroring problem (www.postgresql.org)"
},
{
"msg_contents": "Vince,\n\nI found the problem and fixed it. The problem was that I have\nan alias /images on my main site and it propagates to the virtual site.\n\nBTW, http://www.postgresql.org is redirected to \nhttp://www.postgresql.org/postgresql.wplus.net\nThis happens with NS 4.05, win95\n\n\tRegards,\n\n\t\tOleg\nOn Thu, 17 Jun 1999, Vince Vielhaber wrote:\n\n> Date: Thu, 17 Jun 1999 06:57:37 -0400 (EDT)\n> From: Vince Vielhaber <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: [email protected]\n> Subject: Re: [HACKERS] mirroring problem (www.postgresql.org)\n> \n> On Thu, 17 Jun 1999, Oleg Bartunov wrote:\n> \n> > Hi,\n> > \n> > seems something wrong with rsync mirroring.\n> > I use \n> > rsync -avz --delete hub.org::postgresql-www /d4/Web/mirrors/pgsql/\n> > and it does well but no images from /images directory are copied.\n> > So my mirror http://www.sai.msu.su:8000/ looks corrupt\n> \n> Are you on the mirrors mailing list? We're trying to get all of the\n> mirror admins on it to contact them easier. Anyway, I dropped your\n> site from automatic redirect while we figure out what happened. There\n> are two other directories that also don't appear to be transferring:\n> css and js. I've already asked Marc to look into it - dunno if he\n> came up with anything yet. 
It also appears to only be happening to\n> the sites that have an extra directory in their URL; eg:\n> \n> Vince.\n> -- \n> ==========================================================================\n> Vince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n> # include <std/disclaimers.h> TEAM-OS2\n> Online Campground Directory http://www.camping-usa.com\n> Online Giftshop Superstore http://www.cloudninegifts.com\n> ==========================================================================\n> \n> \n> \n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Thu, 17 Jun 1999 20:12:12 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "SOLVED: Re: [HACKERS] mirroring problem (www.postgresql.org)"
},
{
"msg_contents": "On Thu, 17 Jun 1999, Oleg Bartunov wrote:\n\n> Vince,\n> \n> I found the problem and fixed it. The problem was because I have\n> alias /images in my main site and it propagates to virtual site.\n> \n> btw, http://www.postgresql.org redirected to \n> http://www.postgresql.org/postgresql.wplus.net\n> This happens with NS 4.05, win95\n\nShould be back in business now. msu.su has been added back into the\nmirror list and the above was a typo that's been corrected.\n\nThanks!\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 17 Jun 1999 12:21:02 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SOLVED: Re: [HACKERS] mirroring problem (www.postgresql.org)"
},
{
"msg_contents": "On Thu, 17 Jun 1999, Thomas Lockhart wrote:\n\n> > There is nothing configured on hub.org to exclude the \n> > images directory...\n> \n> Sorry in advance for the re/misdirection, but would CVSup be a\n> suitable alternative for this task? It is working *great* for me to\n> replicate the CVS tree, but it also handles normal directory trees. It\n> includes lots of compression optimizations and so is very fast...\n\nRsync basically does the same thing, but cleaner...and doesn't require\nModula-3 to install. \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 17 Jun 1999 13:46:28 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] mirroring problem (www.postgresql.org)"
},
{
"msg_contents": "On Thu, 17 Jun 1999, Vince Vielhaber wrote:\n\n> On Thu, 17 Jun 1999, Thomas Lockhart wrote:\n> \n> > > There is nothing configured on hub.org to exclude the \n> > > images directory...\n> > \n> > Sorry in advance for the re/misdirection, but would CVSup be a\n> > suitable alternative for this task? It is working *great* for me to\n> > replicate the CVS tree, but it also handles normal directory trees. It\n> > includes lots of compression optimizations and so is very fast...\n> \n> What was used before rsync that was the bandwidth hog? Wasn't cvs, was\n> it?\n\nNope, ftp/mirror, except that ftp/mirror requires the client to do their\nconfiguration cleanly, whereas with rsync I can setup our end to exclude\nstuff (ie. ht/Dig databases)\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 17 Jun 1999 13:47:19 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] mirroring problem (www.postgresql.org)"
}
] |
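Marc's point in this thread — that with rsync the exclusions can live on the server side, so mirror clients need no special configuration — can be sketched as a daemon-side module definition. The module name matches the `hub.org::postgresql-www` source Oleg pulls from, but the path and exclude patterns below are hypothetical examples, not the actual hub.org configuration:

```ini
# Sketch of a server-side rsyncd.conf module, as Marc describes:
# mirrors pull "postgresql-www" and the exclusions are defined on
# hub.org, so clients need no exclude rules of their own.
# (path and exclude values are hypothetical examples)
[postgresql-www]
    path = /usr/local/www/postgresql
    comment = PostgreSQL web site tree
    read only = yes
    # keep the ht://Dig search databases out of the mirror stream
    exclude = htdig/db/
```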
[
{
"msg_contents": "Hi all,\n\nI couldn't create a table which has a primary key on numeric type.\n\ncreate table t (id numeric(7,2) primary key);\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index\n't_pkey' for table 't'\nERROR: Can't find a default operator class for type 1700. \n\nHow can I create an index on numeric type ? \n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Thu, 17 Jun 1999 18:30:03 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "creating an index on numeric type"
},
{
"msg_contents": "Added to TODO:\n\n\t* Add index on NUMERIC type \n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 7 Jul 1999 20:32:46 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] creating an index on numeric type"
}
] |
[
{
    "msg_contents": "it gets built\nI can initdb\nI can createdb, but not destroydb\n\na lot of \"typidTypeRelid\" errors\n\nI'm in a hurry right now, should I tell anyone else? post bug report?\n\n\n",
"msg_date": "Thu, 17 Jun 1999 11:33:13 +0200",
"msg_from": "gravity <[email protected]>",
"msg_from_op": true,
"msg_subject": "(don't know who else to tell) 6.5 gets build on LinuxPPCR5 but\n\tfails a lot of regr. tests"
},
{
"msg_contents": ">it gets build \n>I can initdb\n>I can createdb, but not destroydb\n>\n>a lot of \"typidTypeRelid\"errors\n>\n>I'm in a hurry right now, should I tell anyone else? post bug report?\n\nIt's a known problem with LinuxPPC R5 + PostgreSQL. Try re-compile\nalong with disabling -O2 flag.\n--\nTatsuo Ishii\n",
"msg_date": "Thu, 17 Jun 1999 18:49:29 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] (don't know who else to tell) 6.5 gets build on\n\tLinuxPPCR5 but fails a lot of regr. tests"
},
{
    "msg_contents": "> >it gets build \n> >I can initdb\n> >I can createdb, but not destroydb\n> >\n> >a lot of \"typidTypeRelid\"errors\n> >\n> >I'm in a hurry right now, should I tell anyone else? post bug report?\n> \n> It's a known problem with LinuxPPC R5 + PostgreSQL. Try re-compile\n> along with disabling -O2 flag.\n\nI didn't realize our template only changed -O2 to -O for linux_alpha. \nAdded for linux_ppc too.\n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]            |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 17 Jun 1999 09:44:46 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] (don't know who else to tell) 6.5 gets build on\n\tLinuxPPCR5 but fails a lot of regr. tests"
},
{
"msg_contents": ">> >it gets build \n>> >I can initdb\n>> >I can createdb, but not destroydb\n>> >\n>> >a lot of \"typidTypeRelid\"errors\n>> >\n>> >I'm in a hurry right now, should I tell anyone else? post bug report?\n>> \n>> It's a known problem with LinuxPPC R5 + PostgreSQL. Try re-compile\n>> along with disabling -O2 flag.\n>\n>I did't realize our template only changed -O2 to -O for linux_alpha. \n>Added for linux_ppc too.\n\nDon't be hurry:-) Disabling -O2 is just my guess(I don't have R5\nyet). I think his problem is related to the one reported by the\nLinuxPPC development team. If this is the case, -O is not enough, -O0\nshould be used instead. Also note that the problem would not occur for \nLinuxPPC R4(I guess this is due to the difference of compilers).\nAnyway, true fix would be as suggested in the mail (can't be fixed\ntill 6.6?).\n--\nTatsuo Ishii\n\n--------------------------------------------------------------------\nDate: Fri, 14 May 1999 14:50:58 -0400\nFrom: Jack Howarth <[email protected]>\nTo: [email protected]\nSubject: postgresql bug report\n\nMarc,\n In porting the RedHat 6.0 srpm set for a linuxppc release we\nbelieve a bug has been identified in\nthe postgresql source for 6.5-0.beta1. Our development tools are as\nfollows...\n\nglibc 2.1.1 pre 2\nlinux 2.2.6\negcs 1.1.2\nthe latest binutils snapshot\n\nThe bug that we see is that when egcs compiles postgresql at -O1 or\nhigher (-O0 is fine),\npostgresql creates incorrectly formed databases such that when the\nuser\ndoes a destroydb\nthe database can not be destroyed. Franz Sirl has identified the\nproblem\nas follows...\n\n it seems that this problem is a type casting/promotion bug in the\nsource. The\n routine _bt_checkkeys() in backend/access/nbtree/nbtutils.c calls\nint2eq() in\n backend/utils/adt/int.c via a function pointer\n*fmgr_faddr(&key[0].sk_func). 
As\n the type information for int2eq is lost via the function pointer,\nthe compiler\n passes 2 ints, but int2eq expects 2 (preformatted in a 32bit reg)\nint16's.\n This particular bug goes away, if I for example change int2eq to:\n\n bool\n int2eq(int32 arg1, int32 arg2)\n {\n return (int16)arg1 == (int16)arg2;\n }\n\n This moves away the type casting/promotion \"work\" from caller to\nthe\ncallee and\n is probably the right thing to do for functions used via function\npointers.\n\n...because of the large number of changes required to do this, Franz\nthought we should\npass this on to the postgresql maintainers for correction. Please feel\nfree to contact\nFranz Sirl ([email protected]) if you have any\nquestions\non this bug\nreport.\n\n--\n------------------------------------------------------------------------------\nJack W. Howarth, Ph.D. 231\nBethesda Avenue\nNMR Facility Director Cincinnati, Ohio\n45267-0524\nDept. of Molecular Genetics phone: (513)\n558-4420\nUniv. of Cincinnati College of Medicine fax: (513)\n558-8474\n",
"msg_date": "Thu, 17 Jun 1999 23:58:40 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] (don't know who else to tell) 6.5 gets build on\n\tLinuxPPCR5 but fails a lot of regr. tests"
},
{
    "msg_contents": "Someone please let me know if -O0 or -O takes care of the problem.\n\n\n> >> >it gets build \n> >> >I can initdb\n> >> >I can createdb, but not destroydb\n> >> >\n> >> >a lot of \"typidTypeRelid\"errors\n> >> >\n> >> >I'm in a hurry right now, should I tell anyone else? post bug report?\n> >> \n> >> It's a known problem with LinuxPPC R5 + PostgreSQL. Try re-compile\n> >> along with disabling -O2 flag.\n> >\n> >I did't realize our template only changed -O2 to -O for linux_alpha. \n> >Added for linux_ppc too.\n> \n> Don't be hurry:-) Disabling -O2 is just my guess(I don't have R5\n> yet). I think his problem is related to the one reported by the\n> LinuxPPC development team. If this is the case, -O is not enough, -O0\n> should be used instead. Also note that the problem would not occur for \n> LinuxPPC R4(I guess this is due to the difference of compilers).\n> Anyway, true fix would be as suggested in the mail (can't be fixed\n> till 6.6?).\n> --\n> Tatsuo Ishii\n> \n> --------------------------------------------------------------------\n> Date: Fri, 14 May 1999 14:50:58 -0400\n> From: Jack Howarth <[email protected]>\n> To: [email protected]\n> Subject: postgresql bug report\n> \n> Marc,\n>     In porting the RedHat 6.0 srpm set for a linuxppc release we\n> believe a bug has been identified in\n> the postgresql source for 6.5-0.beta1. Our development tools are as\n> follows...\n> \n> glibc 2.1.1 pre 2\n> linux 2.2.6\n> egcs 1.1.2\n> the latest binutils snapshot\n> \n> The bug that we see is that when egcs compiles postgresql at -O1 or\n> higher (-O0 is fine),\n> postgresql creates incorrectly formed databases such that when the\n> user\n> does a destroydb\n> the database can not be destroyed. Franz Sirl has identified the\n> problem\n> as follows...\n> \n>    it seems that this problem is a type casting/promotion bug in the\n> source. 
The\n> routine _bt_checkkeys() in backend/access/nbtree/nbtutils.c calls\n> int2eq() in\n> backend/utils/adt/int.c via a function pointer\n> *fmgr_faddr(&key[0].sk_func). As\n> the type information for int2eq is lost via the function pointer,\n> the compiler\n> passes 2 ints, but int2eq expects 2 (preformatted in a 32bit reg)\n> int16's.\n> This particular bug goes away, if I for example change int2eq to:\n> \n> bool\n> int2eq(int32 arg1, int32 arg2)\n> {\n> return (int16)arg1 == (int16)arg2;\n> }\n> \n> This moves away the type casting/promotion \"work\" from caller to\n> the\n> callee and\n> is probably the right thing to do for functions used via function\n> pointers.\n> \n> ...because of the large number of changes required to do this, Franz\n> thought we should\n> pass this on to the postgresql maintainers for correction. Please feel\n> free to contact\n> Franz Sirl ([email protected]) if you have any\n> questions\n> on this bug\n> report.\n> \n> --\n> ------------------------------------------------------------------------------\n> Jack W. Howarth, Ph.D. 231\n> Bethesda Avenue\n> NMR Facility Director Cincinnati, Ohio\n> 45267-0524\n> Dept. of Molecular Genetics phone: (513)\n> 558-4420\n> Univ. of Cincinnati College of Medicine fax: (513)\n> 558-8474\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 17 Jun 1999 11:00:02 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] (don't know who else to tell) 6.5 gets build on\n\tLinuxPPCR5 but fails a lot of regr. tests"
},
{
"msg_contents": "At 11:00 17-6-99 -0400, Bruce Momjian wrote:\n>Someone please let me know of -O0 or -O take care of the problem.\n\n-O0 is good\n\n-O is NOT good\n( and just to make sure -O1 is NOT good either )\n\n\n",
"msg_date": "Fri, 18 Jun 1999 01:01:17 +0200",
"msg_from": "gravity <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] (don't know who else to tell) 6.5 gets build on\n\tLinuxPPCR5 but fails a lot of regr. tests"
},
{
"msg_contents": "> At 11:00 17-6-99 -0400, Bruce Momjian wrote:\n> >Someone please let me know of -O0 or -O take care of the problem.\n> \n> -O0 is good\n> \n> -O is NOT good\n> ( and just to make sure -O1 is NOT good either )\n> \n> \n> \n\nOK, should I change the template for linux_ppc to -O0?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 17 Jun 1999 19:15:18 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] (don't know who else to tell) 6.5 gets build on\n\tLinuxPPCR5 but fails a lot of regr. tests"
},
{
    "msg_contents": "At 19:15 17-6-99 -0400, Bruce Momjian wrote:\n>> At 11:00 17-6-99 -0400, Bruce Momjian wrote:\n>> >Someone please let me know of -O0 or -O take care of the problem.\n>> \n>> -O0 is good\n>> \n>> -O is NOT good\n>> ( and just to make sure -O1 is NOT good either )\n>\n>OK, should I change the template for linux_ppc to -O0?\n\nI'm in way over my head here, don't know anything about C, don't know the\nsource code of postgres, so don't listen to me.\n( I just thought last night to try and see if I could get LinuxPPCR5 to run\non my Motorola Starmax and when that was done I thought to try and build\npostgres on it, just for fun)\n\nhow bad is it that -O2 will not work? LinuxPPCR5 probably is not one of the\nmain platforms postgres is running on.\nIf not being able to compile with -O2 is really bad for performance, a note in\nthe INSTALL would be in order to let people know that running on LinuxPPCR5\nis not going to be a fast ride, and that the postgres dev team is aware of\nthe problem and that it is being worked on :)\n\nand yes change the template for linux_ppc to -O0\n\n",
"msg_date": "Fri, 18 Jun 1999 01:46:49 +0200",
"msg_from": "gravity <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] (don't know who else to tell) 6.5 gets build on\n\tLinuxPPCR5 but fails a lot of regr. tests"
},
{
"msg_contents": "> At 19:15 17-6-99 -0400, Bruce Momjian wrote:\n> >> At 11:00 17-6-99 -0400, Bruce Momjian wrote:\n> >> >Someone please let me know of -O0 or -O take care of the problem.\n> >> \n> >> -O0 is good\n> >> \n> >> -O is NOT good\n> >> ( and just to make sure -O1 is NOT good either )\n> >\n> >OK, should I change the template for linux_ppc to -O0?\n> \n> I'm in way over my head here, don't know anything about C, don't know the\n> source code of postgres, so don't listen to me.\n> ( I just thought last night to try and see if I could get LinuxPPCR5 to run\n> on my Motorola Starmax and when that was done I thought to try and build\n> postgres on it, just for fun)\n> \n> how bad is it that -O2 will not work? LinuxPPCR5 probably is not one of the\n> main platforms postgres is running on.\n> If not being able to -O2 the compile is really bad for perfomance a note in\n> the INSTALL would be in order to let people know that running on LinuxPPCR5\n> is not going to be a fast ride, and that the postgres dev team is aware of\n> the problem and that is being worked on :)\n> \n> and yes change the template for linux_ppc to -O0\n> \n> \n\nDone.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 17 Jun 1999 20:45:39 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] (don't know who else to tell) 6.5 gets build on\n\tLinuxPPCR5 but fails a lot of regr. tests"
},
{
"msg_contents": "> OK, should I change the template for linux_ppc to -O0?\n\nNot all linux_ppc box is suffered by the problem actually, so it might\nbe over kill. However, it should definitely stop complains from\nLinuxPPC R5 users, and I have to admit it seems the best solution for\na short term.\n\nBut for the long term, we have to repair our codes. See the posting\nfrom you below.\n\nP.S.\tI don't see your addition to the TODO in the 6.5 source tree.\n--\nTatsuo ishii\n\nTo: Tom Lane <[email protected]>\nDate: Sat, 15 May 1999 05:10:51 -0400 (EDT)\nCC: The Hermit Hacker <[email protected]>, [email protected],\n Jack Howarth <[email protected]>\nX-Mailer: ELM [version 2.4ME+ PL56 (25)]\nMIME-Version: 1.0\nContent-Type: text/plain; charset=US-ASCII\nContent-Transfer-Encoding: 7bit\nSender: [email protected]\nPrecedence: bulk\nX-UIDL: bf5d0cf38a9d14744994d06f92566c16\n\n> The Hermit Hacker <[email protected]> writes:\n> > it seems that this problem is a type casting/promotion bug in the\n> > source. The\n> > routine _bt_checkkeys() in backend/access/nbtree/nbtutils.c calls\n> > int2eq() in\n> > backend/utils/adt/int.c via a function pointer\n> > *fmgr_faddr(&key[0].sk_func). As\n> > the type information for int2eq is lost via the function pointer,\n> > the compiler\n> > passes 2 ints, but int2eq expects 2 (preformatted in a 32bit reg)\n> > int16's.\n> > This particular bug goes away, if I for example change int2eq to:\n> \n> > bool\n> > int2eq(int32 arg1, int32 arg2)\n> > {\n> > return (int16)arg1 == (int16)arg2;\n> > }\n> \n> Yow. I can't believe that we haven't seen this failure before on a\n> variety of platforms. 
Calling an ANSI-style function that has char or\n> short args is undefined behavior if you call it without benefit of a\n> prototype, because the parameter layout is allowed to be different.\n> Apparently, fewer compilers exploit that freedom than I would've thought.\n> \n> Really, *all* of the builtin-function routines ought to take arguments\n> of type Datum and then do the appropriate Get() macro to extract what\n> they want from 'em. That's a depressingly large amount of work, but\n> at the very least the functions that take bool and int16 have to be\n> changed...\n\nI concur in your Yow. Lots of changes, and I am surprised we have not\nbeen bitten by this before. Added to TODO:\n\n\tFix function pointer calls to take Datum args for char and int2 args\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Fri, 18 Jun 1999 10:02:47 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] (don't know who else to tell) 6.5 gets build on\n\tLinuxPPCR5 but fails a lot of regr. tests"
},
{
"msg_contents": "> > OK, should I change the template for linux_ppc to -O0?\n> \n> Not all linux_ppc box is suffered by the problem actually, so it might\n> be over kill. However, it should definitely stop complains from\n> LinuxPPC R5 users, and I have to admit it seems the best solution for\n> a short term.\n> \n> But for the long term, we have to repair our codes. See the posting\n> from you below.\n> \n> P.S.\tI don't see your addition to the TODO in the 6.5 source tree.\n\nAdded:\n\n * Fix C optimizer problem where fmgr_ptr calls return different types\n\nI think I removed it because we didn't think it was a problem at one\npoint. Now we know it is. Good target for 6.6.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 17 Jun 1999 21:21:21 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] (don't know who else to tell) 6.5 gets build on\n\tLinuxPPCR5 but fails a lot of regr. tests"
},
{
"msg_contents": "> how bad is it that -O2 will not work? LinuxPPCR5 probably is not one of the\n> main platforms postgres is running on.\n> If not being able to -O2 the compile is really bad for perfomance a note in\n> the INSTALL would be in order to let people know that running on LinuxPPCR5\n> is not going to be a fast ride, and that the postgres dev team is aware of\n> the problem and that is being worked on :)\n\nMy vague recollection is that for other platforms (Alpha, i686) -O2 vs\n-O0 is a 30% kind of improvement on typical code (I've not measured\nthis for Postgres). Of course, some sample code which is dominated by\ntight loops with unfortunate style might show much bigger improvement,\nbut to say the least Postgres probably isn't in that category.\n\nSo it really isn't *that* big a deal until you get to large DBs or\nlarge loading.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 18 Jun 1999 02:09:57 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] (don't know who else to tell) 6.5 gets build on\n\tLinuxPPCR5 but fails a lot of regr. tests"
}
] |
[
{
"msg_contents": "Config.h has this. Does this need to be updated because we can't vacuum\nmulti-segment relations? I have changed it to 7F000000:\n\n/*\n * RELSEG_SIZE is the maximum number of blocks allowed in one disk file.\n * Thus, the maximum size of a single file is RELSEG_SIZE * BLCKSZ;\n * relations bigger than that are divided into multiple files.\n *\n * CAUTION: RELSEG_SIZE * BLCKSZ must be less than your OS' limit on file\n * size. This is typically 2Gb or 4Gb in a 32-bit operating system. By\n * default, we make the limit 1Gb to avoid any possible integer-overflow\n * problems within the OS. A limit smaller than necessary only means we\n * divide a large relation into more chunks than necessary, so it seems\n * best to err in the direction of a small limit. (Besides, a power-of-2\n * value saves a few cycles in md.c.)\n *\n * CAUTION: you had best do an initdb if you change either BLCKSZ or\n * RELSEG_SIZE.\n */\n#define RELSEG_SIZE (0x40000000 / BLCKSZ)\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 17 Jun 1999 10:29:55 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "tables > 1 gig"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Config.h has this. Does this need to be updated because we can't vacuum\n> multi-segment relations? I have changed it to 7F000000:\n\nWhy? I thought we'd fixed the mdtruncate issue.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 17 Jun 1999 10:47:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] tables > 1 gig "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Config.h has this. Does this need to be updated because we can't vacuum\n> > multi-segment relations? I have changed it to 7F000000:\n> \n> Why? I thought we'd fixed the mdtruncate issue.\n> \n> \t\t\tregards, tom lane\n> \n\nI am told we did not by Hiroshi. It was news to me too.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 17 Jun 1999 10:51:44 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] tables > 1 gig"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>>>> Config.h has this. Does this need to be updated because we can't vacuum\n>>>> multi-segment relations? I have changed it to 7F000000:\n>> \n>> Why? I thought we'd fixed the mdtruncate issue.\n\n> I am told we did not by Hiroshi. It was news to me too.\n\nThen we'd better fix the underlying problem. We can't change\nRELSEG_SIZE for a minor release, unless you want to give up the\nprinciple of not forcing initdb at minor releases.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 17 Jun 1999 11:05:18 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] tables > 1 gig "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >>>> Config.h has this. Does this need to be updated because we can't vacuum\n> >>>> multi-segment relations? I have changed it to 7F000000:\n> >> \n> >> Why? I thought we'd fixed the mdtruncate issue.\n> \n> > I am told we did not by Hiroshi. It was news to me too.\n> \n> Then we'd better fix the underlying problem. We can't change\n> RELSEG_SIZE for a minor release, unless you want to give up the\n> principle of not forcing initdb at minor releases.\n\nWhy can't we increase it?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 17 Jun 1999 11:06:28 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] tables > 1 gig"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Then we'd better fix the underlying problem. We can't change\n>> RELSEG_SIZE for a minor release, unless you want to give up the\n>> principle of not forcing initdb at minor releases.\n\n> Why can't we increase it?\n\nConsider a 1.5-gig table. 6.5 will store it as one gig in file \"table\",\none-half gig in file \"table.1\". Now recompile with larger RELSEG_SIZE.\nThe file manager will now expect to find all blocks of the relation in\nfile \"table\", and will never go to \"table.1\" at all. Presto, you lost\na bunch of data.\n\nBottom line is just as it says in the config.h comments: you can't\nchange either BLCKSZ or RELSEG_SIZE without doing initdb.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 17 Jun 1999 11:10:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] tables > 1 gig "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> Then we'd better fix the underlying problem. We can't change\n> >> RELSEG_SIZE for a minor release, unless you want to give up the\n> >> principle of not forcing initdb at minor releases.\n> \n> > Why can't we increase it?\n> \n> Consider a 1.5-gig table. 6.5 will store it as one gig in file \"table\",\n> one-half gig in file \"table.1\". Now recompile with larger RELSEG_SIZE.\n> The file manager will now expect to find all blocks of the relation in\n> file \"table\", and will never go to \"table.1\" at all. Presto, you lost\n> a bunch of data.\n> \n> Bottom line is just as it says in the config.h comments: you can't\n> change either BLCKSZ or RELSEG_SIZE without doing initdb.\n\nOK. I will reverse it out. I never thought that far ahead. Not sure\nhow we can fix this easily, nor do I understand why more people aren't\ncomplaining about not being able to vacuum tables that are 1.5 gigs that\nthey used to be able to vacuum.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 17 Jun 1999 11:13:22 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] tables > 1 gig"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> ... nor do I understand why more people aren't\n> complaining about not being able to vacuum tables that are 1.5 gigs that\n> they used to be able to vacuum.\n\nMost likely, not very many people with tables that big have adopted 6.5\nyet ... if I were running a big site, I'd probably wait for 6.5.1 on\ngeneral principles ;-)\n\nI think what we ought to do is finish working out how to make mdtruncate\nsafe for concurrent backends, and then do it. That's the right\nlong-term answer anyway.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 17 Jun 1999 11:22:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] tables > 1 gig "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > ... nor do I understand why more people aren't\n> > complaining about not being able to vacuum tables that are 1.5 gigs that\n> > they used to be able to vacuum.\n> \n> Most likely, not very many people with tables that big have adopted 6.5\n> yet ... if I were running a big site, I'd probably wait for 6.5.1 on\n> general principles ;-)\n> \n> I think what we ought to do is finish working out how to make mdtruncate\n> safe for concurrent backends, and then do it. That's the right\n> long-term answer anyway.\n\nProblem is, no one knows how right now. I liked unlinking every\nsegment, but was told by Hiroshi that causes a problem with concurrent\naccess and vacuum because the old backends still think it is there.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 17 Jun 1999 11:24:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] tables > 1 gig"
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >>>> Config.h has this. Does this need to be updated because we can't vacuum\n> >>>> multi-segment relations? I have changed it to 7F000000:\n> >> \n> >> Why? I thought we'd fixed the mdtruncate issue.\n> \n> > I am told we did not by Hiroshi. It was news to me too.\n> \n> Then we'd better fix the underlying problem. We can't change\n> RELSEG_SIZE for a minor release, unless you want to give up the\n> principle of not forcing initdb at minor releases.\n\nNo initdb for minor releases!\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 17 Jun 1999 11:30:31 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] tables > 1 gig"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> I think what we ought to do is finish working out how to make mdtruncate\n>> safe for concurrent backends, and then do it. That's the right\n>> long-term answer anyway.\n\n> Problem is, no one knows how right now. I liked unlinking every\n> segment, but was told by Hiroshi that causes a problem with concurrent\n> access and vacuum because the old backends still think it is there.\n\nI haven't been paying much attention, but I imagine that what's really\ngoing on here is that once vacuum has collected all the still-good\ntuples at the front of the relation, it doesn't bother to go through\nthe remaining blocks of the relation and mark everything dead therein?\nIt just truncates the file after the last block that it put tuples into,\nright?\n\nIf this procedure works correctly for vacuuming a simple one-segment\ntable, then it would seem that truncation of all the later segments to\nzero length should work correctly.\n\nYou could truncate to zero length *and* then unlink the files if you\nhad a mind to do that, but I can see why unlink without truncate would\nnot work reliably.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 17 Jun 1999 11:53:49 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] tables > 1 gig "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> I think what we ought to do is finish working out how to make mdtruncate\n> >> safe for concurrent backends, and then do it. That's the right\n> >> long-term answer anyway.\n> \n> > Problem is, no one knows how right now. I liked unlinking every\n> > segment, but was told by Hiroshi that causes a problem with concurrent\n> > access and vacuum because the old backends still think it is there.\n> \n> I haven't been paying much attention, but I imagine that what's really\n> going on here is that once vacuum has collected all the still-good\n> tuples at the front of the relation, it doesn't bother to go through\n> the remaining blocks of the relation and mark everything dead therein?\n> It just truncates the file after the last block that it put tuples into,\n> right?\n> \n> If this procedure works correctly for vacuuming a simple one-segment\n> table, then it would seem that truncation of all the later segments to\n> zero length should work correctly.\n> \n> You could truncate to zero length *and* then unlink the files if you\n> had a mind to do that, but I can see why unlink without truncate would\n> not work reliably.\n\nThat seems like the issue. The more complex problem is that when the\nrelation loses a segment via vacuum, things go strange on the other\nbackends. Hiroshi seems to have a good testbed for this, and I thought\nit was fixed, so I didn't notice.\n\nUnlinking allows other backends to keep their open segments of the\ntables, but that causes some problems with backends opening segments\nthey think still exist and they can't be opened.\n\nTruncating segments causes problems because backends are still accessing\ntheir own copies of the tables, and truncate modifies what is seen in\ntheir open file descriptors.\n\nWe basically have two methods, and both have problems under certain\ncircumstances. I wonder if we unlink the files, but then create\nzero-length segments for the ones we unlink. If people think that may\nfix the problems, it is easy to do that, and we can do it atomically\nusing the rename() system call. Create the zero-length file under a\ntemp name, then rename it to the segment file name. That may do the\ntrick of allowing existing file descriptors to stay active, while having\nsegments in place for those that need to see them.\n\nComments?\n \n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 17 Jun 1999 12:03:37 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] tables > 1 gig"
},
{
"msg_contents": "> I haven't been paying much attention, but I imagine that what's really\n> going on here is that once vacuum has collected all the still-good\n> tuples at the front of the relation, it doesn't bother to go through\n> the remaining blocks of the relation and mark everything dead therein?\n> It just truncates the file after the last block that it put tuples into,\n> right?\n> \n> If this procedure works correctly for vacuuming a simple one-segment\n> table, then it would seem that truncation of all the later segments to\n> zero length should work correctly.\n\nNot sure about that. When we truncate a single-segment file, the table is\nbeing destroyed, so we invalidate it in the catalog cache and tell other\nbackends. Also, we have a problem with DROP TABLE in a transaction\nwhile others are using it as described by a bug report a few days ago,\nso I don't think we have that 100% either.\n\n> You could truncate to zero length *and* then unlink the files if you\n> had a mind to do that, but I can see why unlink without truncate would\n> not work reliably.\n\nThat is interesting. I never thought of that. Hiroshi, can you test\nthat idea? If it is the non-existence of the file that other backends\nare checking for, my earlier idea of rename() with truncated file kept\nin place may be better.\n\nAlso, I see why we are not getting more bug reports. They only get this\nwhen the table loses a segment, so it is OK to vacuum large tables as\nlong as the table doesn't lose a segment during the vacuum.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 17 Jun 1999 12:21:15 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] tables > 1 gig"
},
{
"msg_contents": "> > I haven't been paying much attention, but I imagine that what's really\n> > going on here is that once vacuum has collected all the still-good\n> > tuples at the front of the relation, it doesn't bother to go through\n> > the remaining blocks of the relation and mark everything dead therein?\n> > It just truncates the file after the last block that it put tuples into,\n> > right?\n> > \n> > If this procedure works correctly for vacuuming a simple one-segment\n> > table, then it would seem that truncation of all the later segments to\n> > zero length should work correctly.\n> \n> Not sure about that. When we truncate single segment file, the table is\n> being destroyed, so we invalidate it in the catalog cache and tell other\n> backends. Also, we have a problem with DROP TABLE in a transaction\n> while others are using it as described by a bug report a few days ago,\n> so I don't think we have that 100% either.\n> \n> That is interesting. I never thought of that. Hiroshi, can you test\n> that idea? If it is the non-existance of the file that other backends\n> are checking for, my earlier idea of rename() with truncated file kept\n> in place may be better.\n> \n> Also, I see why we are not getting more bug reports. They only get this\n> when the table looses a segment, so it is OK to vacuum large tables as\n> long as the table doesn't loose a segment during the vacuum.\n\nOK, this is 100% wrong. We truncate from vacuum any time the table size\nchanges, and vacuum of large tables will fail even if not removing a\nsegment. I forgot vacuum does this to reduce disk table size.\n\nI wonder if truncating a file to reduce its size will cause other table\nreaders to have problems. 
I thought vacuum had an exclusive lock on the\ntable during vacuum, and if so, why are other backends having troubles?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 17 Jun 1999 12:59:30 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] tables > 1 gig"
},
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> Sent: Friday, June 18, 1999 12:54 AM\n> To: Bruce Momjian\n> Cc: PostgreSQL-development; [email protected]\n> Subject: Re: [HACKERS] tables > 1 gig \n> \n> \n> Bruce Momjian <[email protected]> writes:\n> >> I think what we ought to do is finish working out how to make \n> mdtruncate\n> >> safe for concurrent backends, and then do it. That's the right\n> >> long-term answer anyway.\n> \n> > Problem is, no one knows how right now. I liked unlinking every\n> > segment, but was told by Hiroshi that causes a problem with concurrent\n> > access and vacuum because the old backends still think it is there.\n> \n> I haven't been paying much attention, but I imagine that what's really\n> going on here is that once vacuum has collected all the still-good\n> tuples at the front of the relation, it doesn't bother to go through\n> the remaining blocks of the relation and mark everything dead therein?\n> It just truncates the file after the last block that it put tuples into,\n> right?\n> \n> If this procedure works correctly for vacuuming a simple one-segment\n> table, then it would seem that truncation of all the later segments to\n> zero length should work correctly.\n> \n> You could truncate to zero length *and* then unlink the files if you\n> had a mind to do that, but I can see why unlink without truncate would\n> not work reliably.\n>\n\nUnlinking unused segments after truncating to zero length may cause \nthe result such as \n\n Existent backends write to the truncated file to extend the relation\n while new backends create a new segment file to extend the relation. \n\nComments ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Fri, 18 Jun 1999 10:51:38 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] tables > 1 gig "
},
{
"msg_contents": ">\n> > > I haven't been paying much attention, but I imagine that what's really\n> > > going on here is that once vacuum has collected all the still-good\n> > > tuples at the front of the relation, it doesn't bother to go through\n> > > the remaining blocks of the relation and mark everything dead therein?\n> > > It just truncates the file after the last block that it put\n> tuples into,\n> > > right?\n> > >\n> > > If this procedure works correctly for vacuuming a simple one-segment\n> > > table, then it would seem that truncation of all the later segments to\n> > > zero length should work correctly.\n> >\n> > Not sure about that. When we truncate single segment file, the table is\n> > being destroyed, so we invalidate it in the catalog cache and tell other\n> > backends. Also, we have a problem with DROP TABLE in a transaction\n> > while others are using it as described by a bug report a few days ago,\n> > so I don't think we have that 100% either.\n> >\n\nThe problem is that (virtual) file descriptors,relcache entries ... etc\nare local to each process. I don't know the certain way to tell other\nprocesses just in time that target resources should be invalidated.\n\n> > That is interesting. I never thought of that. Hiroshi, can you test\n> > that idea? If it is the non-existance of the file that other backends\n> > are checking for, my earlier idea of rename() with truncated file kept\n> > in place may be better.\n> >\n> > Also, I see why we are not getting more bug reports. They only get this\n> > when the table looses a segment, so it is OK to vacuum large tables as\n> > long as the table doesn't loose a segment during the vacuum.\n>\n> OK, this is 100% wrong. We truncate from vacuum any time the table size\n> changes, and vacuum of large tables will fail even if not removing a\n> segment. 
I forgot vacuum does this to reduce disk table size.\n>\n> I wonder if truncating a file to reduce its size will cause other table\n> readers to have problems.\n\nCurrent implementation has a hidden bug.\nOnce the size of a segment reached RELSEG_SIZE,mdnblocks()\nwouldn't check the real size of the segment any more.\n\nI'm not sure such other bugs don't exist any more.\nIt's one of the reasons why I don't recommend to apply my trial patch\nto mdtruncate().\n\n> I though vacuum had an exlusive lock on the\n> table during vacuum, and if so, why are other backends having troubles?\n>\n\nWe could not see any errors by unlinking segmented relations when\ncommands are executed sequentially.\nVacuum calls RelationInvalidateHeapTuple() for a pg_class tuple and\nother backends could recognize that the relcache entry must be\ninvalidated while executing StartTransaction() or CommandCounter\nIncrement().\n\nEven though the target relation is locked exclusively by vacuum,other\nbackends could StartTransaction(),CommandCounterIncrement(),\nparse,analyze,rewrite,optimize,start Executor Stage and open relations.\nWe could not rely on exclusive lock so much.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Fri, 18 Jun 1999 10:52:10 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] tables > 1 gig"
},
{
"msg_contents": "> Unlinking unused segments after truncating to zero length may cause \n> the result such as \n> \n> Existent backends write to the truncated file to extend the relation\n> while new backends create a new segment file to extend the relation. \n\nHow about my idea of creating a truncated file, then renaming it to the\ntable file. That keeps the table open for other open file descriptors,\nbut puts a zero-length file in place in an atomic manner.\n\nFact is that the current code is really bad, so I request you do your\nbest, and let's get it in there for people to review and improve if\nnecessary.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 17 Jun 1999 22:30:31 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] tables > 1 gig"
},
{
"msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n> Sent: Friday, June 18, 1999 11:31 AM\n> To: Hiroshi Inoue\n> Cc: PostgreSQL-development\n> Subject: Re: [HACKERS] tables > 1 gig\n> \n> \n> > Unlinking unused segments after truncating to zero length may cause \n> > the result such as \n> > \n> > Existent backends write to the truncated file to extend \n> the relation\n> > while new backends create a new segment file to extend the \n> relation. \n> \n> How about my idea of creating a truncated file, the renaming it to the\n> table file. That keeps the table open for other open file descriptors,\n> but put a zero-length file in place in an atomic manner.\n>\n\nSorry,I couldn't understand what you mean.\nWhat is different from truncating existent files to zero length ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Fri, 18 Jun 1999 12:11:04 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] tables > 1 gig"
},
{
"msg_contents": "> > > Unlinking unused segments after truncating to zero length may cause \n> > > the result such as \n> > > \n> > > Existent backends write to the truncated file to extend \n> > the relation\n> > > while new backends create a new segment file to extend the \n> > relation. \n> > \n> > How about my idea of creating a truncated file, the renaming it to the\n> > table file. That keeps the table open for other open file descriptors,\n> > but put a zero-length file in place in an atomic manner.\n> >\n> \n> Sorry,I couldn't understand what you mean.\n> What is differenct from truncating existent files to zero length ?\n\nGlad to explain. Here is the pseudocode:\n\n\tcreate temp file, make it zero length, call it 'zz'\n\trename(zz,tablename)\n\nWhat this does is to create a zero length file, and the rename unlinks\nthe tablename file, and puts the zero-length file in its place. \nrename() is atomic, so there is no time that the table file does not\nexist.\n\nIt allows backends that have the table open via a descriptor to keep the\ntable unchanged, while new backends see a zero-length file.\n\nDoes this help?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 17 Jun 1999 23:15:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] tables > 1 gig"
},
{
"msg_contents": "> > >\n> > > How about my idea of creating a truncated file, the renaming it to the\n> > > table file. That keeps the table open for other open file\n> descriptors,\n> > > but put a zero-length file in place in an atomic manner.\n> > >\n> >\n> > Sorry,I couldn't understand what you mean.\n> > What is differenct from truncating existent files to zero length ?\n>\n> Glad to explain. Here is the pseudocode:\n>\n> \tcreate temp file, make it zero length, call it 'zz'\n> \trename(zz,tablename)\n>\n> What this does is to create a zero length file, and the rename unlinks\n> the tablename file, and puts the zero-length file in it's place.\n> rename() is atomic, so there is no time that the table file does not\n> exist.\n>\n\nLet\n\ti1 be the inode of zz\n\ti2 be the inode of tablename\nbefore rename().\n\nDoes this mean\n\n New backends read/write i1 inode and\n backends that have the table open read/write i2 inode ?\n\nIf so,it seems wrong.\nAll backends should see same data.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Fri, 18 Jun 1999 12:57:14 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] tables > 1 gig"
},
{
"msg_contents": "> > What this does is to create a zero length file, and the rename unlinks\n> > the tablename file, and puts the zero-length file in it's place.\n> > rename() is atomic, so there is no time that the table file does not\n> > exist.\n> >\n> \n> Let\n> \ti1 be the inode of zz\n> \ti2 be the inode of tablename\n> before rename().\n> \n> Does this mean\n> \n> New backends read/write i1 inode and\n> backends that have the table open read/write i2 inode ?\n> \n> If so,it seems wrong.\n> All backends should see same data.\n\nYes, I can see your point. It would show them different views of the\ntable.\n\nSo, as you were saying, we have no way of invalidating file descriptors\nof other backends for secondary segments. Why does truncating the file\nnot work? Any ideas?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 18 Jun 1999 00:01:18 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] tables > 1 gig"
},
{
"msg_contents": ">\n> > > What this does is to create a zero length file, and the rename unlinks\n> > > the tablename file, and puts the zero-length file in it's place.\n> > > rename() is atomic, so there is no time that the table file does not\n> > > exist.\n> > >\n> >\n> > Let\n> > \ti1 be the inode of zz\n> > \ti2 be the inode of tablename\n> > before rename().\n> >\n> > Does this mean\n> >\n> > New backends read/write i1 inode and\n> > backends that have the table open read/write i2 inode ?\n> >\n> > If so,it seems wrong.\n> > All backends should see same data.\n>\n> Yes, I can see your point. It would show them different views of the\n> table.\n>\n> So, as you were saying, we have no way of invalidating file descriptors\n> of other backends for secondary segments.\n\nIt seems DROP TABLE has a similar problem.\nIt has been already solved ?\n\n> Why does truncating the file\n> not work? Any ideas?\n>\n\nI have gotten no bug reports for my trial implementation.\nAFAIK,only Ole Gjerde has tested my patch.\nIs it sufficient ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Fri, 18 Jun 1999 14:27:01 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] tables > 1 gig"
},
{
"msg_contents": "> > Yes, I can see your point. It would show them different views of the\n> > table.\n> >\n> > So, as you were saying, we have no way of invalidating file descriptors\n> > of other backends for secondary segments.\n> \n> It seems DROP TABLE has a similar problem.\n> It has been already solved ?\n\nNot solved. Someone reported it recently.\n\n> \n> > Why does truncating the file\n> > not work? Any ideas?\n> >\n> \n> I have gotten no bug reports for my trial implementation.\n> AFAIK,only Ole Gjerde has tested my patch.\n> Is it sufficient ?\n\nYes. We need something, and maybe after we add it, people can do\ntesting and find any problems. It is better to apply it than to leave\nit as it currently exists, no?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 18 Jun 1999 01:32:27 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] tables > 1 gig"
},
{
"msg_contents": "> \n> > > Yes, I can see your point. It would show them different views of the\n> > > table.\n> > >\n> > > So, as you were saying, we have no way of invalidating file \n> descriptors\n> > > of other backends for secondary segments.\n> > \n> > > Why does truncating the file\n> > > not work? Any ideas?\n> > >\n> > \n> > I have gotten no bug reports for my trial implementation.\n> > AFAIK,only Ole Gjerde has tested my patch.\n> > Is it sufficient ?\n> \n> Yes. We need something, and maybe after we add it, people can do\n> testing and find any problems. It is better to apply it than to leave\n> it as it currently exists, no?\n>\n\nOK,here is my patch for PostgreSQL6.5-release.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n*** storage/smgr/md.c.orig\tFri Jun 11 12:20:06 1999\n--- storage/smgr/md.c\tFri Jun 18 15:10:54 1999\n***************\n*** 674,684 ****\n \tsegno = 0;\n \tfor (;;)\n \t{\n! \t\tif (v->mdfd_lstbcnt == RELSEG_SIZE\n! \t\t\t|| (nblocks = _mdnblocks(v->mdfd_vfd, BLCKSZ)) == RELSEG_SIZE)\n \t\t{\n- \n- \t\t\tv->mdfd_lstbcnt = RELSEG_SIZE;\n \t\t\tsegno++;\n \n \t\t\tif (v->mdfd_chain == (MdfdVec *) NULL)\n--- 674,685 ----\n \tsegno = 0;\n \tfor (;;)\n \t{\n! \t\tnblocks = _mdnblocks(v->mdfd_vfd, BLCKSZ);\n! \t\tif (nblocks > RELSEG_SIZE)\n! \t\t\telog(FATAL, \"segment too big in mdnblocks!\");\n! \t\tv->mdfd_lstbcnt = nblocks;\n! \t\tif (nblocks == RELSEG_SIZE)\n \t\t{\n \t\t\tsegno++;\n \n \t\t\tif (v->mdfd_chain == (MdfdVec *) NULL)\n***************\n*** 711,732 ****\n \tMdfdVec *v;\n \n #ifndef LET_OS_MANAGE_FILESIZE\n! \tint\t\t\tcurnblk;\n \n \tcurnblk = mdnblocks(reln);\n! \tif (curnblk / RELSEG_SIZE > 0)\n! \t{\n! \t\telog(NOTICE, \"Can't truncate multi-segments relation %s\",\n! \t\t\treln->rd_rel->relname.data);\n! \t\treturn curnblk;\n! 
\t}\n #endif\n \n \tfd = RelationGetFile(reln);\n \tv = &Md_fdvec[fd];\n \n \tif (FileTruncate(v->mdfd_vfd, nblocks * BLCKSZ) < 0)\n \t\treturn -1;\n \n \treturn nblocks;\n \n--- 712,766 ----\n \tMdfdVec *v;\n \n #ifndef LET_OS_MANAGE_FILESIZE\n! \tint\t\t\tcurnblk,\n! \t\t\t\ti,\n! \t\t\t\toldsegno,\n! \t\t\t\tnewsegno,\n! \t\t\t\tlastsegblocks;\n! \tMdfdVec\t\t\t**varray;\n \n \tcurnblk = mdnblocks(reln);\n! \tif (nblocks > curnblk)\n! \t\treturn -1;\n! \toldsegno = curnblk / RELSEG_SIZE;\n! \tnewsegno = nblocks / RELSEG_SIZE;\n! \n #endif\n \n \tfd = RelationGetFile(reln);\n \tv = &Md_fdvec[fd];\n \n+ #ifndef LET_OS_MANAGE_FILESIZE\n+ \tvarray = (MdfdVec **)palloc((oldsegno + 1) * sizeof(MdfdVec *));\n+ \tfor (i = 0; i <= oldsegno; i++)\n+ \t{\n+ \t\tif (!v)\n+ \t\t\telog(ERROR,\"segment isn't open in mdtruncate!\");\n+ \t\tvarray[i] = v;\n+ \t\tv = v->mdfd_chain;\n+ \t}\n+ \tfor (i = oldsegno; i > newsegno; i--)\n+ \t{\n+ \t\tv = varray[i];\n+ \t\tif (FileTruncate(v->mdfd_vfd, 0) < 0)\n+ \t\t{\n+ \t\t\tpfree(varray);\n+ \t\t\treturn -1;\n+ \t\t}\n+ \t\tv->mdfd_lstbcnt = 0;\n+ \t}\n+ \t/* Calculate the # of blocks in the last segment */\n+ \tlastsegblocks = nblocks - (newsegno * RELSEG_SIZE);\n+ \tv = varray[i];\n+ \tpfree(varray);\n+ \tif (FileTruncate(v->mdfd_vfd, lastsegblocks * BLCKSZ) < 0)\n+ \t\treturn -1;\n+ \tv->mdfd_lstbcnt = lastsegblocks;\n+ #else\n \tif (FileTruncate(v->mdfd_vfd, nblocks * BLCKSZ) < 0)\n \t\treturn -1;\n+ \tv->mdfd_lstbcnt = nblocks;\n+ #endif\n \n \treturn nblocks;\n",
"msg_date": "Fri, 18 Jun 1999 15:11:19 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] tables > 1 gig"
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> Unlinking unused segments after truncating to zero length may cause \n> the result such as \n> Existent backends write to the truncated file to extend the relation\n> while new backends create a new segment file to extend the relation. \n\nOoh, good point. So, unless we want to invent some way for the process\nthat's running vacuum to force other backends to close their FDs for\nsegment files, the *only* correct solution is to truncate to zero length\nbut leave the files in place.\n\nI still don't quite see why there is such a big problem, however, unless\nyou're asserting that vacuum is broken for single-segment tables too.\nSurely vacuum acquires a lock over the whole table, not just a segment?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 18 Jun 1999 10:42:11 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] tables > 1 gig "
},
{
"msg_contents": "\nThank you. Applied.\n\n[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> > \n> > > > Yes, I can see your point. It would show them different views of the\n> > > > table.\n> > > >\n> > > > So, as you were saying, we have no way of invalidating file \n> > descriptors\n> > > > of other backends for secondary segments.\n> > > \n> > > > Why does truncating the file\n> > > > not work? Any ideas?\n> > > >\n> > > \n> > > I have gotten no bug reports for my trial implementation.\n> > > AFAIK,only Ole Gjerde has tested my patch.\n> > > Is it sufficient ?\n> > \n> > Yes. We need something, and maybe after we add it, people can do\n> > testing and find any problems. It is better to apply it than to leave\n> > it as it currently exists, no?\n> >\n> \n> OK,here is my patch for PostgreSQL6.5-release.\n> \n> Regards.\n> \n> Hiroshi Inoue\n> [email protected]\n> \n> *** storage/smgr/md.c.orig\tFri Jun 11 12:20:06 1999\n> --- storage/smgr/md.c\tFri Jun 18 15:10:54 1999\n> ***************\n> *** 674,684 ****\n> \tsegno = 0;\n> \tfor (;;)\n> \t{\n> ! \t\tif (v->mdfd_lstbcnt == RELSEG_SIZE\n> ! \t\t\t|| (nblocks = _mdnblocks(v->mdfd_vfd, BLCKSZ)) == RELSEG_SIZE)\n> \t\t{\n> - \n> - \t\t\tv->mdfd_lstbcnt = RELSEG_SIZE;\n> \t\t\tsegno++;\n> \n> \t\t\tif (v->mdfd_chain == (MdfdVec *) NULL)\n> --- 674,685 ----\n> \tsegno = 0;\n> \tfor (;;)\n> \t{\n> ! \t\tnblocks = _mdnblocks(v->mdfd_vfd, BLCKSZ);\n> ! \t\tif (nblocks > RELSEG_SIZE)\n> ! \t\t\telog(FATAL, \"segment too big in mdnblocks!\");\n> ! \t\tv->mdfd_lstbcnt = nblocks;\n> ! \t\tif (nblocks == RELSEG_SIZE)\n> \t\t{\n> \t\t\tsegno++;\n> \n> \t\t\tif (v->mdfd_chain == (MdfdVec *) NULL)\n> ***************\n> *** 711,732 ****\n> \tMdfdVec *v;\n> \n> #ifndef LET_OS_MANAGE_FILESIZE\n> ! \tint\t\t\tcurnblk;\n> \n> \tcurnblk = mdnblocks(reln);\n> ! \tif (curnblk / RELSEG_SIZE > 0)\n> ! \t{\n> ! \t\telog(NOTICE, \"Can't truncate multi-segments relation %s\",\n> ! \t\t\treln->rd_rel->relname.data);\n> ! 
\t\treturn curnblk;\n> ! \t}\n> #endif\n> \n> \tfd = RelationGetFile(reln);\n> \tv = &Md_fdvec[fd];\n> \n> \tif (FileTruncate(v->mdfd_vfd, nblocks * BLCKSZ) < 0)\n> \t\treturn -1;\n> \n> \treturn nblocks;\n> \n> --- 712,766 ----\n> \tMdfdVec *v;\n> \n> #ifndef LET_OS_MANAGE_FILESIZE\n> ! \tint\t\t\tcurnblk,\n> ! \t\t\t\ti,\n> ! \t\t\t\toldsegno,\n> ! \t\t\t\tnewsegno,\n> ! \t\t\t\tlastsegblocks;\n> ! \tMdfdVec\t\t\t**varray;\n> \n> \tcurnblk = mdnblocks(reln);\n> ! \tif (nblocks > curnblk)\n> ! \t\treturn -1;\n> ! \toldsegno = curnblk / RELSEG_SIZE;\n> ! \tnewsegno = nblocks / RELSEG_SIZE;\n> ! \n> #endif\n> \n> \tfd = RelationGetFile(reln);\n> \tv = &Md_fdvec[fd];\n> \n> + #ifndef LET_OS_MANAGE_FILESIZE\n> + \tvarray = (MdfdVec **)palloc((oldsegno + 1) * sizeof(MdfdVec *));\n> + \tfor (i = 0; i <= oldsegno; i++)\n> + \t{\n> + \t\tif (!v)\n> + \t\t\telog(ERROR,\"segment isn't open in mdtruncate!\");\n> + \t\tvarray[i] = v;\n> + \t\tv = v->mdfd_chain;\n> + \t}\n> + \tfor (i = oldsegno; i > newsegno; i--)\n> + \t{\n> + \t\tv = varray[i];\n> + \t\tif (FileTruncate(v->mdfd_vfd, 0) < 0)\n> + \t\t{\n> + \t\t\tpfree(varray);\n> + \t\t\treturn -1;\n> + \t\t}\n> + \t\tv->mdfd_lstbcnt = 0;\n> + \t}\n> + \t/* Calculate the # of blocks in the last segment */\n> + \tlastsegblocks = nblocks - (newsegno * RELSEG_SIZE);\n> + \tv = varray[i];\n> + \tpfree(varray);\n> + \tif (FileTruncate(v->mdfd_vfd, lastsegblocks * BLCKSZ) < 0)\n> + \t\treturn -1;\n> + \tv->mdfd_lstbcnt = lastsegblocks;\n> + #else\n> \tif (FileTruncate(v->mdfd_vfd, nblocks * BLCKSZ) < 0)\n> \t\treturn -1;\n> + \tv->mdfd_lstbcnt = nblocks;\n> + #endif\n> \n> \treturn nblocks;\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 18 Jun 1999 12:47:13 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] tables > 1 gig"
},
{
"msg_contents": "On Fri, 18 Jun 1999, Bruce Momjian wrote:\n[snip - mdtruncate patch]\n\nWhile talking about this whole issue, there is one piece missing.\nCurrently there is no way to dump a database/table over 2 GB.\nWhen it hits the 2GB OS limit, it just silently stops and gives no\nindication that it didn't finish.\n\nIt's not a problem for me yet, but I'm getting very close. I have one\ndatabase with 3 tables over 2GB(in postgres space), but they still come\nout under 2GB after a dump. I can't do a pg_dump on the whole database\nhowever, which would be very nice.\n\nI suppose it wouldn't be overly hard to have pg_dump/pg_dumpall do\nsomething similar to what postgres does with segments. I haven't looked\nat it yet however, so I can't say for sure.\n\nComments?\n\nOle Gjerde\n\n",
"msg_date": "Fri, 18 Jun 1999 13:25:03 -0500 (CDT)",
"msg_from": "Ole Gjerde <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] tables > 1 gig"
},
{
"msg_contents": "Ole Gjerde wrote:\n> \n> On Fri, 18 Jun 1999, Bruce Momjian wrote:\n> [snip - mdtruncate patch]\n> \n> While talking about this whole issue, there is one piece missing.\n> Currently there is no way to dump a database/table over 2 GB.\n> When it hits the 2GB OS limit, it just silently stops and gives no\n> indication that it didn't finish.\n> \n> It's not a problem for me yet, but I'm getting very close. I have one\n> database with 3 tables over 2GB(in postgres space), but they still come\n> out under 2GB after a dump. I can't do a pg_dump on the whole database\n> however, which would be very nice.\n> \n> I suppose it wouldn't be overly hard to have pg_dump/pg_dumpall do\n> something similar to what postgres does with segments. I haven't looked\n> at it yet however, so I can't say for sure.\n> \n> Comments?\n\nAs pg_dump writes to stdout, you can just use standard *nix tools:\n\n1. use compressed dumps\n\npg_dump really_big_db | gzip > really_big_db.dump.gz\n\nreload with\n\ngunzip -c really_big_db.dump.gz | psql newdb\nor\ncat really_big_db.dump.gz | gunzip | psql newdb\n\n2. use split\n\npg_dump really_big_db | split -b 1m - really_big_db.dump.\n\nreload with\n\ncat really_big_db.dump.* | psql newdb\n\n-----------------------\nHannu\n",
"msg_date": "Sat, 19 Jun 1999 12:36:18 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] tables > 1 gig"
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> Then we'd better fix the underlying problem. We can't change\n> >> RELSEG_SIZE for a minor release, unless you want to give up the\n> >> principle of not forcing initdb at minor releases.\n> > Why can't we increase it?\n> Consider a 1.5-gig table. 6.5 will store it as one gig in file \"table\",\n> one-half gig in file \"table.1\". Now recompile with larger RELSEG_SIZE.\n> The file manager will now expect to find all blocks of the relation in\n> file \"table\", and will never go to \"table.1\" at all. Presto, you lost\n> a bunch of data.\n> Bottom line is just as it says in the config.h comments: you can't\n> change either BLCKSZ or RELSEG_SIZE without doing initdb.\n\nSorry for backing up so far on this thread...\n\nWould it be possible to make BLCKSZ and/or RELSEG_SIZE (the latter\nperhaps the most important, and perhaps the easiest?) a configurable\nparameter which is read out of a global variable for each database? If\nso, we could later think about moving it, along with things like\n\"default character set\", to pg_database as per-db information, and\nmake it an option on CREATE DATABASE. That kind of thing might make\nin-place upgrades easier too.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Sun, 20 Jun 1999 00:20:20 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] tables > 1 gig"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> Would it be possible to make BLCKSZ and/or RELSEG_SIZE (the latter\n> perhaps the most important, and perhaps the easiest?) a configurable\n> parameter which is read out of a global variable for each database?\n\nDoable, perhaps, but I'm not quite sure why it's worth the trouble...\nthere doesn't seem to be that much value in running different DBs with\ndifferent values inside a single installation. Tweaking BLCKSZ, in\nparticular, will become fairly uninteresting once we solve the tuples-\nbigger-than-a-block problem.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 20 Jun 1999 10:51:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] tables > 1 gig "
}
] |
[
{
"msg_contents": "I've added a new directory to the src/interfaces tree which has the\nSQL3/SQL98 CLI header file and a couple of examples. It would be\ninteresting to see what it would take to graft the ecpg library onto a\nCLI interface.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 17 Jun 1999 14:43:43 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "CLI interface"
}
] |
[
{
"msg_contents": "In updating my cvs tree on hub.org I noticed mention of\nsrc/interfaces/python/RCS. It is/was an empty directory in the CVS\nrepository which for some reason cvs chose to never extract. I moved\nit out of the way, and verified that cvs was happy on hub.org and that\nCVSup was happy on my remote machine. So I killed the file. btw, I\n*did* verify that it was the only instance of a file named \"RCS\" in\nthe Postgres repository. Hope this was OK.\n\nLet's see, where is Jan's signature?...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 17 Jun 1999 14:49:47 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Cleaned up CVS repository!"
},
{
"msg_contents": "Thus spake Thomas Lockhart\n> In updating my cvs tree on hub.org I noticed mention of\n> src/interfaces/python/RCS. It is/was an empty directory in the CVS\n\nThat was an error from the start. I mistakenly sent the entire working\ndirectory the first time and had Marc clean out the invalid files. I\nguess he forgot the directory, or I forgot to tell him.\n\nI didn't see much point including the RCS directory when it was going\nto be checked into CVS anyway.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Thu, 17 Jun 1999 12:21:04 -0400 (EDT)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Cleaned up CVS repository!"
}
] |
[
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Config.h has this. Does this need to be updated because we can't vacuum\n> > multi-segment relations? I have changed it to 7F000000:\n> \n> Why? I thought we'd fixed the mdtruncate issue.\n> \n> \t\t\tregards, tom lane\n> \n\nI am told we did not by Hiroshi. It was news to me too.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 17 Jun 1999 10:53:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] tables > 1 gig"
}
] |
[
{
"msg_contents": "New item for TODO list:\n\n* SELECT aliname FROM pg_class aliname generates strange error\n\t\n\n\ttest=> SELECT aliname FROM pg_class aliname;\n\tNOTICE: unknown node tag 704 in rangeTableEntry_used()\n\tNOTICE: Node is: { IDENT \"aliname\" }\n\tNOTICE: unknown node tag 704 in fireRIRonSubselect()\n\tNOTICE: Node is: { IDENT \"aliname\" }\n\tERROR: copyObject: don't know how to copy 704\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 17 Jun 1999 11:41:58 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "New TODO item"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n>\n> New item for TODO list:\n>\n> * SELECT aliname FROM pg_class aliname generates strange error\n>\n>\n> test=> SELECT aliname FROM pg_class aliname;\n> NOTICE: unknown node tag 704 in rangeTableEntry_used()\n> NOTICE: Node is: { IDENT \"aliname\" }\n> NOTICE: unknown node tag 704 in fireRIRonSubselect()\n> NOTICE: Node is: { IDENT \"aliname\" }\n> ERROR: copyObject: don't know how to copy 704\n\n Without looking at anything I can tell that these NOTICE\n messages got spit out of the rewriter (I placed them there\n along with the additional NOTICE telling nodeToString()).\n\n It looks to me that the targetlist contains a bare identifier\n which the parser wasn't able to change into a Var node or\n something else. That should never be possible. A valid\n querytree cannot contain identifiers where the parser didn't\n knew from which rangetable entry they should come from.\n\n Look at the parser output (-d4) and you'll see the same\n problems the rewriter just told.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 17 Jun 1999 23:03:26 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New TODO item"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> New item for TODO list:\n> * SELECT aliname FROM pg_class aliname generates strange error\n\nYou don't need the alias; \"SELECT pg_class FROM pg_class\" generates\nthe same behavior.\n\nLooks to me like the parser is failing to reject this query as malformed.\ntransformIdent() is willing to take either a column name or a relation\nname (why?), and no one upstream is rejecting the relation-name case.\n\nEnd result is an untransformed Ident node gets left in the parser\noutput, and neither the rewriter nor the planner know what to do with\nit.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 17 Jun 1999 17:04:49 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New TODO item "
},
{
"msg_contents": "> Without looking at anything I can tell that these NOTICE\n> messages got spit out of the rewriter (I placed them there\n> along with the additional NOTICE telling nodeToString()).\n> \n> It looks to me that the targetlist contains a bare identifier\n> which the parser wasn't able to change into a Var node or\n> something else. That should never be possible. A valid\n> querytree cannot contain identifiers where the parser didn't\n> knew from which rangetable entry they should come from.\n> \n> Look at the parser output (-d4) and you'll see the same\n> problems the rewriter just told.\n\nYes. The parser should never allow this.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 17 Jun 1999 18:40:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] New TODO item"
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > New item for TODO list:\n> > * SELECT aliname FROM pg_class aliname generates strange error\n> \n> You don't need the alias; \"SELECT pg_class FROM pg_class\" generates\n> the same behavior.\n\nTODO updated.\n\n> \n> Looks to me like the parser is failing to reject this query as malformed.\n> transformIdent() is willing to take either a column name or a relation\n> name (why?), and no one upstream is rejecting the relation-name case.\n\nThere is some reason for this that I think Thomas can tell us.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 17 Jun 1999 18:41:57 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] New TODO item"
},
{
"msg_contents": "> > Looks to me like the parser is failing to reject this query as malformed.\n> > transformIdent() is willing to take either a column name or a relation\n> > name (why?), and no one upstream is rejecting the relation-name case.\n> There is some reason for this that I think Thomas can tell us.\n\nMoi? Why drag me into this? ;)\n\nI'm not recalling why we would want to handle bare relation names in\nan expression, but it does seem that a flag is being set in\ntransformIdent() which one could test later to verify that you have a\ncolumn. afaik this code predates my contributions, so I don't have\nmuch insight into it. (It is true that there are a few extensions to\nthe SQL syntax which are holdovers from the PostQuel language, which\nexplains a few odd features in the parser.)\n\nWould you prefer that we do nothing until I have a chance to research\nthis some more, or is someone going to dive in?\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 18 Jun 1999 02:31:02 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New TODO item"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> I'm not recalling why we would want to handle bare relation names in\n> an expression, ...\n> Would you prefer that we do nothing until I have a chance to research\n> this some more, or is someone going to dive in?\n\nResearch away. As far as I can see, this isn't affecting processing of\nany valid queries; it's just a matter of less-than-desirable response\nto an invalid one. So I think we can take our time about fixing it.\nI know I've got other things to work on...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 18 Jun 1999 10:44:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] New TODO item "
}
] |
[
{
"msg_contents": "\nInstalling postgres in 20th time, I become ready to \nsend some wishes.\n\n1. My standard way to install pgsql:\n make\n su\n useradd postgres\n\n****\n mkdir ~postgres\n make install\n chown -R postgres ~postgres \n\nIs it possible to include last three commands into installation procedure?\n\n2. The most often PGDATA is ~postgres/data \n and PGLIB is ~postgres/lib\nIs it possible to use this as default if environment not set?\n\n3. Next step is adding plpgsql into database template1 (or patching creatdb\nscript) to add plpgsql every time as I create new db\n\nIs it possible to add it as configure option? (i.e. --enable-auto-plpgsql)\n\nThanks!\n\n---\nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n* There will come soft rains ...\n",
"msg_date": "Thu, 17 Jun 1999 20:03:50 +0400 (MSD)",
"msg_from": "Dmitry Samersoff <[email protected]>",
"msg_from_op": true,
"msg_subject": "Installation procedure wishes"
},
{
"msg_contents": "Dmitry Samersoff <[email protected]> writes:\n> 1. My standard way to install pgsql:\n> make\n> su\n> useradd postgres\n> ****\n> mkdir ~postgres\n> make install\n> chown -R postgres ~postgres \n\n> Is it possible to include last three commands into installation procedure?\n\nIf you followed the installation instructions (ie, run \"make install\" as\nthe postgres user), you wouldn't need the chown step. The reason that\nmaking the toplevel installation directory isn't part of what \"make\ninstall\" does is that it's typically located somewhere that requires\nroot permission to make the directory --- but you only need to do that\nonce, it doesn't have to be done over each time you reinstall.\n\n> 2. The most often PGDATA is ~postgres/data \n> and PGLIB is ~postgres/lib\n> Is it possible to use this as default if environment not set?\n\nNot ~postgres necessarily, but whatever the --prefix set by configure\nis. I kinda thought these defaults were compiled in already? If not,\nthey probably should be.\n\n> 3. Next step is adding plpgsql into database template1 (or patching creatdb\n> script) to add plpgsql every time as I create new db\n\nThat's a one-command thing now, so I'm not seeing why it's harder to issue\nthe command than type a configure option ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 17 Jun 1999 16:11:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Installation procedure wishes "
},
{
"msg_contents": "> > 3. Next step is adding plpgsql into database template1 (or patching creatdb\n> > script) to add plpgsql every time as I create new db\n>\n> That's a one-command thing now, so I'm not seeing why it's harder to issue\n> the command than type a configure option ...\n\n Initially I thought it would be nice to offer those\n procedural languages that can be TRUSTED ones to regular\n users by default. So I first added the appropriate commands\n to initdb. Some complained and I moved them out again and\n added the new commands (createlang and destroylang) instead.\n\n And I agree - I was wrong. It's bad practice to install\n things by default that some don't need. A database system has\n some defined priorities:\n\n 1. Reliability\n\n 2. Reliability\n\n 3. Security\n\n 4. Reliability\n\n 5. Security\n\n n. Capabilities, performance and other non-critical items.\n\n Adding types/operators/procedural-languages by default to any\n created database is easy. Add them to the template1 database.\n If you forgot it during an upgrade or restore, be sure some\n user will tell you soon.\n\n But if you have choosen the conservative way of beeing a site\n admin, noone will ever tell you if you forgot to DISABLE a\n feature after a 50 hour restore marathon.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Fri, 18 Jun 1999 00:18:28 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Installation procedure wishes"
},
{
"msg_contents": "> But if you have choosen the conservative way of beeing a site\n> admin, noone will ever tell you if you forgot to DISABLE a\n> feature after a 50 hour restore marathon.\n\nYes, the same reason postmaster -i flag is required to enable tcp/ip.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 17 Jun 1999 19:14:31 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Installation procedure wishes"
},
{
"msg_contents": "On 17-Jun-99 Tom Lane wrote:\n> Dmitry Samersoff <[email protected]> writes:\n>> 1. My standard way to install pgsql:\n>> make\n>> su\n>> useradd postgres\n>> ****\n>> mkdir ~postgres\n>> make install\n>> chown -R postgres ~postgres \n> \n>> Is it possible to include last three commands into installation procedure?\n\nI wrote this letter because last month I had to install/upgrade\npostgress over 20 times. All this questions is not significant for \nsingle instalation, but prove a task if I need installing postgres\nevery day.\n\n> \n> If you followed the installation instructions (ie, run \"make install\" as\n> the postgres user), you wouldn't need the chown step. The reason that\n> making the toplevel installation directory isn't part of what \"make\n> install\" does is that it's typically located somewhere that requires\n> root permission to make the directory --- but you only need to do that\n ^^^^^^^^^^^^^^^\nYes! IMHO, If root privilege required (in most case) at least once,\nit would be nice to make top-level directory and all other\nby install.\n\n(It's step back to discussion fired some monthes ago) \n\n> once, it doesn't have to be done over each time you reinstall.\n> \n>> 2. The most often PGDATA is ~postgres/data \n>> and PGLIB is ~postgres/lib\n>> Is it possible to use this as default if environment not set?\n> \n> Not ~postgres necessarily, but whatever the --prefix set by configure\n> is. I kinda thought these defaults were compiled in already? If not,\n> they probably should be.\n\nAFAIK, It's not compiled already in current 6.5.\nOther side, using home directory of user \"postgres\" provide a simple and\nstandard way to control postgres data and lib locations. \n\n> \n>> 3. 
Next step is adding plpgsql into database template1 (or patching creatdb\n>> script) to add plpgsql every time as I create new db\n> \n> That's a one-command thing now, so I'm not seeing why it's harder to issue\n> the command than type a configure option ...\n\nI still need remember to run this command for template1 or do it every time \nwhen I creating db. \n\nConfigure options allow me to add it to internal-used build script\nand don\\'t keep it in mind.\n\n---\nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n* There will come soft rains ...\n",
"msg_date": "Fri, 18 Jun 1999 10:55:09 +0400 (MSD)",
"msg_from": "Dmitry Samersoff <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Installation procedure wishes"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n>\n> > But if you have choosen the conservative way of beeing a site\n> > admin, noone will ever tell you if you forgot to DISABLE a\n> > feature after a 50 hour restore marathon.\n>\n> Yes, the same reason postmaster -i flag is required to enable tcp/ip.\n\n That's a detail I'm in doubt about. Our defaults for AF_UNIX\n sockets is trust (and AFAIK must be because identd cannot\n handle them). Thus any user who has a local shell account\n could easily become db user postgres.\n\n I think a default of host-localhost-ident-sameuser and giving\n superusers the builtin right to become everyone would gain\n higher security.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Fri, 18 Jun 1999 10:12:14 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Installation procedure wishes"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> \n> >\n> > > But if you have choosen the conservative way of beeing a site\n> > > admin, noone will ever tell you if you forgot to DISABLE a\n> > > feature after a 50 hour restore marathon.\n> >\n> > Yes, the same reason postmaster -i flag is required to enable tcp/ip.\n> \n> That's a detail I'm in doubt about. Our defaults for AF_UNIX\n> sockets is trust (and AFAIK must be because identd cannot\n> handle them). Thus any user who has a local shell account\n> could easily become db user postgres.\n> \n> I think a default of host-localhost-ident-sameuser and giving\n> superusers the builtin right to become everyone would gain\n> higher security.\n\nBut can we assume ident is running. I don't think so.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 18 Jun 1999 12:48:44 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Installation procedure wishest"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> That's a detail I'm in doubt about. Our defaults for AF_UNIX\n>> sockets is trust (and AFAIK must be because identd cannot\n>> handle them). Thus any user who has a local shell account\n>> could easily become db user postgres.\n>> \n>> I think a default of host-localhost-ident-sameuser and giving\n>> superusers the builtin right to become everyone would gain\n>> higher security.\n\n> But can we assume ident is running. I don't think so.\n\nNo, we cannot make the default installation dependent on any nonstandard\nsoftware. Jan's right, though: the default setup is not secure against\nlocal attackers.\n\nPerhaps we ought to make the default setup use password protection?\nThat would at least force people to take extra steps to open themselves\nto easy attack.\n\nThere is still the issue of allowing the superuser to become everyone.\nRight now, a pg_dump -z script is extremely painful to run if the\nprotection setup requires passwords (I am not sure it even works, but\ncertainly having to enter a password at each ownership swap would be\nno fun). It wouldn't work at all under ident authorization. I think\nwe need some sort of \"real vs effective userid\" scheme to allow a\nsuperuser-started session to switch to any userid without requiring a\npassword. (Maybe that's the same thing Jan has in mind.)\n\nAlso, it's pointless to pretend we have much security against local\nattackers as long as the socket file is being created in /tmp.\nOn a system that doesn't have \"sticky bits\" for directories, a local\nattacker could substitute his own socket file and then spoof the\nprotocol to steal legitimate users' passwords... 
I recall we discussed\nmoving the socket location to a directory only writable by postgres,\nbut didn't get around to doing anything about it.\n\nTo run a really secure server on a machine where you didn't trust all\nthe local users, without the annoyance of passwords, you'd need to set\nup host-localhost-ident-sameuser *and* disable access through the\nAF_UNIX socket. Is that possible now? (I guess you could configure\nhost localhost reject ...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 18 Jun 1999 13:36:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Installation procedure wishest "
}
] |
[
{
"msg_contents": "\n> > If now we'll add 6 bytes to header then \n> > offsetof(HeapTupleHeaderData, t_bits) will be 32 and for\n> > no-nulls tuples there will be no difference at all\n> > (with/without additional 6 bytes), due to double alignment\n> > of header. So, the choice is: new feature or more compact\n> > (than current) header for tuples with nulls.\n> \n> That's a tough one. What do other DB's have for row overhead?\n> \nInformix has a per page overhead of 36 bytes (per 4k or 2k page) \n+ 4 bytes per row and page\nData resides on: Home page, big remainder page, remainder page. \nno length or precision byte per column for fixed width/precision fields.\nIt has a maximum of 255 rows per page (4k page, 11 bytes min rowsize)\n\nAndreas\n",
"msg_date": "Thu, 17 Jun 1999 18:16:26 +0200",
"msg_from": "Zeugswetter Andreas IZ5 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Savepoints... (row overhead)"
}
] |
[
{
"msg_contents": "\nthis might be something new that I didn't see go through, or, at least,\nget implemented...but, under a Linux system, fresh install this morning,\nof v6.5:\n\n19130 p2 S 0:00 /usr/local/pgsql/bin/postmaster -o -F -o /usr/local/pgsql/er\n19416 p2 S 0:00 /usr/local/pgsql/bin/postgres 127.0.0.1 nobody imp idle \n19418 p2 S 0:00 /usr/local/pgsql/bin/postgres 127.0.0.1 nobody imp idle \n19425 p2 S 0:00 /usr/local/pgsql/bin/postgres 127.0.0.1 nobody imp idle \n21163 p2 S 0:00 /usr/local/pgsql/bin/postgres 127.0.0.1 nobody imp idle \n21288 p2 S 0:00 /usr/local/pgsql/bin/postgres 127.0.0.1 nobody imp idle \n21290 p2 S 0:00 /usr/local/pgsql/bin/postgres 127.0.0.1 nobody imp idle \n21303 p2 S 0:00 /usr/local/pgsql/bin/postgres 127.0.0.1 nobody imp idle \n21445 p2 S 0:00 /usr/local/pgsql/bin/postgres 127.0.0.1 nobody imp idle \n\nyet nobody is using that system yet except me testing a couple of times...\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 17 Jun 1999 14:20:24 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "'idle' processes in v6.5?"
},
{
"msg_contents": "> ... under a Linux system ...\n ^^^^^\n\nAh, finally seen the light...\nYou didn't expect _that_ to go unnoticed, did you?\n\n> 19130 p2 S 0:00 /usr/local/pgsql/bin/postmaster -o -F -o /usr/local/pgsql/er\n> 19416 p2 S 0:00 /usr/local/pgsql/bin/postgres 127.0.0.1 nobody imp idle\n> 19418 p2 S 0:00 /usr/local/pgsql/bin/postgres 127.0.0.1 nobody imp idle\n> 19425 p2 S 0:00 /usr/local/pgsql/bin/postgres 127.0.0.1 nobody imp idle\n> 21163 p2 S 0:00 /usr/local/pgsql/bin/postgres 127.0.0.1 nobody imp idle\n> 21288 p2 S 0:00 /usr/local/pgsql/bin/postgres 127.0.0.1 nobody imp idle\n> 21290 p2 S 0:00 /usr/local/pgsql/bin/postgres 127.0.0.1 nobody imp idle\n> 21303 p2 S 0:00 /usr/local/pgsql/bin/postgres 127.0.0.1 nobody imp idle\n> 21445 p2 S 0:00 /usr/local/pgsql/bin/postgres 127.0.0.1 nobody imp idle\n> yet nobody is using that system yet except me testing a couple of times...\n\nYes, \"nobody\" *is* using the system. You sure you don't have a web\nserver or something? Anything else running as \"nobody\"?\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 18 Jun 1999 01:39:18 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 'idle' processes in v6.5?"
},
{
"msg_contents": "On Fri, 18 Jun 1999, Thomas Lockhart wrote:\n\n> > ... under a Linux system ...\n> ^^^^^\n> \n> Ah, finally seen the light...\n> You didn't expect _that_ to go unnoticed, did you?\n\nI'm getting paid *well* to put up with it :)\n\n> > 19130 p2 S 0:00 /usr/local/pgsql/bin/postmaster -o -F -o /usr/local/pgsql/er\n> > 19416 p2 S 0:00 /usr/local/pgsql/bin/postgres 127.0.0.1 nobody imp idle\n> > 19418 p2 S 0:00 /usr/local/pgsql/bin/postgres 127.0.0.1 nobody imp idle\n> > 19425 p2 S 0:00 /usr/local/pgsql/bin/postgres 127.0.0.1 nobody imp idle\n> > 21163 p2 S 0:00 /usr/local/pgsql/bin/postgres 127.0.0.1 nobody imp idle\n> > 21288 p2 S 0:00 /usr/local/pgsql/bin/postgres 127.0.0.1 nobody imp idle\n> > 21290 p2 S 0:00 /usr/local/pgsql/bin/postgres 127.0.0.1 nobody imp idle\n> > 21303 p2 S 0:00 /usr/local/pgsql/bin/postgres 127.0.0.1 nobody imp idle\n> > 21445 p2 S 0:00 /usr/local/pgsql/bin/postgres 127.0.0.1 nobody imp idle\n> > yet nobody is using that system yet except me testing a couple of times...\n> \n> Yes, \"nobody\" *is* using the system. You sure you don't have a web\n> server or something? Anything else running as \"nobody\"?\n\nYa, but no connections through it yet...does PHP auto-open connections to\nthe backend as a sort of cache/pool?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 17 Jun 1999 22:59:52 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] 'idle' processes in v6.5?"
},
{
"msg_contents": "> > > ... under a Linux system ...\n> > ^^^^^\n> > Ah, finally seen the light...\n> > You didn't expect _that_ to go unnoticed, did you?\n> I'm getting paid *well* to put up with it :)\n\nWell, that story will work for a while...\n\n> > Yes, \"nobody\" *is* using the system. You sure you don't have a web\n> > server or something? Anything else running as \"nobody\"?\n> Ya, but no connections through it yet...does PHP auto-open connections to\n> the backend as a sort of cache/pool?\n\nafaik, yes. Haven't run it myself though. I imagine that there are\ntunable parameters for that sort of thing.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 18 Jun 1999 02:22:13 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 'idle' processes in v6.5?"
},
{
"msg_contents": "On Fri, Jun 18, 1999 at 02:22:13AM +0000, Thomas Lockhart wrote:\n> > > > ... under a Linux system ...\n> > > ^^^^^\n> > > Ah, finally seen the light...\n> > > You didn't expect _that_ to go unnoticed, did you?\n> > I'm getting paid *well* to put up with it :)\n> \n> Well, that story will work for a while...\n> \n> > > Yes, \"nobody\" *is* using the system. You sure you don't have a web\n> > > server or something? Anything else running as \"nobody\"?\n> > Ya, but no connections through it yet...does PHP auto-open connections to\n> > the backend as a sort of cache/pool?\n> \n> afaik, yes. Haven't run it myself though. I imagine that there are\n> tunable parameters for that sort of thing.\n\nNo, no auto-open connections. From the database name (imp), I\nwould guess that you are running the IMAP web frontend named IMP\nwhich is using persistent connections to the database - these\naren't closed automatically, so they hang around idle.\n\n-- \n\n Regards,\n\n Sascha Schumann\n Consultant\n",
"msg_date": "Fri, 18 Jun 1999 12:56:08 +0200",
"msg_from": "Sascha Schumann <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 'idle' processes in v6.5?"
}
] |
[
{
"msg_contents": "\nPlease add to chapter \n Shared Memory and SHMMAX\n\n\"\nYou also have to decrease number of maximum backends\nallowed to run simultaneously to half of B value \nusing -N options\n\nFor example:\n postmaster -B 24 -N 12\n\n \"\n\nor something like. \n\nPS: \n Is list of recognized command line options anywhere in documentation?\n\n\n\n---\nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n* There will come soft rains ...\n",
"msg_date": "Thu, 17 Jun 1999 22:51:13 +0400 (MSD)",
"msg_from": "Dmitry Samersoff <[email protected]>",
"msg_from_op": true,
"msg_subject": "SCO FAQ changes"
},
{
"msg_contents": "Andrew, are you listening to the SCO discussion? Are you\ninterested/willing/able to incorporate changes like this, or if not\nperhaps you can suggest to folks how you would like them to handle it?\n\n> Is list of recognized command line options anywhere in \n> documentation?\n\nBut of course, new for v6.5. Look in the User's Guide reference pages,\ntoward the end, for the entries on \"postmaster\" and \"postgres\".\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 18 Jun 1999 01:55:21 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SCO FAQ changes"
}
] |
[
{
"msg_contents": "It doesn't seem to hurt anything but when I list sequences with \\ds I\nget a strange behaviour. If I created the sequence as an ordinary\nuser everything seems normal. If I create the sequence as a super\nuser (myself) it lists the sequence name twice. As I said, it doesn't\nseem to hurt anything. I'll try to look into it next week if I can\nget some time and no one else has figured it out.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Thu, 17 Jun 1999 16:40:22 -0400 (EDT)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": true,
"msg_subject": "This is weird"
},
{
"msg_contents": "I think your pg_shadow table has two superusers in it.\n\n\n> It doesn't seem to hurt anything but when I list sequences with \\ds I\n> get a strange behaviour. If I created the sequence as an ordinary\n> user everything seems normal. If I create the sequence as a super\n> user (myself) it lists the sequence name twice. As I said, it doesn't\n> seem to hurt anything. I'll try to look into it next week if I can\n> get some time and no one else has figured it out.\n> \n> -- \n> D'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\n> http://www.druid.net/darcy/ | and a sheep voting on\n> +1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 17 Jun 1999 18:39:52 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] This is weird"
},
{
"msg_contents": "Thus spake Bruce Momjian\n> I think your pg_shadow table has two superusers in it.\n> \n> \n> > It doesn't seem to hurt anything but when I list sequences with \\ds I\n> > get a strange behaviour. If I created the sequence as an ordinary\n> > user everything seems normal. If I create the sequence as a super\n> > user (myself) it lists the sequence name twice. As I said, it doesn't\n> > seem to hurt anything. I'll try to look into it next week if I can\n> > get some time and no one else has figured it out.\n\nGive that man a seegar. Now how in hell did I get in there twice with\nthe same name and same ID? Doesn't createuser check that?\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Thu, 17 Jun 1999 22:28:58 -0400 (EDT)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] This is weird"
},
{
"msg_contents": "> Thus spake Bruce Momjian\n> > I think your pg_shadow table has two superusers in it.\n> > \n> > \n> > > It doesn't seem to hurt anything but when I list sequences with \\ds I\n> > > get a strange behaviour. If I created the sequence as an ordinary\n> > > user everything seems normal. If I create the sequence as a super\n> > > user (myself) it lists the sequence name twice. As I said, it doesn't\n> > > seem to hurt anything. I'll try to look into it next week if I can\n> > > get some time and no one else has figured it out.\n> \n> Give that man a seegar. Now how in hell did I get in there twice with\n> the same name and same ID? Doesn't createuser check that?\n\nMy guess is pg_dumpall did it somehow, perhaps a failed load. Do I get\nanother cigar?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 17 Jun 1999 22:32:45 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] This is weird"
}
] |
[
{
"msg_contents": "> (It is true that there are a few extensions to\n> the SQL syntax which are holdovers from the PostQuel language, which\n> explains a few odd features in the parser.)\n> \n> Would you prefer that we do nothing until I have a chance to research\n> this some more, or is someone going to dive in?\n> \nIMHO a tablename after select is only valid if there is a point and\nattribute or function after the tablename because postgresql handles\nqueries of the form:\n\nselect t1.eval;\nselect t1.*;\n\nWhere eval can be a column of the t1 table or a function accepting\none opaque argument. The function is automatically passed each \nrow of t1. This is the important feature.\n\nAndreas\n",
"msg_date": "Fri, 18 Jun 1999 11:04:01 +0200",
"msg_from": "Zeugswetter Andreas IZ5 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] New TODO item"
}
] |
[
{
"msg_contents": "has something changed with r-tree indexes in 6.5? i had a database in\n6.4.2 that had a box type and would do queries on it with the box\noverlap operator (\"&&\"). the field i'm querying is indexed with an\nr-tree index. all this worked fine in 6.4.2. then i did a dump &\nrestore from 6.4.2 to 6.5 and now the queries that were running ok in\n6.4.2 aren't in 6.5. if i drop the index, everything is fine (albeit\nslow). this is the error i'm getting:\n\nERROR: Operator 500 must have a restriction selectivity estimator to be\nused in a btree index\n\nwhich doesn't make sense because it's an r-tree index, first of all. \nand second, i don't know what to look for (especially since this was\nworking great with 6.4.2).\n\ni'm running on a linux system, compiled with a pretty recent egcs. the\ndiffs in the regression tests didn't seem to be anything major. the\nonly configuration option i selected was --with-perl when i compiled it,\nso so everything should be pretty vanilla. (i haven't even added in any\nof my custom types yet, which have been known to mess with operators in\nthe past.)\n\nit's really easy to reproduce the error:\n\ncreate table test_table (area1 box);\ninsert into test_table values ( '(100,100,200,200)'::box );\ncreate index test_table_index on test_table using rtree ( area1 box_ops\n);\nselect * from test_table where area1 && '(0,0),(100,100)'::box;\ndrop index test_table_index;\nselect * from test_table where area1 && '(0,0),(100,100)'::box;\n\nany ideas?\n\njeff\n\nps, is it just me, or do things seem a little dead around here today?\n",
"msg_date": "Fri, 18 Jun 1999 14:13:06 -0500",
"msg_from": "Jeff Hoffmann <[email protected]>",
"msg_from_op": true,
"msg_subject": "has anybody else used r-tree indexes in 6.5?"
},
{
"msg_contents": "Jeff Hoffmann <[email protected]> writes:\n> ERROR: Operator 500 must have a restriction selectivity estimator to be\n> used in a btree index\n\nI put in that error check, so I must be guilty :-(. I didn't think it\nshould affect r-trees either. I'll take a look.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 18 Jun 1999 18:57:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] has anybody else used r-tree indexes in 6.5? "
},
{
"msg_contents": "Jeff Hoffmann <[email protected]> writes:\n> has something changed with r-tree indexes in 6.5?\n> ERROR: Operator 500 must have a restriction selectivity estimator to be\n> used in a btree index\n\nWhat we have here is a big OOOPS.\n\nThe reference to \"btree\" is coming from btreesel, which it turns out is\ncalled by rtreesel as well --- I missed that the first time around.\nBut that's just a cosmetic problem in the error message. The real\nproblem is that none of the operators that are used for rtree indexes\nhave restriction selectivity estimators.\n\nThey *used* to have selectivity estimators in 6.4.2 --- they all pointed\nat intltsel, which is pretty much completely inappropriate for an area-\nbased comparison. I believe I must have removed those links from the\npg_operator table during one of my cleanup-and-make-consistent passes.\n\nThe right fix would be to put in an appropriate selectivity estimator,\nbut we can't do that as a 6.5.* patch because changing pg_operator\nrequires an initdb. It will have to wait for 6.6. (One of my to-do\nitems for 6.6 was to rewrite the selectivity estimators anyway, so I'll\nsee what I can do.) In the meantime, I think the only possible patch is\nto disable the error check in btreesel and have it return a default\nselectivity estimate instead of complaining. Drat.\n\nApparently, none of the regression tests exercise rtree indexes at all,\nelse we'd have known there was a problem. Adding an rtree regression test\nseems to be strongly indicated as well...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 18 Jun 1999 19:35:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] has anybody else used r-tree indexes in 6.5? "
},
{
"msg_contents": "> The right fix would be to put in an appropriate selectivity estimator,\n> but we can't do that as a 6.5.* patch because changing pg_operator\n> requires an initdb. It will have to wait for 6.6. (One of my to-do\n> items for 6.6 was to rewrite the selectivity estimators anyway, so I'll\n> see what I can do.) In the meantime, I think the only possible patch is\n> to disable the error check in btreesel and have it return a default\n> selectivity estimate instead of complaining. Drat.\n> \n> Apparently, none of the regression tests exercise rtree indexes at all,\n> else we'd have known there was a problem. Adding an rtree regression test\n> seems to be strongly indicated as well...\n\nSounds like a good fix. Bypass the system tables, since we can't change\nthem, and hard-wire a selectivity.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 18 Jun 1999 20:16:15 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] has anybody else used r-tree indexes in 6.5?"
},
{
"msg_contents": "> Jeff Hoffmann <[email protected]> writes:\n> > has something changed with r-tree indexes in 6.5?\n> > ERROR: Operator 500 must have a restriction selectivity estimator to be\n> > used in a btree index\n>\n> ...\n>\n> The right fix would be to put in an appropriate selectivity estimator,\n> but we can't do that as a 6.5.* patch because changing pg_operator\n> requires an initdb. It will have to wait for 6.6.\n\nWould it possible to write an sql script to put in the changes?\n\nSeems it should be possible to do this for any non-code changes\nthat affect the system tables.\n\nDarren\n\n",
"msg_date": "Fri, 18 Jun 1999 20:27:16 -0400",
"msg_from": "\"Stupor Genius\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] has anybody else used r-tree indexes in 6.5? "
},
{
"msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> > Jeff Hoffmann <[email protected]> writes:\n> > > has something changed with r-tree indexes in 6.5?\n> > > ERROR: Operator 500 must have a restriction selectivity estimator to be\n> > > used in a btree index\n> >\n> > ...\n> >\n> > The right fix would be to put in an appropriate selectivity estimator,\n> > but we can't do that as a 6.5.* patch because changing pg_operator\n> > requires an initdb. It will have to wait for 6.6.\n> \n> Would it possible to write an sql script to put in the changes?\n> \n> Seems it should be possible to do this for any non-code changes\n> that affect the system tables.\n\nYes, we could, and have in the past had SQL scripts that are run as part\nof the upgrade, but this fix is easier to do in C because it doesn't\nrequire that difficult step for upgraders.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 18 Jun 1999 21:04:16 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] has anybody else used r-tree indexes in 6.5?"
},
{
"msg_contents": "> has something changed with r-tree indexes in 6.5?\n> any ideas?\n\nrevision 1.29\ndate: 1999/05/31 19:32:47; author: tgl; state: Exp; lines: +61 -5\nGenerate a more specific error message when an operator used\nin an index doesn't have a restriction selectivity estimator.\n\nTom, was there anything more here than the new elog error exit itself?\nIt used to ignore the missing estimator, or fail farther in to the\ncode?\n\n> ps, is it just me, or do things seem a little dead around here today?\n\nAt your office, or on the list? Most of us were waiting for your\nmessage ;)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Sat, 19 Jun 1999 02:11:58 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] has anybody else used r-tree indexes in 6.5?"
},
{
"msg_contents": "> What we have here is a big OOOPS.\n> The right fix would be to put in an appropriate selectivity estimator,\n> but we can't do that as a 6.5.* patch because changing pg_operator\n> requires an initdb. It will have to wait for 6.6. (One of my to-do\n> items for 6.6 was to rewrite the selectivity estimators anyway, so I'll\n> see what I can do.)\n\nUh, I think we *should* do it as a patch, just not one applied to the\ncvs tree for the v.6.5.x branch. Let's apply it to the main cvs branch\nonce we do the split, and Jeff can use a snapshot at that time (since\nit will strongly resemble v6.5 and since he wants the capability).\n\nIn the meantime, can you/we develop a set of patches for Jeff to use?\nOnce we have them, we can post them into\nftp://postgresql.org/pub/patches, which probably needs to be cleaned\nout from the v6.4.x period.\n\nLet me know if I can help with any of this...\n\n> In the meantime, I think the only possible patch is\n> to disable the error check in btreesel and have it return a default\n> selectivity estimate instead of complaining. Drat.\n\n... and let's use this solution for the v6.5.x branch, once it comes\ninto being.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Sat, 19 Jun 1999 02:20:18 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] has anybody else used r-tree indexes in 6.5?"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> In the meantime, I think the only possible patch is\n>> to disable the error check in btreesel and have it return a default\n>> selectivity estimate instead of complaining. Drat.\n\n> ... and let's use this solution for the v6.5.x branch, once it comes\n> into being.\n\nI've already done that, committed it, and posted the patch on\npgsql-patches. We can reverse out the patch after something's\nbeen done to provide reasonable selectivity estimates for rtrees.\n\nApplying intltsel, as 6.4 did, was so bogus that it's difficult\nto argue that the resulting numbers were better than the 0.5\ndefault estimate I just put into btreesel ;-) ... so I feel no\nspecial desire to return to the status quo ante. I have a to-do\nlist item to look at the whole selectivity estimation business,\nand I will try to figure out something reasonable for rtrees\nwhile I'm at it. It may be a while before that gets to the top\nof the to-do list (unless someone else gets to it before I do),\nbut I think this patch will do fine until then.\n\nMostly I'm embarrassed that we didn't notice the problem during\nbeta testing :-(. No regression test, and no users of rtrees\nin the beta population either, it would seem.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 18 Jun 1999 22:36:03 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] has anybody else used r-tree indexes in 6.5? "
},
{
"msg_contents": "> > In the meantime, I think the only possible patch is\n> > to disable the error check in btreesel and have it return a default\n> > selectivity estimate instead of complaining. Drat.\n> \n> ... and let's use this solution for the v6.5.x branch, once it comes\n> into being.\n\nHe already did this, Thomas. The 6.5.x branch is currently our only\nactive branch. Any patches now applied appear in 6.5.x and 6.6. We\nwill split it when someone needs to start on 6.6-only features.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 18 Jun 1999 22:55:33 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] has anybody else used r-tree indexes in 6.5?"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> date: 1999/05/31 19:32:47; author: tgl; state: Exp; lines: +61 -5\n> Generate a more specific error message when an operator used\n> in an index doesn't have a restriction selectivity estimator.\n\n> Tom, was there anything more here than the new elog error exit itself?\n> It used to ignore the missing estimator, or fail farther in to the\n> code?\n\nThat code useta look something like\n\n\tfmgr(get_oprrest(operatorOID), ...)\n\nso that if get_oprrest returned 0 you'd get an error message along the\nlines of \"fmgr: no function cache entry for OID 0\". This was pretty\nunhelpful, of course, and someone complained about it a few weeks ago;\nso I added a test for missing oprrest. That wasn't what broke things\n... what broke things was my removal of seemingly bogus oprrest links\nfrom pg_operator, which I think I did on 4/10:\n\nrevision 1.56\ndate: 1999/04/10 23:53:00; author: tgl; state: Exp; lines: +99 -99\nFix another batch of bogosities in pg_operator table.\nThese were bogus selectivity-estimator links, like a '>' operator\npointing to intltsel when it should use intgtsel.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 18 Jun 1999 22:57:24 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] has anybody else used r-tree indexes in 6.5? "
},
{
"msg_contents": "> \n> Applying intltsel, as 6.4 did, was so bogus that it's difficult\n> to argue that the resulting numbers were better than the 0.5\n> default estimate I just put into btreesel ;-) ... so I feel no\n> special desire to return to the status quo ante. I have a to-do\n> list item to look at the whole selectivity estimation business,\n> and I will try to figure out something reasonable for rtrees\n> while I'm at it. It may be a while before that gets to the top\n> of the to-do list (unless someone else gets to it before I do),\n> but I think this patch will do fine until then.\n> \n> Mostly I'm embarrassed that we didn't notice the problem during\n> beta testing :-(. No regression test, and no users of rtrees\n> in the beta population either, it would seem.\n> \n\nNo reason to be emarrassed. This 6.5 release is our smoothest yet. \nSometimes, we had some pretty major initial problems, and at this stage,\nwe would be figuring out when we needed to get the next subrelease out.\n\nWe are sitting around at this point, just tweeking things, and have no\nmajor need to rush into a minor release to fix problems because we don't\nhave a flood of identical bug reports that have users screaming.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 18 Jun 1999 23:11:50 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] has anybody else used r-tree indexes in 6.5?"
},
{
"msg_contents": "At 11:11 PM 6/18/99 -0400, Bruce Momjian wrote:\n\n>We are sitting around at this point, just tweeking things, and have no\n>major need to rush into a minor release to fix problems because we don't\n>have a flood of identical bug reports that have users screaming.\n\nActually, I noticed this also. I don't have experience with your\nearlier release attempts, but there were enough \"wink, wink - we'll\nbe busy chasing initial release bugs\" type comments to make me wonder.\n\nThis release is ... somewhat mature. And you seem to be capturing\nareas that regression testing doesn't touch. I don't have to tell\nyou that the writing of tests is of crucial importance (though of\ncourse entirely inadequate!). \n\nI'm really impressed with this release...\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, and other goodies at\n http://donb.photo.net\n",
"msg_date": "Fri, 18 Jun 1999 21:18:42 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] has anybody else used r-tree indexes in 6.5?"
},
{
"msg_contents": "I read through some of the papers about R-trees and GIST about a year\nago,\nand it seems that estimating costs for R-tree searches (and GIST\nsearches) is\nnot so straightforward as B-Trees. \n\nHellerstein et al. 1995 write \n\t\"...currently such estimates are reasonably accurate for B+ trees\nand \tless so for R-Trees. Recently, some work on R-tree cost\nestimation \t\thas been done by [FK94], but more work is required to bring\nthis to \t\tbear on GISTs in general....\" \n\nThe reference that they give is \n\n[FK94] Christos Faloutsos and Ibrahim Kamel. \"Beyond Uniformity and\nIndependence: Analysis of R-trees using the concept of fractal\ndimension.\nProc. 13th ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database\nSystems, pp 4--13, Minneapolis, May 1994\n\n\nI don't have the Faloustos paper. The R-tree code authors, and the GIST\nauthors just used the B-Tree code as an expedient solution. \n\nBernie Frankpitt\n",
"msg_date": "Sat, 19 Jun 1999 21:12:11 +0000",
"msg_from": "Bernard Frankpitt <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] has anybody else used r-tree indexes in 6.5?"
},
{
"msg_contents": "On Sat, Jun 19, 1999 at 09:12:11PM +0000, Bernard Frankpitt wrote:\n> I read through some of the papers about R-trees and GIST about a year\n> ago,\n> and it seems that estimating costs for R-tree searches (and GIST\n> searches) is\n> not so straightforward as B-Trees. \n> \n> Hellerstein et al. 1995 write \n> \t\"...currently such estimates are reasonably accurate for B+ trees\n> and \tless so for R-Trees. Recently, some work on R-tree cost\n> estimation \t\thas been done by [FK94], but more work is required to bring\n> this to \t\tbear on GISTs in general....\" \n> \n> The reference that they give is \n> \n> [FK94] Christos Faloutsos and Ibrahim Kamel. \"Beyond Uniformity and\n> Independence: Analysis of R-trees using the concept of fractal\n> dimension.\n> Proc. 13th ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database\n> Systems, pp 4--13, Minneapolis, May 1994\n> \nHmm, a quick Google search for these two authors hit on a great index server\nin Germany:\nhttp://www.informatik.uni-trier.de/~ley/db/index.html\nhttp://www.informatik.uni-trier.de/~ley/db/indices/a-tree/f/Faloutsos:Christos.htm\n\nAnd that paper in particular:\nhttp://www.informatik.uni-trier.de/~ley/db/conf/pods/pods94-4.html\n\nWhich gives an abstract, access to an electronic version (ACS membership\nrequired) and a cite for a more recent (1997) journal paper.\n\nHTH,\nRoss\n\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Mon, 21 Jun 1999 00:07:32 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] has anybody else used r-tree indexes in 6.5?"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Jeff Hoffmann <[email protected]> writes:\n> > has something changed with r-tree indexes in 6.5?\n> > ERROR: Operator 500 must have a restriction selectivity estimator to be\n> > used in a btree index\n> \n> What we have here is a big OOOPS.\n\ni guess so. the patch works fine, though, so no big deal. i thought it\nwas weird that i hadn't heard it come up before since it didn't seem\nlike something i could have caused, but you never know.\n\n> Apparently, none of the regression tests exercise rtree indexes at all,\n> else we'd have known there was a problem. Adding an rtree regression test\n> seems to be strongly indicated as well...\n\ni noticed this when i ran the regression tests and everything came out\nok, but forgot to mention it. if i recall correctly, what's actually in\nthe geometry regression test is pretty weak. i think it only really\ntests some of the common cases, not all of the functions. it's probably\nnot a high priority item, though, since, judging by how long it took for\nthis bug to surface, there aren't a lot of people using the geometry\nfunctions/types.\n",
"msg_date": "Mon, 21 Jun 1999 10:41:09 -0500",
"msg_from": "Jeff Hoffmann <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] has anybody else used r-tree indexes in 6.5?"
},
{
"msg_contents": "Jeff Hoffmann <[email protected]> writes:\n> Tom Lane wrote:\n>> Apparently, none of the regression tests exercise rtree indexes at all,\n>> else we'd have known there was a problem. Adding an rtree regression test\n>> seems to be strongly indicated as well...\n\n> i noticed this when i ran the regression tests and everything came out\n> ok, but forgot to mention it. if i recall correctly, what's actually in\n> the geometry regression test is pretty weak. i think it only really\n> tests some of the common cases, not all of the functions. it's probably\n> not a high priority item, though, since, judging by how long it took for\n> this bug to surface, there aren't a lot of people using the geometry\n> functions/types.\n\nThat's exactly why we need a more thorough regression test. The core\ndevelopers aren't doing much with the geometry operations, and evidently\nneither are any of the frontline beta testers. So, if the regression\ntests don't cover the material either, we stand a good chance of\nbreaking things and not even knowing it --- which is exactly what\nhappened here.\n\nIt seems that you do make use of the geometry operations; perhaps\nyou would be willing to work up some more-thorough regression tests?\nYou're certainly better qualified to do it than I am...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 21 Jun 1999 17:40:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] has anybody else used r-tree indexes in 6.5? "
},
{
"msg_contents": "Tom Lane wrote:\n\n> That's exactly why we need a more thorough regression test. The core\n> developers aren't doing much with the geometry operations, and evidently\n> neither are any of the frontline beta testers. So, if the regression\n> tests don't cover the material either, we stand a good chance of\n> breaking things and not even knowing it --- which is exactly what\n> happened here.\n> \n> It seems that you do make use of the geometry operations; perhaps\n> you would be willing to work up some more-thorough regression tests?\n> You're certainly better qualified to do it than I am...\n> \n\nwell, depending on how complete you want the regression tests, this\ncould be fairly easy. after a quick look at the tests, it seems like\nthe only type that is really left out is line (which i don't know if\nthere are any native operators for it anyway, all i know about are the\nones for lsegs). just a simple select for the forgotten operators in\nwith the test of other operators for each type would be an improvement. \ni think all of the functions are covered in at least one place. again,\nthough, i think everything that i use on a regular basis is covered in\nthe regression test. so overall it's really not that bad. except, of\ncourse, for the bug i uncovered. there actually is a place where an\nrtree index is created, but nothing is every selected against it, which\nis what caused this error to go unnoticed. i haven't looked closely\nenough at parts of the other regression tests to see if there are any\nselects where indexes come in to play, but it'd be a good idea to make\nsure indexes are actually used in the tests for all access methods (and\nop classes? - i can't really imagine when this would be a problem, but\nwho knows). \n\ni'll try updating some of the dedicated tests (box.sql, circle.sql,\ngeometry.sql, lseg.sql, path.sql, polygon.sql), but i'm not sure where\ntesting the rtree indexes should go. i think other index types are\ntested in select.sql, but i'd probably put them in geometry.sql. does\nanybody care? is there someone that oversees the methods and\norganization of the regression tests or do people just throw in new\ntests when there's something new?\n",
"msg_date": "Tue, 22 Jun 1999 09:37:29 -0500",
"msg_from": "Jeff Hoffmann <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] has anybody else used r-tree indexes in 6.5?"
},
{
"msg_contents": "Jeff Hoffmann <[email protected]> writes:\n> i'll try updating some of the dedicated tests (box.sql, circle.sql,\n> geometry.sql, lseg.sql, path.sql, polygon.sql), but i'm not sure where\n> testing the rtree indexes should go. i think other index types are\n> tested in select.sql, but i'd probably put them in geometry.sql. does\n> anybody care? is there someone that oversees the methods and\n> organization of the regression tests or do people just throw in new\n> tests when there's something new?\n\nAFAIK we have no regression-test-meister (though we should). Do what\nseems reasonable.\n\nAbout the only stylistic thing I'd suggest is to try to avoid machine\ndependent results. For example, the existing geometry.sql test causes\na lot of uninteresting comparison failures on many machines because of\nsmall variations in roundoff error. So, if you can exercise a feature\nusing only exact-integral inputs and results, do it that way rather than\nmaking up \"realistic\" test data. (A lot of people would be very happy\nif you could revise this problem away in geometry.sql ... but if that\nseems like more work than you bargained for, don't worry about it.\nExtending the test coverage is the high-priority task, I think.)\n\nI'd be inclined to agree that rtree indexes should be tested in one\nof the existing geometry-related tests, or perhaps in a brand new\nregression test, rather than sticking them into the generic select.sql\ntest.\n\nThanks for taking a shot at it!\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 22 Jun 1999 11:17:11 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] has anybody else used r-tree indexes in 6.5? "
},
{
"msg_contents": "> i'll try updating some of the dedicated tests (box.sql, circle.sql,\n> geometry.sql, lseg.sql, path.sql, polygon.sql), but i'm not sure where\n> testing the rtree indexes should go. i think other index types are\n> tested in select.sql, but i'd probably put them in geometry.sql. does\n> anybody care? is there someone that oversees the methods and\n> organization of the regression tests or do people just throw in new\n> tests when there's something new?\n\nWell, Marc and I had reorganized the regression tests a couple of\nyears ago, and most of the organizational changes since then have been\ndone by us too (Marc handling the platform-specific stuff, and I the\ntests themselves). But new test areas have been added by others, and\nwe certainly could use more contributions to existing tests,\nreorganizing them if that seems advisable.\n\nI agree with your suggestion to put rtree testing in geometry.sql, at\nleast until the size of the new tests would suggest separating it into\na new \"rtree.sql\" test.\n\nGo ahead and do something. We'll apply it to the tree, and if there is\nsomething which provokes someone else into modifying it, we'll do it\nthen. But I'm sure whatever you do will be fine, since you have\nclearly already given some thought to it.\n\nThanks...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 22 Jun 1999 15:18:44 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] has anybody else used r-tree indexes in 6.5?"
}
] |
[
{
"msg_contents": "Hi [email protected],\n\nI am writing to you in relation to a project that I am working on -\nto build a geographical map of the Internet, and I was wondering if you\nmight help me by entering the name of your nearest city on my web page at:\n http://www.ezymail.com/~s98212752/geolocate/default.htm My web page then maps your virtual location to your geographical location, there by generating the data that I need for my project. \n\nI want to assure you that I do NOT record e-mail addresses, (or URLs for that matter) in my world map, so by helping me with this project, you are definitely NOT going to be opening yourself up to more unsolicited mail. My site doesn't even ask for your e-mail address, it only asks for your nearest city. If you would like to know about how I feel about protecting your privacy then please checkout my frequently asked questions at http://www.ezymail.com/~s98212752/geolocate/faq.htm#Answers\nIt would help my project along greatly if you could spare a moment to enter the name of your nearest city, but if not, then thanks anyway for considering my request.\n\nI don't want to waste your time any further, but if you would like to know how I came to get your e-mail address, and why I am asking 'you', then here is a brief explanation. \n\nI got your mail address from http://postgres.home.ml.org, or if not on that page, it was on a page who's link appeared on that page. I was trying to find e-mail addresses that had been around for a while, and so I got hold of a list of URLs that were a few years old, and then I set a robot up to find any e-mail addresses that might be associated with those pages. Yours - [email protected] was picked up by the robot, but I am not sure if it was on http://postgres.home.ml.org, or whether it was just on a page that appeared as a link on that page. 
(The idea was to find people like yourself, that had been around the net for a while, and would not be terrified by the thought of someone knowing what their nearest city was.)\n\nI hope that you don't mind me asking, but as you can imagine, to build a reasonably good map of the net, I need an awful lot of people to tell me 'where' their part of the net is.\n\nOh, and if you would like to know what I plan to do with the data, well I can see all kinds of uses for it, from analysing web traffic to geographically targeted web advertising. I would also be very interested in hearing from anyone who may be interested in assisting, investing, or otherwise in the development of any of these possible uses. Even if you would just like to use the data yourself, then I would like to hear from you.\n\nAnyway, I have taken up enough of your time, but once again, if you would like to help me build a map of the Internet, then all that you have to do is just enter the name of your nearest city in the single text box at \nhttp://www.ezymail.com/~s98212752/geolocate/default.htm \nThanks heaps for your help in advance, and even if you don't help, then\nthanks anyway for considering my request.\n\nKind Regards\nAdrian McElligott\[email protected]\n\n\n",
"msg_date": "Sat, 19 Jun 1999 09:09:19 +1000",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Where in the world is [email protected]??"
}
] |
[
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Do you want to be making these commits now? We haven't split the tree,\n> so they will appear in 6.5.x.\n\nYeah, I know. I'm in bugfix mode: CASE does not work in any context\ninvolving GROUP BY or aggregates, eg\n\tselect coalesce(f1,0) from int4_tbl group by f1;\n\tERROR: Illegal use of aggregates or non-group column in target list\nSince we now support CASE \"officially\", I think this is important\nenough to fix in 6.5.*. The cause is that parse_agg.c's routines\nneglected the CaseExpr nodetype case. I couldn't quite stomach\nadding the same boilerplate code to yet another place, so I decided\nto start implementing the suggestion I made a while ago to create\ncentralized tree-recursion logic.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 19 Jun 1999 00:16:10 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: commits "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Do you want to be making these commits now? We haven't split the tree,\n> > so they will appear in 6.5.x.\n> \n> Yeah, I know. I'm in bugfix mode: CASE does not work in any context\n> involving GROUP BY or aggregates, eg\n> \tselect coalesce(f1,0) from int4_tbl group by f1;\n> \tERROR: Illegal use of aggregates or non-group column in target list\n> Since we now support CASE \"officially\", I think this is important\n> enough to fix in 6.5.*. The cause is that parse_agg.c's routines\n> neglected the CaseExpr nodetype case. I couldn't quite stomach\n> adding the same boilerplate code to yet another place, so I decided\n> to start implementing the suggestion I made a while ago to create\n> centralized tree-recursion logic.\n\nOh, OK. Just checking. People sometimes forget. Thomas was _really_\nconfused.\n\nI certainly would like to see that stuff centralized.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 19 Jun 1999 00:19:08 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: commits"
},
{
"msg_contents": "> Thomas was _really_ confused.\n\nMaybe. But how did you know? And about what??\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Sat, 19 Jun 1999 04:50:44 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: commits"
},
{
"msg_contents": "> > Thomas was _really_ confused.\n> \n> Maybe. But how did you know? And about what??\n\nI thought you were confused about the optimizer geometric fixes Tom\nmade. I shouldn't have assumed you were confused. Sorry.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 19 Jun 1999 00:57:10 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: commits"
}
] |
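The centralized tree-recursion logic Tom Lane proposes in this thread can be sketched as a single generic walker that owns all the recursion, so a new node type (like the CaseExpr that parse_agg.c missed) has to be handled in exactly one place instead of in every routine that crawls the tree. The node types below are simplified stand-ins, not the real PostgreSQL parse-node definitions:

```c
/* Minimal sketch of centralized tree recursion: one walker owns the
 * traversal, callers supply a per-node callback.  NodeTag and Node are
 * illustrative stand-ins for the real parse-node types. */
#include <stdbool.h>
#include <stddef.h>

typedef enum { T_Const, T_OpExpr, T_CaseExpr } NodeTag;

typedef struct Node
{
    NodeTag      tag;
    struct Node *left;
    struct Node *right;
} Node;

/* Call `walker` on every node; the walker returns true to abort early. */
static bool
tree_walker(Node *node, bool (*walker)(Node *, void *), void *context)
{
    if (node == NULL)
        return false;
    if (walker(node, context))
        return true;
    /* The recursion lives here and only here, so forgetting a node type
     * in one of a dozen hand-rolled crawlers can no longer happen. */
    return tree_walker(node->left, walker, context) ||
           tree_walker(node->right, walker, context);
}

/* Example walker: count how many nodes carry a given tag. */
typedef struct { NodeTag want; int count; } CountContext;

static bool
count_walker(Node *node, void *context)
{
    CountContext *ctx = context;
    if (node->tag == ctx->want)
        ctx->count++;
    return false;               /* keep walking */
}
```

With a walker like this, fixing the GROUP BY bug becomes a matter of teaching one dispatch site about CaseExpr rather than patching the same boilerplate into each of parse_agg.c's routines.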
[
{
"msg_contents": "Hi,\n\nOn April 26, 1999 I saw a posting in the hackers archive where Hiroshi\nInoue had a patch to solve the BTP_CHAIN flag was expected problem. This\nproblem is popping up about once per week for me and I really wanted to\ninstall this patch.\n\nIs this patch safe to install on a production database :) or did it have\nproblems associated with it? The post said it works with both 6.4 and with\nthe new 6.5 (I am using 6.4.2) so I'd really like to use it but if it\ncauses more problems than it solves I didn't want to install it.\n\nAnyone got any ideas on whether I should use it? I can't cause the problem\nto happen during testing so I'm short of options :)\n\nAlso, the only way I've found to get around the problem is to SELECT INTO\nanother table, drop the problem table, then rename the new one back, and\nrecreate all my indices and triggers, which can be a real hassle on a\ntable with a million rows in it :( Is there an easier way to fix this\nproblem up?\n\nthanks,\nWayne\n\n------------------------------------------------------------------------------\nWayne Piekarski Tel: (08) 8221 5221\nResearch & Development Manager Fax: (08) 8221 5220\nSE Network Access Pty Ltd Mob: 0407 395 889\n222 Grote Street Email: [email protected]\nAdelaide SA 5000 WWW: http://www.senet.com.au\n",
"msg_date": "Sat, 19 Jun 1999 16:48:10 +0930 (CST)",
"msg_from": "Wayne Piekarski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re:, A, patch, for, FATAL, 1:btree, BTP_CHAIN, flag, was, expected"
}
] |
[
{
"msg_contents": "Hi,\n\nJust thought I'd drop an email again - I was the one having neverending\ntrouble with 6.4.2 jamming with backends waiting, and other types of\nproblems.\n\nAlthough I'm still using 6.4.2, I hope this will still be helpful for\nthe developers in case it impacts on things in 6.5.\n\nWe installed Tom Lanes shared memory patches, which I emailed about\nearlier, and they helped a bit, but unfortunately, we still get backends\nstuck waiting even today....\n\nThe interesting thing is, we went and put in another 128 mb of ram (from\n256 to 384 now) and recompiled the kernel with more semaphores and shared\nmemory, and the improvement was incredible! Before, we would get semget\nfailures every so often when we had about 50 backends going, causing the\nwhole thing to fall over, but now we get\n\"fmgr_info: function 111257088: cache lookup failed\"\nafter 64 backends (which is what we compiled postgres for) which I\nassume isn't so fatal and the whole system keeps running.\n\nFor three days after our little upgrade, the whole thing ran smoothly,\nthen we ran into the problem of the stuck waiting backends. We thought the\nproblem was gone but it was still there. So what would happen is a backend\nwould get stuck, cause others to get stuck, and the postgres' would just\nbuild up until it hit 64, then we'd have to kill them off and would be ok\nagain. 
At least now the number of problems have decreased slightly.\n\nOne interesting message we got during this problem was:\nNOTICE: LockRelease: locktable lookup failed, no lock \n\nIt seems as though the backends are waiting for a lock that got deleted\naccidentally, although I have no idea how the code works so can't offer\nany advice where.\n\nLately though, the problems are happening with higher frequency, and every\nso often we still get the BTP_CHAIN problems with tables (which I sent\nanother email about fixing) so I need to fix this.\n\n\nOne thing I was disappointed with was after adding an extra 128 mb of ram,\nI was hoping that this would be used for disk caching, but when performing\nrepeated select queries on tables, where I did something like:\n\nselect sum(some_value) from some_table;\n\nThe result took the same amount of time to run each time, and was not\ncached at all (the table was about 100 mb) and when doing the query, our\nraid controller would just light up which I wanted to avoid. After seeing\nthis, I read posts on the hackers list where people were talking about\nfsync'ing the pg_log to note down whether things had been commited or not.\n\nThe table I was testing was totally read only, no modifications being\nmade, however, another table gets almost continuous changes 24 hours per\nday, more than 1 per second, so would this be causing the machine to\ncontinuously flush pg_log to disk and cause my read-only tables to still\nnot be cached?\n\nI guess my next question is, can i comment out the fsync call? <grin> With\nthe disks performing more efficient updates, the whole thing would run\nfaster and run less risks of crashing. Currently, the performance can be\nquite bad sometimes when the machine is doing lots of disk activity,\nbecause even the simplest read only queries block because they aren't\ncached.\n\nWould moving pg_log to a 2nd disk make a difference? 
Are there other\nimportant files like pg_log which should go onto separate disks as well? I\nhave no problem with multiple disks, but it was only recently that I\ndiscovered this fsyncing thing on pg_log. Is pg_log more speed and fsync\ncritical than the actual data itself? I have two raid controllers, a slow\nand a fast one, and I want to move pg_log to one of them, but not sure\nwhich one.\n\n\nSo in summary, I've learned that if you are having troubles, put in more\nmemory, (even if you have some free) and increase your kernels internal\nsizes for semaphores and shared memory values to really large values, even\nwhen postgres isn't complaining. It makes a difference for some reason\nand everything was a lot happier.\n\nBTP_CHAIN and the backends waiting problem are still occuring, although I\ncannot build a test case for either of them, they are very much problems\nwhich occur accidentally and at random times.\n\n\nthanks again,\nWayne\n\n------------------------------------------------------------------------------\nWayne Piekarski Tel: (08) 8221 5221\nResearch & Development Manager Fax: (08) 8221 5220\nSE Network Access Pty Ltd Mob: 0407 395 889\n222 Grote Street Email: [email protected]\nAdelaide SA 5000 WWW: http://www.senet.com.au\n",
"msg_date": "Sat, 19 Jun 1999 17:09:08 +0930 (CST)",
"msg_from": "Wayne Piekarski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Update on my 6.4.2 progress"
},
{
"msg_contents": "Wayne Piekarski wrote:\n> \n> I guess my next question is, can i comment out the fsync call?\n\nif you ar confident in your os and hardware, you can \npass the -F flag to backend and no fsyncs are done.\n\n(add -o '-F' to postmaster startup line)\n\nI think it is in some faq too.\n\n--------------\nHannu\n",
"msg_date": "Sat, 19 Jun 1999 13:17:35 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Update on my 6.4.2 progress"
},
{
"msg_contents": "> The interesting thing is, we went and put in another 128 mb of ram (from\n> 256 to 384 now) and recompiled the kernel with more semaphores and shared\n> memory, and the improvement was incredible! Before, we would get semget\n> failures every so often when we had about 50 backends going, causing the\n> whole thing to fall over, but now we get\n> \"fmgr_info: function 111257088: cache lookup failed\"\n> after 64 backends (which is what we compiled postgres for) which I\n> assume isn't so fatal and the whole system keeps running.\n\nThe 6.4.2 code would not allocate all shared memory/semaphores at\nstartup, and only fail when you go to a large number of backends. 6.5\nfixes this by allocating it all on startup.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 19 Jun 1999 08:03:30 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Update on my 6.4.2 progress"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> The 6.4.2 code would not allocate all shared memory/semaphores at\n> startup, and only fail when you go to a large number of backends. 6.5\n> fixes this by allocating it all on startup.\n\nAlso, I don't think 6.4.* actually tested for an attempt to start one\ntoo many backends; it'd just do it and eventually you'd get a failure\ndownstream somewhere. (A failure *will* happen, because there are\nfixed-size arrays containing per-backend entries, but I think the code\nfailed to notice ...)\n\nThere is now code in the postmaster that prevents starting that fatal\n65th (or whatever) backend. If you want to keep running 6.4.2 you\nshould consider adopting CountChildren() and the code that calls it\nfrom 6.5's src/backend/postmaster/postmaster.c.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 19 Jun 1999 14:17:40 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Update on my 6.4.2 progress "
},
{
"msg_contents": "> Wayne Piekarski wrote:\n> > \n> > I guess my next question is, can i comment out the fsync call?\n> \n> if you ar confident in your os and hardware, you can \n> pass the -F flag to backend and no fsyncs are done.\n> \n> (add -o '-F' to postmaster startup line)\n> \n> I think it is in some faq too.\n\nI already have the -o -F switch in the startup file (which I believe is\nworking) but I'm under the impression from what I read that there are two\nfsync's - one you can switch off, and one which is fixed into the code\nand possibly can't be removed?\n\nRegards,\nWayne\n\n------------------------------------------------------------------------------\nWayne Piekarski Tel: (08) 8221 5221\nResearch & Development Manager Fax: (08) 8221 5220\nSE Network Access Pty Ltd Mob: 0407 395 889\n222 Grote Street Email: [email protected]\nAdelaide SA 5000 WWW: http://www.senet.com.au\n",
"msg_date": "Sun, 20 Jun 1999 21:48:44 +0930 (CST)",
"msg_from": "Wayne Piekarski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Update on my 6.4.2 progress"
},
{
"msg_contents": "Wayne Piekarski <[email protected]> writes:\n> I already have the -o -F switch in the startup file (which I believe is\n> working) but I'm under the impression from what I read that there are two\n> fsync's - one you can switch off, and one which is fixed into the code\n> and possibly can't be removed?\n\nNo. I've looked.\n\nActually there is an un-disablable fsync() on the error file in elog.c,\nbut it's not invoked under ordinary scenarios as far as I can tell,\nand it shouldn't be a performance bottleneck anyway. *All* the ordinary\nuses of fsync go through pg_fsync.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 20 Jun 1999 19:22:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Update on my 6.4.2 progress "
},
{
"msg_contents": "> > The interesting thing is, we went and put in another 128 mb of ram (from\n> > 256 to 384 now) and recompiled the kernel with more semaphores and shared\n> > memory, and the improvement was incredible! Before, we would get semget\n> > failures every so often when we had about 50 backends going, causing the\n> > whole thing to fall over, but now we get\n> > \"fmgr_info: function 111257088: cache lookup failed\"\n> > after 64 backends (which is what we compiled postgres for) which I\n> > assume isn't so fatal and the whole system keeps running.\n> \n> The 6.4.2 code would not allocate all shared memory/semaphores at\n> startup, and only fail when you go to a large number of backends. 6.5\n> fixes this by allocating it all on startup.\n\nOk, thats cool ... One question though: is the cache lookup failed message\nreally bad or is it a cryptic way of saying that the connection is refused\nbut everything else is cool? I have no problem with the fact that the\nconnection failed, but does it cause corruption or postgres to fall over\nlater on? Ie, if you get a semget failure, shortly after the whole thing\nwill die, possibly causing data corruption or something. Would these kind\nof errors cause BTP_CHAIN errors, or is that totally unrelated?\n\nAs another general question, if I randomly kill postgres backends during\nthe middle of transactions, is there a possibility for corruption, or is\nit safe due to the way transactions are commited, etc. I've always been\nvery nervous when it comes to killing backends, as I was worried something\nmight go wrong, leaving something out of sync.\n\nthanks,\nWayne\n\n------------------------------------------------------------------------------\nWayne Piekarski Tel: (08) 8221 5221\nResearch & Development Manager Fax: (08) 8221 5220\nSE Network Access Pty Ltd Mob: 0407 395 889\n222 Grote Street Email: [email protected]\nAdelaide SA 5000 WWW: http://www.senet.com.au\n",
"msg_date": "Mon, 21 Jun 1999 18:19:21 +0930 (CST)",
"msg_from": "Wayne Piekarski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Update on my 6.4.2 progress"
},
{
"msg_contents": "> Wayne Piekarski <[email protected]> writes:\n> > I already have the -o -F switch in the startup file (which I believe is\n> > working) but I'm under the impression from what I read that there are two\n> > fsync's - one you can switch off, and one which is fixed into the code\n> > and possibly can't be removed?\n> \n> No. I've looked.\n> \n> Actually there is an un-disablable fsync() on the error file in elog.c,\n> but it's not invoked under ordinary scenarios as far as I can tell,\n> and it shouldn't be a performance bottleneck anyway. *All* the ordinary\n> uses of fsync go through pg_fsync.\n\nI had a dig through the source code yesterday and witnessed the same thing\nas well, each call is controlled with -F. However, I did mess up when I\nwrote my previous email though, because I don't have -F enabled right now,\nso I am running with the fsync() turned on, which makes sense and explains\nwhat is happening with the cache. \n\nAfter reading the mailing list I was under the impression this fsyncing\nwas different from the one controlled by -F.\n\nI am going to be taking it for a test tonight with -F enabled to observe\nhow much better the performance is. Hopefully it will cache better as a\nresult of this, I guess I'll have to run it like this from now on.\n\nthanks,\nWayne\n\n------------------------------------------------------------------------------\nWayne Piekarski Tel: (08) 8221 5221\nResearch & Development Manager Fax: (08) 8221 5220\nSE Network Access Pty Ltd Mob: 0407 395 889\n222 Grote Street Email: [email protected]\nAdelaide SA 5000 WWW: http://www.senet.com.au\n",
"msg_date": "Mon, 21 Jun 1999 18:30:48 +0930 (CST)",
"msg_from": "Wayne Piekarski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Update on my 6.4.2 progress"
},
{
"msg_contents": "> > > I guess my next question is, can i comment out the fsync call?\n> > \n> > if you ar confident in your os and hardware, you can \n> > pass the -F flag to backend and no fsyncs are done.\n> > \n> > (add -o '-F' to postmaster startup line)\n> > \n> > I think it is in some faq too.\n> \n> I already have the -o -F switch in the startup file (which I believe is\n> working) but I'm under the impression from what I read that there are two\n> fsync's - one you can switch off, and one which is fixed into the code\n> and possibly can't be removed?\n\nEeeep! When I wrote the above, I was mistaken. My config file did not have\n-o -F, which was why the fsync's were occuring. Sorry for messing you\naround here .... \n\nWhat I was concerned about was the lack of caching and thrashing, but I\nguess I can solve that with no fsync.\n\nthanks,\nWayne\n\n------------------------------------------------------------------------------\nWayne Piekarski Tel: (08) 8221 5221\nResearch & Development Manager Fax: (08) 8221 5220\nSE Network Access Pty Ltd Mob: 0407 395 889\n222 Grote Street Email: [email protected]\nAdelaide SA 5000 WWW: http://www.senet.com.au\n",
"msg_date": "Mon, 21 Jun 1999 18:41:42 +0930 (CST)",
"msg_from": "Wayne Piekarski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Update on my 6.4.2 progress"
},
{
"msg_contents": "Wayne Piekarski <[email protected]> writes:\n>>>> whole thing to fall over, but now we get\n>>>> \"fmgr_info: function 111257088: cache lookup failed\"\n>>>> after 64 backends (which is what we compiled postgres for) which I\n>>>> assume isn't so fatal and the whole system keeps running.\n\n> ... One question though: is the cache lookup failed message\n> really bad or is it a cryptic way of saying that the connection is refused\n> but everything else is cool?\n\nI'd put it in the \"really bad\" category, mainly because I don't see the\ncause-and-effect chain. It is *not* anything to do with connection\nvalidation, that's for sure. My guess is that the additional backend\nhas connected and is trying to make queries, and that queries are now\nfailing for some resource-exhaustion kind of reason. But I don't know\nwhy that would tend to show up as an fmgr_info failure before anything\nelse. Do you use user-defined functions especially heavily in this\ndatabase? For that matter, does the OID reported by fmgr_info actually\ncorrespond to any row of pg_proc?\n\n> As another general question, if I randomly kill postgres backends during\n> the middle of transactions, is there a possibility for corruption, or is\n> it safe due to the way transactions are commited, etc.\n\nI'd regard it as very risky --- if that backend is in the middle of\nmodifying shared memory, you could leave shared memory datastructures\nand/or disk blocks in inconsistent states. You could probably get away\nwith it for a backend that was blocked waiting for a lock.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 21 Jun 1999 09:43:27 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Update on my 6.4.2 progress "
},
{
"msg_contents": "Hi,\n\n> Wayne Piekarski <[email protected]> writes:\n> >>>> whole thing to fall over, but now we get\n> >>>> \"fmgr_info: function 111257088: cache lookup failed\"\n> >>>> after 64 backends (which is what we compiled postgres for) which I\n> >>>> assume isn't so fatal and the whole system keeps running.\n>\n> failing for some resource-exhaustion kind of reason. But I don't know\n> why that would tend to show up as an fmgr_info failure before anything\n> else. Do you use user-defined functions especially heavily in this\n> database? For that matter, does the OID reported by fmgr_info actually\n> correspond to any row of pg_proc?\n\nI had a look, and there is no entry in pg_proc for any oid like the above\nmentioned. One thing that is very interesting is that we use a ton of user\ndefined function (in C, plpgsql, and SQL) like you asked and that we\nalso had this problem a while back:\n\nAt midnight, we have a process called the vacuum manager, which drops the\nindices on a table, vacuum's it, and then recreates the indices. During\nthis time, we suspend the processes which could possibly do work, so they\nsit there waiting for this lock file on disk to disappear, then they\nresume their work when the vacuum manager is finished.\n\nThe interesting part is, when this one process would resume, it would die\ninside a plpgsql function. It would crash the backend with a message like:\nExecOpenR: relation == NULL, heap_open failed\". I put some extra code to\nfind the oid value, but the oid didn't exist in pg_proc. I think somewhere\ninternally postgres had stored the oid of an index, and then barfed when\nit tried to use that index later on. \n\nTo avoid backends crashing, we reconnected when the lock file was removed,\nand this fixed the problem up. However, I don't know why this happened at\nall, it was really bizarre. The stranger part was that the query that died\nwould always be in a plpgsql function, why is that? 
My next question is,\nare user defined function bad in general, could they cause locking\nproblems, crashing, etc, which might explain some of the massive problems\nI'm having [Still got problems with BTP_CHAIN and backends waiting - 6.4.2]\n\n> > As another general question, if I randomly kill postgres backends during\n> > the middle of transactions, is there a possibility for corruption, or is\n> > it safe due to the way transactions are commited, etc.\n> \n> I'd regard it as very risky --- if that backend is in the middle of\n> modifying shared memory, you could leave shared memory datastructures\n> and/or disk blocks in inconsistent states. You could probably get away\n> with it for a backend that was blocked waiting for a lock.\n\nWell, technically when a backend crashes, it kills all the other backends\nas well so this should avoid the shared memory corruption problems right?\n\n****\n\nAlso, I'm still having troubles with this BTP_CHAIN stuff ... I think I've\nworked out how to reproduce it, but not enough to write a script for it.\n\nBasically, if I have lots of writers and readers doing small work and then\nsomeone comes along with a huge read or write (ie, join against a big\ntable and it takes ages) then all of a sudden queries will try to do an\nupdate and I get the BTP_CHAIN problem.\n\nApart from reloading the table, is there any way I can fix up the\nBTP_CHAIN problem an easier way? 
It takes ages to reload a 100 mb table :(\nVacuum fails with blowawayrelationbuffers = -2 (As re my previous email)\n\nThis BTP_CHAIN stuff is really bad, I can't make this stuff work reliably\nand it causes n-million problems for the people who need to use the dbms\nand the table is dead.\n\n****\n\n\nthanks,\nWayne\n\n------------------------------------------------------------------------------\nWayne Piekarski Tel: (08) 8221 5221\nResearch & Development Manager Fax: (08) 8221 5220\nSE Network Access Pty Ltd Mob: 0407 395 889\n222 Grote Street Email: [email protected]\nAdelaide SA 5000 WWW: http://www.senet.com.au\n",
"msg_date": "Sat, 3 Jul 1999 17:21:39 +0930 (CST)",
"msg_from": "Wayne Piekarski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Update on my 6.4.2 progress"
}
] |
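The single fsync gateway discussed in this thread can be sketched as follows: every ordinary flush in the backend funnels through one wrapper that honours the `-F` ("don't fsync") switch, which is why passing `-o -F` to the postmaster removes all the per-commit disk waits at once. The `disable_fsync` flag is a stand-in for the real backend variable, not the actual name:

```c
/* Sketch of a pg_fsync-style gateway: one wrapper decides whether a
 * flush actually hits the disk.  `disable_fsync` is illustrative. */
#include <unistd.h>

static int disable_fsync = 0;   /* nonzero when started with -o -F */

int
pg_fsync(int fd)
{
    if (disable_fsync)
        return 0;               /* trust the OS and hardware; skip the flush */
    return fsync(fd);           /* otherwise force the write to stable storage */
}
```

This is also why Wayne's read-only queries were not being cached as hoped: with the flag off, each committed transaction forces a synchronous pg_log flush, and the constant write traffic competes with the read-side cache.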
[
{
"msg_contents": "[Charset ISO-8859-1 unsupported, filtering to ASCII...]\n> \n> \t* I've progess a bit with MyCancelKey and MyProcPort (postmaster \n> send all that information to the backend with a BeOS kernel port : \n> nothing is visible on the command line except the port number but datas \n> don't stays in the port more than a few milliseconds). All Shared \n> memory segment are also restored in the backend process. But now, I \n> have a crash in SpinAcquire(OidGenLockId). It seems that SLockArray is \n> not initialized. Do I need to send it to the backend ?\n\nYes, I think I made some changes where initialization was done only once\nin the postmaster, and inherited by every fork'ed backend.\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 19 Jun 1999 08:08:38 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] BeOS port"
},
{
"msg_contents": "\n\t* I've progess a bit with MyCancelKey and MyProcPort (postmaster \nsend all that information to the backend with a BeOS kernel port : \nnothing is visible on the command line except the port number but datas \ndon't stays in the port more than a few milliseconds). All Shared \nmemory segment are also restored in the backend process. But now, I \nhave a crash in SpinAcquire(OidGenLockId). It seems that SLockArray is \nnot initialized. Do I need to send it to the backend ?\n\n\t* BeOS provide a nice implementation of threads (But it's not posix \nthreads). It could be interesting to adapt postgres to works with \nthreads but there will be some work whith global variables (which will \nbe shared by all backends in this case), they should be transfered in \nsome kind of thread local storage. Is it somethong interesting to do ?\n\n\t* The Be guy try to improve there posix support but the case of the \nfork seem to cause some technical problems and the possible actions \nbetween a fork and an exec are pretty limited.\n\n\n\t\tcyril\n\n>> I've already tried to put the exec back. But then I hit a problem \nwith \n>> \"MyProcPort\" which is not initialised in the backend and make the \n>> backend crash. I've also found that \"MyCancelKey\" is set in \npostmaster. \n>> Are there any others ? \n>> \n>> Regarding the old code (6.3.2), there have been a lot of change in \n>> DoBackend/DoExec. I really need some expert advice on what to do.\n>> \n>\n\n>He's right though: there have been subsequent changes that depend on\n>not doing an exec(). 
Offhand I only recall MyCancelKey --- that is \nset\n>in the postmaster process just before fork(), and the backend simply\n>assumes that it's got the right value.\n>\n>The straightforward solution (invent another backend command line \nswitch\n>to pass the cancel key) would not be a very good idea, since that \nwould\n>expose the cancel key to prying eyes.\n>\n>If BeOS does not have the ability to support fork without exec, does \nit\n>have some other way of achieving the same result? Threads maybe?\n>(But Postgres is hardly the only common daemon that uses fork without\n>exec; sendmail comes to mind, for example. So it seems like the real\n>answer is to beat up the BeOS folks about fixing their inadequate Unix\n>support...)\n>\n>\t\t\tregards, tom lane\n>\n\n\n>I recommend you get anonymous cvs access(see cvs faq on web site) do a\n>log to show changes to postgres.c and postmaster.c, and you will find\n>the exec was removed in one or two big patches. Then do a cvs diff \nand\n>see the changes made, and try and merge them into the current code \nwith\n>ifdef's.\n>\n>-- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n",
"msg_date": "Sat, 19 Jun 1999 13:32:26 CEST",
"msg_from": "\"Cyril VELTER\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] BeOS port"
}
] |
[
{
"msg_contents": "I'm having major problems, cannot get it to build.\n\n2 errors:\n\n-in src/backend/commands/copy.c I get complaints about macro _ALIGN\n-a lot of \" no rule to make target 'buffer/SUBSYS.o'. Stop\"\n\n\n\n",
"msg_date": "Sun, 20 Jun 1999 02:11:28 +0200",
"msg_from": "gravity <[email protected]>",
"msg_from_op": true,
"msg_subject": "anyone build postgres 6.5 ( or 6.4 ) on IRIX 6.3 lately?"
},
{
"msg_contents": "> I'm having major problems, cannot get it to build.\n> \n> 2 errors:\n> \n> -in src/backend/commands/copy.c I get complaints about macro _ALIGN\n> -a lot of \" no rule to make target 'buffer/SUBSYS.o'. Stop\"\n\nI have renamed _ALIGN to TYPEALIGN in the 6.5.x sources. I recommend\nyou do the same and recompile. When 6.5.x is released, it will be in there.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 19 Jun 1999 22:35:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] anyone build postgres 6.5 ( or 6.4 ) on IRIX 6.3\n lately?"
},
{
"msg_contents": "At 22:35 19-6-99 -0400, Bruce Momjian wrote:\n>I have renamed _ALIGN to TYPEALIGN in the 6.5.x sources. I recommend\n>you do the same and recompile. When 6.5.x is released, it will be in there.\n\n\ngot snapshot, didn't build, same SUBSYS.o trouble though the ALIGN error\nwas gone, and I could not get the patches from FAQ_IRIX installed, never\nseen such weird patch files\n\nuhm\n\nthe machine has gcc 2.8.1 installed\n\nI tried it with the SGI cc, option -n32 to cc and ld, and no CXX\ndidn't build\n\nthen, to complete my journey and to show that really nothing would get it\nto build:\ngot rid of -n32 for both cc and ld\n\nit builds\nlots of warnings, regression tests won't even start (something with plpgsql)\nbut it SEEMS to run fine\n\n\n",
"msg_date": "Tue, 22 Jun 1999 02:01:05 +0200",
"msg_from": "gravity <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] anyone build postgres 6.5 ( or 6.4 ) on IRIX 6.3\n lately?"
}
] |
[
{
"msg_contents": "Why don't we search in /usr/local/include and /usr/local/lib for\nlibreadline.a by default, instead of requiring flags to configure?\n\nSeems we should have those directories searched by default in configure.\nMost software does this with configure already.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 19 Jun 1999 23:19:44 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "/usr/local/include search"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Why don't we search in /usr/local/include and /usr/local/lib for\n> libreadline.a by default, and require the flags to configure?\n\nI'm pretty leery of having configure make unsupported assumptions\nabout the layout of my filesystem. Not everyone keeps this sort\nof thing in /usr/local, and it's *not* necessarily harmless to look\nthere without being told. For example, /usr/local might contain\nlibraries that are incompatible with the compiler you're trying to\nuse --- I actually had that problem a few weeks ago when I was\nexperimenting with egcs.\n\nI think it's fine that configure defaults to looking only in whatever\ndirectories the compiler searches automatically. gcc, for one, is\nusually configured to search in /usr/local by default, so the whole\nissue is moot for anyone using gcc.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 20 Jun 1999 11:00:44 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] /usr/local/include search "
}
] |
[
{
"msg_contents": "I posted this to Usenet today in a discussion about GPL vs. BSDL. I\nhope this doesn't start a huge discussion, but I thought the issues were\nsignificant enough to address to this group.\n\n---------------------------------------------------------------------------\n",
"msg_date": "Sun, 20 Jun 1999 01:15:06 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "BSD vs. GPL"
},
{
"msg_contents": "On Sun, Jun 20, 1999 at 01:15:06AM -0400, Bruce Momjian wrote:\n> Consider Redhad, Caldera, etc. They are adding value \"on top of\" the\n> OS, but the kernel is pretty much the same for all of them. In fact,\n> aside from some tweaks, they really aren't involved in enhancing the\n> lower levels of Linux, and economically, they really can't. They could\n\nI beg to disagree. RedHat for instance pays quite a few people to work on\nGNOME. All of GNOME's software is GPLed and still it seems to make sense for\nDebian. Or how about Corel, which works on a GPLed installation procedure for\nDebian?\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Sun, 20 Jun 1999 11:21:44 +0200",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] BSD vs. GPL"
},
{
"msg_contents": "Bruce Momjian wrote:\n> Here are some short examples. I have a Viewsonic 15\" digital flat panel\n> monitor with ATI XpertLCD card. Xig has a commercial X server that\n> drives it. XFree86 doesn't support it. The cost of the X server is\n> worth it, because without it, I would be forced to us another display\n> device.\n\nThis is mainly the result of ATI not giving out the specs (for whatever \nreason - possibly an agreement with Xig ;-p )\n\n> The cost of BSDI is well worth it for me, because of the high\n> reliability and performance of the OS is well worth the cost. Free\n> software is nice, but for me, the cost of commercial software is a\n> bargain considering the benefits it provides.\n\nBeing commercial does not automatically make software \nhighly reliable and performant.\n\nJust imagine a scenario where the current PostgreSQL development team \nhad a bunch of marketing/management guys who had made commitments based \non our initial release date estimates. I bet that the release would still \nhave been a \"little\" late, but without most of the enhancements and \nmuch more buggy. \n\n> (This doesn't mean I don't support open software. I am a PostgreSQL\n> developer.)\n> \n> Who do you want to write your heart monitor software?\n\nSomeone with deep pockets and tight schedules of course, so my relatives \ncould sue them afterwards ;)\n\n-------------\nHannu\n",
"msg_date": "Sun, 20 Jun 1999 15:27:55 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] BSD vs. GPL"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Consider Redhad, Caldera, etc. They are adding value \"on top of\"\n> the OS, but the kernel is pretty much the same for all of them. In\n> fact, aside from some tweaks, they really aren't involved in\n> enhancing the lower levels of Linux, and economically, they really\n> can't. They could put 100 programmers on it, but once they do a\n> release, all their competitors have all their enhancements, and the\n> economic benefit of those 100 programmers is gone. Sure, Linux is\n> better for it, but those 100 programmers aren't seeing an increased\n> sales rate to pay their salaries.\n\nBut Bruce, you're uninformed. Heck, you're dead wrong. The two\norganizations you name, and more besides do work on the Linux kernel.\nA lot. I realize that you're not deeply connected in the Linux\ncommunity, so you may not realize much of this, but the simple fact is\nthat RedHat and others do exactly what you say they don't.\n\nCaldera has contributed significantly to both the PPP code and IPX\ncode in the Linux kernel. They've developed a SYSV Streams emulation\n(that Linus doesn't want in the main kernel :-), and some other stuff.\n\nRedHat employs Doug Ledford who works on (and has put a *lot* or work\ninto) the Adaptec 7XXX driver. They employ Dave Miller who works on\nboth multi-arch issues and oversees (and codes a fair portion) of the\nTCP networking. They (through his consulting firm) employ Alan Cox,\nwho is often regarded as Linus' right-hand man, and was responsible\nfor seeing the 2.0.36 and 2.0.37 stable kernels to release, plus\nwhatever other scut jobs are out there. I believe RH also employs\nStephen C. 
Tweedie, who does major work on the ext2 fs, including\nadding journaling.\n\nIn fact, one could argue that if the people RedHat pays to work on the\nkernel disappeared, work on the kernel would suddenly get an awful lot\nslower.\n\nSUSE employs Andrea Arcangeli, who is doing a ton of work on the Linux\nVM system. SUSE has also developed X servers which they then\ncontrib'd back to XFree86.org, which arguably benefits even more\npeople since XF86 works on the *BSDs (including BSD/OS, no?) (and\nwhich, since XF86 is under the MIT license, someone could then take\nand make proprietary...fair?).\n\nSo, in light of these new facts, would you like to reassess your\nassessment?\n\nMike.\n",
"msg_date": "20 Jun 1999 10:40:20 -0400",
"msg_from": "Michael Alan Dorman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] BSD vs. GPL"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Consider Redhad, Caldera, etc. They are adding value \"on top of\"\n> the OS, but the kernel is pretty much the same for all of them. In\n> fact, aside from some tweaks, they really aren't involved in\n> enhancing the lower levels of Linux, and economically, they really can't. .\n\nTo which Michael Meskes responded:\n> I beg to disagree. RedHat for instance pays quite some people for working on\n> GNOME. All of GNOME's software is GPLed and still it seems to make sense for\n> Debian. Or how about Coral that works on a GPLes installation procedure for\n> Debian?\n\nAnd to which Michael Alan Dorman wrote:\n> But Bruce, you're uninformed. Heck, you're dead wrong. The two\n> organizations you name, and more besides do work on the Linux kernel.\n> A lot. I realize that you're not deeply connected in the Linux\n> community, so you may not realize much of this, but the simple fact is\n> that RedHat and others do exactly what you say they don't.\n> \n> Caldera has contributed significantly to both the PPP code and IPX\n> code in the Linux kernel. They've developed a SYSV Streams emulation\n> (that Linus doesn't want in the main kernel :-), and some other stuff.\n> \n> RedHat employs Doug Ledford who works on (and has put a *lot* or work\n> into) the Adaptec 7XXX driver. They employ Dave Miller who works on\n> both multi-arch issues and oversees (and codes a fair portion) of the\n> TCP networking. They (through his consulting firm) employ Alan Cox,\n> who is often regarded as Linus' right-hand man, and was responsible\n> for seeing the 2.0.36 and 2.0.37 stable kernels to release, plus\n> whatever other scut jobs are out there. I believe RH also employs\n> Stephen C. 
Tweedie, who does major work on the ext2 fs, including\nadding journaling.\n\nIn fact, one could argue that if the people RedHat pays to work on the\nkernel disappeared, work on the kernel would suddenly get an awful lot\nslower.\n\nSUSE employs Andrea Arcangeli, who is doing a ton of work on the Linux\nVM system. SUSE has also developed X servers which they then\ncontrib'd back to XFree86.org, which arguably benefits even more\npeople since XF86 works on the *BSDs (including BSD/OS, no?) (and\nwhich, since XF86 is under the MIT license, someone could then take\nand make proprietary...fair?).\n\nSo, in light of these new facts, would you like to reassess your\nassessment?\n\nRed Hat is in the business of establishing a corporate trademark \nand becoming the \"standard\" Linux so that it can establish a monopoly. \nTo this end, they will spend some serious dough, but only on improvements\nand fixes that directly affect the ability of the distribution to ship\nto a client; thus, we have RPM, device drivers, and GNOME.\nHowever, even with this notable effort, I would like to know\nwhat % of revenue Red Hat plans to spend on open source development...\n\nI doubt that it is anything \"significant\", and if it is, I would\ncall Red Hat's situation exceptional. They have a near monopoly on\ncorporate/consumer distributions, and their $80 price tag is the \nproof. Do you think after the near monopoly becomes a full monopoly\nthat this % of revenue will increase or decrease? I'd bet\non the latter. The Microsoft pattern, albeit a much less powerful\nstrain, is about to re-occur. What good is a bunch of software if \nit can't be named? It isn't. In the software world, a trademark \nis a name for a standard. And RedHat is about to own it. 
\n\nThus, although you have found some notable exceptions to \nBruce's comments, the general thrust of his argument still \nholds -- if the software distribution market was competitive, \ncompanies like RedHat, etc., could not afford to fund open \nsource development. \n\nHowever, RedHat's business is NOT distribution, it is\nstandardization. And this allows them to spend money\non open source _if_ it is in their best interest. I would\nargue that it is in their best interest now, but it\nwon't be in a few years after they have a firmly \nestablished monopoly. \n\nThus, your exceptions point to a deeper problem with\nopen source, rather than positive support for it.\n\nBest Wishes,\n\nClark Evans\n\nP.S. It is slightly different (and a few months old)\nbut I wrote a possible alternative at http://distributedcopyright.org \nIt would be cool to have your feedback.\n",
"msg_date": "Sun, 20 Jun 1999 22:14:22 -0400",
"msg_from": "Clark Evans <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] BSD vs. GPL"
},
{
"msg_contents": "May I humbly suggest that the PostgreSQL developers list is an inopportune\nplace to discuss licensing? If I recall correctly, there are newsgroups\nfor this sort of thing, complete with people who want to listen.\n\n--\nTodd Graham Lewis Postmaster, MindSpring Enterprises\[email protected] (800) 719-4664, x22804\n\n \"There is no spoon.\"\n\n",
"msg_date": "Sun, 20 Jun 1999 22:55:47 -0400 (EDT)",
"msg_from": "Todd Graham Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] BSD vs. GPL"
},
{
"msg_contents": "At 01:15 20/06/99 -0400, Bruce Momjian wrote:\n>\n>I have followed this discussion, and while there is a lot of it, there\n>aren't many specific examples. Let me suggest one: BSDI\n>\n>They took the BSD4.4(386/BSD) code, hired many of the departing BSD\n>folks, and developed BSD/OS based on it. Now, if the BSD code was GPL,\n>would those people have started a company...\n>\n...\n>... Only \"Source licenses holders\" receive\n>OS/kernel source, which currently costs an additional $2k.)\n\nThis is one model for development. There have already been counterexamples to it being the only model. My personal experience as a consultant and developer suggests your model is probably not even the best. I have had occasion to use the support services of a number of the larger IT companies, and while the support can be very professional, it leaves a lot to be desired:\n\n1. No commercial company will say \"The optimizer code is stuffed, we're not sure how to fix it but we're working on it. Use a work-around\". Instead they will leave you in the dark, saying \"It has been passed on to engineering, and we'll let you know when it is fixed\" [the last part only sometimes]. For me it is far more useful to know that the bug is a real problem that may take a while to fix. Open source even gives me the *choice* to employ someone to fix the code (especially for those products that have commercial support), and I am happy that such fixes be made public.\n\n2. Access to the developers of commercial software is rare, and controlled even at the best of times. When you do get to talk to them at a conference, for example, they are totally unwilling to talk about futures. Contrast this to Linux or PgSQL. Future plans for a database are very important, and affect my choices in database design.\n\n3. Quality. There are many aspects to quality; the most important in my view are: does it work? when it doesn't work, is it easily fixed? 
OK, PgSQL is not as reliable or robust as Dec/Rdb, but I am not a 24-hour shop with mission-critical applications running all the time, so I don't need Rdb. Experience with bugs in PgSQL and in other database products suggests that PgSQL bugs get fixed quicker. Frequently a new patch appears within a day of a bug report. Patches for known bugs can be downloaded from the mailing list immediately. Linux is another example; it is more reliable than NT, and again, the few bugs that I have reported have been fixed in days. Contrast this to Microsoft's support for NT! For the GPL products I use, I would say the overall quality is higher than commercial offerings.\n\n\n>Consider Redhad, Caldera, etc. They are adding value \"on top of\" the\n>OS, but the kernel is pretty much the same for all of them. In fact,\n>aside from some tweaks, they really aren't involved in enhancing the\n>lower levels of Linux, and economically, they really can't. They could\n>put 100 programmers on it, but once they do a release, all their\n>competitors have all their enhancements, and the economic benefit of\n>those 100 programmers is gone.\n\nOthers have already pointed out that your facts are wrong here, and so too is the philosophical point:\n\nFor an existing *large* GPL project, any additional code developed by 100 programmers will require 100 programmers to maintain and enhance. If the original programmers are all fired, then the product becomes unsupportable and worthless - you would be very unwise to buy it. \n\nFurthermore, since most of the money for GPL'd products comes from support and ancillary sales (eg. commercial products based on the s/w in question), for anyone to become competitive with the original developer they would require a substantial investment up front (to understand the code), and a continuing investment in development and support. \n\nRather than compete they would be better off enhancing some other part of the software and thereby developing their own niche. 
All parties can then use the improved software. This approach makes the product stronger, so increases market share, and solidifies the basis of the two companies.\n\n>Sure, Linux is better for it, but those\n>100 programmers aren't seeing an increased sales rate to pay their\n>salaries.\n\nThis is true; many 'volunteer' programmers do not see fair monetary recompense in the short term. But the small amounts of unpaid work I have done have been a good learning experience, enjoyable, interesting, and made me feel good about the work I do. This is at least *some* compensation, ignoring for the moment the competitive edge such experience gives me in the market place. Combine this with the fact that many commercial companies see the value in paying their employees to work on GPL code, and I think you will find that those 100 programmers *are* seeing an increase *in their remuneration*. In some cases they may only have jobs because of it.\n\n>So, the GPL vs. BSDL issue really boils down to whether a particular\n>piece of software is going to need a commercial organization to\n>improve/enhance it in the future. \n\nNo, it does not. It boils down to whether or not the internet community is large enough to continue to produce high quality voluntary contributions to projects. There are always people and companies who are unwilling to work for anything other than hard cash, especially in the current economic milieu, but GPL will work as long as developers see value in GPL.\n\nI am lucky: I get paid reasonably well for the work I do, and have a lot of work (at least, at the moment!). This means I have the luxury to be able to contribute my time (in small ways) to selected voluntary 'clients'. These are generally organizations that have little or no money to spare, and could not afford a programmer under any circumstances. I could not help them if products like PgSQL and Linux were not available and of such high quality. 
And I strongly believe that they would not be of such high quality if the source was not open.\n\n>If it does require a\n>commercial team that can put man-years into the project and needs to\n>recover the costs of doing that, GPL will prevent that from happening,\n>and a commercial entity will have to start from scratch in developing\n>the code so they can \"own the code.\"\n\nThis is clearly not true, as has been argued elsewhere.\n\n\n>The answer to that question also suggests the question of whether\n>non-paid developers are the future for \"all\" software. \n\nEven I don't suggest it's the way for all software. Our current social and legal structures pretty much require that someone take responsibility and due care for some things (eg. heart monitoring software). The best way to show 'due care' is to buy commercial software from a reputable company. The *best* software may still be freely available on the internet, but the safest software (ie. readily available for legal action), will always be commercial.\n\n>If it is, then\n>GPL is the way to go. If it is not, then GPL use needs to be decided\n>carefully depending on the perceived need for later commercialization of\n>the code.\n\nNo, it depends on the perceived niche for the code; if I come up with a hardware/software-neural-database-thingy, then I'm NOT going to make the software open source - it would disclose commercial secrets. Similarly, if I am the sole developer of a complete, high quality, working product, then I am *inclined* to keep it commercial - but ONLY if I plan to try to sell it or market it. If I do neither of those things, then it should be made public.\n\nIf, on the other hand, I develop something useful, but not world-breaking, that may still need work, then I will release it into the (internet) world, and hope some other person:\n\n1. Finds it useful and saves them time, so they can do other GPL work.\n2. 
Likes it and enhances it (thereby saving me time).\n\nIf enough people find such code useful, it may eventually become a 'PgSQL-scale' project, and I'll be very happy.\n\n>Commercialization of code is not a bad thing. \n\nBut you need a pretty good reason to do it!\n\n\n>Fortunately for Linux, there are enough non-paid programmers working on\n>it that GPL is not a problem.\n\nNot to mention the paid programmers...\n\n\n>Maybe all software will some day have\n>enough non-paid programmers so commercial software organizations with\n>teams of paid programmers are no longer required. Maybe not.\n\nI get paid to write software for people. The nature of my work means that unless otherwise specified, I own the copyright. I get a lot of work because my clients are happy, and I work fast. I work fast because I reuse my own code (I will never GPL my own software libraries!). If more commercial organizations pooled their code (eg. via GPL) then I would be out of a job, and they would save a great deal of money. This won't happen because most organizations have mistaken beliefs about their 'competitive edge', even when it relates to non-core business.\n\nTeams of programmers will always be employed either to support and enhance software (legacy code, or GPL'd stuff), or to develop 'proprietary' code (neural-database-thingy), for projects that are too specialised to warrant general interest (electronic fuel injection systems), or for mission-critical code (heart monitoring machines). Only the first in this list is GPL'd, but it will be the largest category.\n\nWe are the factory workers of the new millennium - as such, over time we will probably face more 'piece-work' than new development work, but we will have work.\n\n>Here are some short examples. I have a Viewsonic 15\" digital flat panel\n>monitor with ATI XpertLCD card. Xig has a commercial X server that\n>drives it. XFree86 doesn't support it. \n\nUsually there is a reason for this. 
XFree used not to support Diamond Stealth cards, until Diamond made some data public (they presumably believed the data gave them a 'competitive edge'). By then I had bought another card.\n\n>The cost of the X server is\n>worth it, because without it, I would be forced to us another display\n>device.\n\nIf the X-server was $1000, you would have bought another monitor and say 'the price of the monitor was worth it, because without it I could not run Linux'. Everything is relative; the best solution was that you had to pay no money, and XFree supported the monitor. Xig may even have made more sales into the Linux market if they made their X Server GPL. I presume they are a hardware manufacturer? Most people give away their drivers for a very good reason...\n\n>The cost of BSDI is well worth it for me, because of the high\n>reliability and performance of the OS is well worth the cost. \n\nI could say the same about Linux. My linux box is substantially more reliable than my NT box. You could at this point say \"Yes, but NT is Microsoft\", but my point is that Microsoft is probably the embodiment of the anti-GPL philosophy.\n\n>Free\n>software is nice, but for me, the cost of commercial software is a\n>bargain considering the benefits it provides. (This doesn't mean I\n>don't support open software. I am a PostgreSQL developer.)\n\nFor me the support, flexibility, and reliability of the GPL software I use is substantially superior to the commercial offerings.\n\n>\n>Who do you want to write your heart monitor software?\n>\n\nAs somebody else said, \"someone my family can sue\". But in a more serious sense, heart monitoring software is probably a small niche that *does* require 'due care' be taken. It will be commercial for a long time to come. 
Personally I would prefer it be released under a GPL, then 1000's of programmers with heart problems will find the bugs before they kill someone...\n\n\nAt 22:14 20/06/99 -0400, Clark Evans wrote:\n\n>Thus, although you you have found some noteable exceptions to \n>Bruce's comments, the general thrust of his argument still \n>holds -- if the software distribution market was competitive, \n>companies like RedHat, etc., could not afford to fund open \n>source development. \n\nI do not agree with this. Red Hat is *not* competing on the basis that its source is better. The fact that Red Hat is a 'standard' Linux distribution is crucial to its sales. What the likes of Red Hat use to define themselves is how they package the software (no disrespect meant): the quality of RPM, the SUPPORT they provide, and the fact that everything is open. If 'Blue Hat' came along and wanted to compete, they would not try to out-code Red Hat, they would try to provide better installation, support and distribution mechanisms. \n\n>However, RedHat's business is NOT distribution, it is\n>standardization. And this allows them to spend money\n>on open source _if_ it is in their best interest. I would\n>argue that it is in their best interest now, but it\n>won't be in a few years after they have a fimly \n>established monopoly. \n\nIn fact the *only* way Red Hat can become a monopoly is by 'owning' the code. They may dominate, by force of numbers of developers, but so long as the GPL applies, they only own the good will they generate and the distribution and support business they establish.\n\nThis does not seem too unreasonable, but maybe I'm naive.\n\n>Thus, your exceptions point to a deeper problem with\n>open source, rather than positive support for it.\n\nThe only real problem for open source is ensuring that the 'reference' copies of the software are not all controlled by one company. 
Which is another reason why it is in developers interests to continue contributing to open source projects. If no-one contributes, Red Hat will have an effective monopoly.\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Mon, 21 Jun 1999 13:01:29 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] BSD vs. GPL"
},
{
"msg_contents": "At 10:14 PM 6/20/99 -0400, Clark Evans wrote:\n\n>However, RedHat's business is NOT distribution, it is\n>standardization. \n\nThank God and I deleted the rest, thank you.\n\n> And this allows them to spend money\n>on open source _if_ it is in their best interest.\n\nWow, a tautology! I've heard these are really hard\nto prove correct.\n\n> I would\n>argue that it is in their best interest now, but it\n>won't be in a few years after they have a fimly \n>established monopoly.\n\n\n>\n>Thus, your exceptions point to a deeper problem with\n>open source, rather than positive support for it.\n\nThe deeper problem being that open source MIGHT become\nas tied to one vendor as closed source, if I read your\nargument correctly.\n\nWell, let's imagine for a moment that I concede that\npoint...\n\nThere's still a difference...you still get the source.\n\nI've got candles for sale if you need some to burn at\nyour Open Source Means Everyone Works For Free Always \nshrine.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, and other goodies at\n http://donb.photo.net\n",
"msg_date": "Sun, 20 Jun 1999 20:50:06 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] BSD vs. GPL"
},
{
"msg_contents": "Clark Evans said at �Re: [HACKERS] BSD vs. GPL�. [1999/06/20 22:14]\n\n> I doubt that it is anything \"significant\", and if it is, I would\n> call Red Hat's situation exceptional. They have a near monopoly on\n> corporate/consumer distributions, and their $80 price tag is the \n> proof. Do you think after the near monopoly becomes a full monopoly\n> that this % of revenue will increase or decrease? I'd bet\n> on the latter. The Microsoft pattern, albeit a much less powerful\n> strain, is about to re-occur. What good is a bunch of software if \n> it can't be named? It isn't. In the software world, a trademark \n> is a name for a standard. And RedHat is about to own it. \n\nI didn't really want to get into this discussion, but I thought it necessary to \npoint out the obvious fact that you can buy RedHat 6.0 from CheapBytes for $3 \non CD. If you have an internet connection, you can download it and burn your \nown CD, or do an FTP install. RH can raise the price tag as much as they like, \nbut we'll still be able to get it for free (or virtually free). Also, FWIW, \nRedHat spent something like 10% of its revenue on R&D last year, which is \npretty good for a company that lost money. After all, if it were microsoft, \nthey'd probably save money by cutting the R&D.\n\n--\nNick Bastin - RBB Systems, Inc.\nThe idea that Bill Gates has appeared like a knight in shining armour to lead \nall customers out of a mire of technological chaos neatly ignores the fact that \nit was he who, by peddling second-rate technology, led them into it in the \nfirst place. - Douglas Adams\n",
"msg_date": "Mon, 21 Jun 1999 01:30:39 -0400",
"msg_from": "Nicholas Bastin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] BSD vs. GPL"
},
{
"msg_contents": "On Sun, Jun 20, 1999 at 10:14:22PM -0400, Clark Evans wrote:\n> To this end, they will spend some serious doe, but only on improvements\n> and fixes that directly affect the ability of the distribution to ship\n> to a client, thus, we have RPM, device drivers, and GNOME.\n\nBut these improvements make it into other distributions as well.\n\n> I doubt that it is anything \"significant\", and if it is, I would\n> call Red Hat's situation exceptional. They have a near monopoly on\n> corporate/consumer distributions, and their $80 price tag is the \n\nIn the US maybe, but over here in Germany they are way down the chart. \n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Mon, 21 Jun 1999 10:17:10 +0200",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] BSD vs. GPL"
},
{
"msg_contents": "On Sun, 20 Jun 1999, Don Baccus wrote:\n\n> I've got candles for sale if you need some to burn at\n> your Open Source Means Everyone Works For Free Always \n> shrine.\n\nDon - an interesting can o' worms Bruce opened, eh?\n\nI find Bruce's argument interesting but one fact makes the\ndebate somewhat moot: Linux doesn't need a FreeBSD emulator.\nUntil it does most will use Linux - and GPL.\n\nI think Stallman goes too far with calling Ousterhout a `parasite'\nbut I nonetheless can't help but suppress a grin as I write this.\n(Trying desperately to avoid a bad pun about being tickled...and\nfailing! ;-)\n\nMaybe I should get a quote for some of your candles so I can\ndo a purchase order? \n\n------- North Richmond Community Mental Health Center -------\n\nThomas Good MIS Coordinator\nVital Signs: tomg@ { admin | q8 } .nrnet.org\n Phone: 718-354-5528 \n Fax: 718-354-5056 \n \n/* Member: Computer Professionals For Social Responsibility */ \n\n",
"msg_date": "Mon, 21 Jun 1999 08:02:26 -0400 (EDT)",
"msg_from": "Thomas Good <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] BSD vs. GPL"
},
{
"msg_contents": "Michael Alan Dorman <[email protected]> writes:\n> [deleted]\n\nUm, re-reading this in the cold light of morning, it sounds way more\nrancorous and argumentative than I would have liked. Sorry, Bruce, it\nwasn't my intent.\n\nMike.\n",
"msg_date": "21 Jun 1999 12:30:14 -0400",
"msg_from": "Michael Alan Dorman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] BSD vs. GPL"
},
{
"msg_contents": "Bruce, Marc, et al.,\n\nFor those interested in a possible way to make money with \nPostgreSQL, yet, still keep it \"free\", and \"open\", please\nvisit http://distributedcopyright.org and send comments\nto [email protected]\n\nI'm going to fix up this web site in the coming weeks (I'm \nhit hard) with a more detailed summary. Richard Stallman \nhas posted some sound advice to the discussion list (to both\nhelp improve it and to also voice his objections). \n\nI think it is possible to form a community-based \norganization having an open business environment that \nwould create, maintain, and sell a 'commercial' database. \nIn this case, one that is \"free\" as in liberty, but not \nnecessarily free as in \"free beer\". Also, I feel that a \ngood amount of corporate support could be generated if an \nappropriate way for investors (and sweat equity developers)\nto get a reasonable return on investment could be established.\n\nAnyway, it is clear to me that no single person\ncan create a new business model; I've tried to \nbootstrap some ideas, primarily from others' thoughts\nwhich have been floating around various mailing lists\nfor a while. \n\nIf a core group here is interested, I'll dedicate\nmore time to it. A law firm in Washington DC\nhas made an offer to help work out the legal stuff\npro-bono if there is enough interest from a \ndecent-size development community.\n\nPLEASE follow up to [email protected] and\nnot to the hackers list. \n\nBest Wishes,\n\nClark Evans\n\n\n\nDon Baccus wrote:\n> \n> At 10:14 PM 6/20/99 -0400, Clark Evans wrote:\n> \n> >However, RedHat's business is NOT distribution, it is\n> >standardization.\n> \n> Thank God and I deleted the rest, thank you.\n\nDidn't realize that I was preaching, my apologies.\n\n> > And this allows them to spend money\n> >on open source _if_ it is in their best interest.\n> \n> Wow, a tautology! \n\nI don't think this qualifies as a tautology. A tautology\nis an implication where the premise and the consequent\nare identical. Tautologies are useful in some places\nwhere the form of the premise and consequent is different\nand this different form allows the argument to proceed.\nThis pattern is common in many logical proofs, and is\nextremely useful. \n\n> I've heard these are really hard to prove correct.\n\nActually a tautology is always correct.\n\nLet me try and explain what I was saying again:\n\nThe electronic distribution business is a very competitive \nmarket, since the barrier to entry is very small. Thus, \nthere is not a large profit margin, nor is there expectation \nfor a large profit margin. Thus, it is unlikely that any \ncompany in this business would make a significant \ninvestment in open source research and development.\nThe tech-support business is almost identical, as\nthere are limited economies of scale.\n\nHowever, the standardization market is by nature \nmonopolistic, i.e., the standard defines the market.\nIn an emerging standards market, initial customers make \ntheir choice based on the quality of each product; however, \nafter a short period, quality becomes secondary as the\nvalue residing in the complementary product and service \nmarket becomes more important. Eventually, competition \nbetween standards becomes price inelastic since the value \nis primarily determined by the size of the complementary \nmarket, and no longer determined by quality. This allows \nthose who control a large, established standards market to \nextract a large tax on the customers in the market.\n\nThus, I'm just pointing out that Red Hat is forming\na market, that they will try to own using whatever\nlegal might they can. Certainly they will be weaker\nthan Microsoft at protecting their market since they\nlack copyright law to aid them (or do they, since \nthey are making a compilation, which is also \ncopyrightable -- this will be discovered in court? ). \nEven so, they still have trademark protection, and,\npossibly with future corporate deals, patent protection. \n\n> The deeper problem being that open source MIGHT become\n> as tied to one vendor as closed source, if I read your\n> argument correctly.\n\nI think you have it; the issue moves from the right\nto have open source, to the right to determine what\nis in the \"standard\" distribution. \n\nMy argument is that \"open source\" is only half of\nthe problem; \"open standard\", via trademarks, is\nthe other half. And it seems to me that many people\nare still missing this point. But, you are correct that\nit is much less of a problem.\n\nI'd still like to know what percentage of profit\nRedHat gives back to the community. If it is large\nnow, it would be cool if they put it in writing --\nthat it will stay large well after the RedHat \ntradename becomes a household word. Perhaps Linus\ncould work this out using the Linux trademark.\n\nIn any case, \"free of price\" should be the least of \nour concerns; don't focus on price, focus on freedom.\n\nAnyway, so much for the rambling.\n\n> Well, let's imagine for a moment that I concede that\n> point...\n> \n> There's still a difference...you still get the source.\n\nAnd this is cool, which is why it is nowhere near\nas big a problem as Microsoft.\n\n> I've got candles for sale if you need some to burn at\n> your Open Source Means Everyone Works For Free Always\n> shrine.\n\nWell, to my recollection, I never said this or anything like\nit. If I did, would you help correct me by being more explicit?\n\n\nNicholas Bastin wrote:\n> Clark Evans said:\n> > I doubt that it is anything \"significant\", and if it is, I would\n> > call Red Hat's situation exceptional. They have a near monopoly on\n> > corporate/consumer distributions, and their $80 price tag is the\n> > proof. Do you think after the near monopoly becomes a full monopoly\n> > that this % of revenue will increase or decrease? I'd bet\n> > on the latter. The Microsoft pattern, albeit a much less powerful\n> > strain, is about to re-occur. What good is a bunch of software if\n> > it can't be named? It isn't. In the software world, a trademark\n> > is a name for a standard. And RedHat is about to own it.\n> \n> I didn't really want to get into this discussion, but I thought it necessary to\n> point out the obvious fact that you can buy RedHat 6.0 from CheapBytes for $3\n> on CD. If you have an internet connection, you can download it and burn your\n> own CD, or do an FTP install. RH can raise the price tag as much as they like,\n> but we'll still be able to get it for free (or virtually free).\n\nI am familiar with CheapBytes. I am also wary of trademark law\nbeing used against companies like CheapBytes. It won't happen yet\nsince RedHat would get too much bad press. However, in my non-legal\nopinion, compilation copyright law and trademarks could be used\nto successfully limit copying of distributions. This, I guess,\nwe will have to wait and see. I'm not a lawyer, so I can't say one\nway or the other.\n\n> Also, FWIW, RedHat spent something like 10% of its revenue \n> on R&D last year, which is pretty good for a company that lost money. \n\nInteresting... Why are they losing money? Answer: Because they \nare trying to establish a market monopoly by owning a standard. \nOtherwise Dell and other companies would not be taking an equity \ninterest. You can expect this to change once most of the competition \nis eliminated. Profits will be prevalent, and R&D will drop \nlike a rock.\n\n> After all, if it were microsoft, they'd probably save money by cutting the R&D.\n\nNo. Microsoft loses tons of money in R&D on new markets. However,\nonce the market is established, they jack up the rents and\ncut the R&D. You have only seen one side of RedHat now, the side \ntrying to establish a market. With Dell and others owning interest, \nthey will be forced to behave like Microsoft when it is time.\n\n\nThus, RedHat is becoming a trustee for an operating system standard.\nYet, we have no legal agreement by which we can hold them\naccountable. Instead, we only have market forces, which do\nnot work in a monopolistic environment. Better than Microsoft?\nSure. Can it be better? I think so. A lot better.\n\nBest,\n\nClark Evans\n",
"msg_date": "Mon, 21 Jun 1999 16:14:38 -0400",
"msg_from": "Clark Evans <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] BSD vs. GPL"
}
] |
[
{
"msg_contents": "I have cleaned up some problems seen in the parser and planner when\na query involves both subselects and grouping (\"grouping\" = GROUP BY,\nHAVING, and/or aggregate functions). I am not sure whether I've\nfixed everything in those modules, though, because all my remaining\ntest cases are falling over due to rewriter bugs. For example:\n\nCREATE TABLE t1 (name text, value float8);\nCREATE\n\nSELECT name FROM t1 WHERE name IN\n(select name from t1 group by name having 2 = count(*));\nERROR: SELECT/HAVING requires aggregates to be valid\n\nThe trouble in this case is that the rewriter pushes count(*) down\ninto a third level of sub-select, which the planner quite rightly\ndecides is a constant with respect to the second level, whereupon\nthere don't seem to be any aggregates in the second-level subselect.\n\nI'm not sufficiently clear about what the rewriter is trying to do here\nto risk trying to fix it. I think the problem is a mistaken application\nof modifyAggrefMakeSublink(), whose comment claims\n *\tCreate a sublink node for a qualification expression that\n *\tuses an aggregate column of a view\nThere is no view nor aggregate column in sight here, but the routine\nis getting invoked anyway. Jan, any thoughts?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 20 Jun 1999 21:37:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Status report: subselect + grouping problems"
}
] |
[
{
"msg_contents": "\nAs I've been playing with the PL/Perl implementation, it has dawned on me that a\nfairly simple, but nice feature could be added.\n\nI would like to add the following command:\n\nLOAD PACKAGE 'package-name';\n\nLike the current 'LOAD' it would treat 'package-name'\nas a shared library. But it would also call an initialization function\nin the library (package_init maybe?).\n\nFor instance, a user may type:\n\nLOAD PACKAGE 'plperl';\n\nThis would not only load the shared library \"plperl.so\", but call\nthe function \"package_init\" which could then use the SPI facilities\nto properly set up the system tables for the new language.\n\nNew types could be packaged the same way.\n\nThis would give a very modular way to add 'packages' of functionality.\n\nWhat do you think?\n\n-- \nMark Hollomon\[email protected]\n",
"msg_date": "Sun, 20 Jun 1999 21:39:30 -0400",
"msg_from": "Mark Hollomon <[email protected]>",
"msg_from_op": true,
"msg_subject": "idea for 'module' support"
},
{
"msg_contents": "On Sun, 20 Jun 1999, Mark Hollomon wrote:\n\n> For instance, a user may type:\n> \n> LOAD PACKAGE 'plperl';\n> \n> This would not only load the shared library \"plperl.so\", but call\n> the function \"package_init\" which could the use the SPI facilities\n> to properly setup the system tables for the new language.\n> \n> New types could be packaged the same way.\n> \n> This would give a very modular way to add 'packages' of functionality.\n> \n> What do you think?\n\nLots of people do this, but you might want to take a look at how we do\nthis in Dents, just because it's moderately well documented. I did the\ninitial version inspired by how GTK+ loads new themes, but the basic\nidea is the same everywhere: dlopen() an object, resolve symbols in it,\nuse those symbols to populate a struct filled with function pointers\nand whatnot.\n\nYou can find docs on how to write Dents modules at:\nhttp://www.dents.org/modules/modules.html\n\nFor how we actually do it, look in the source code under \"src/*module*\".\n\nI personally think that this is a neat idea in general and might be very\nnifty for Postgres.\n\n--\nTodd Graham Lewis Postmaster, MindSpring Enterprises\[email protected] (800) 719-4664, x22804\n\n \"There is no spoon.\"\n\n",
"msg_date": "Sun, 20 Jun 1999 21:58:56 -0400 (EDT)",
"msg_from": "Todd Graham Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] idea for 'module' support"
},
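The loader mechanics Todd describes (dlopen() a shared object, resolve a symbol, then call through the resulting function pointer) can be sketched from Python with ctypes, which wraps the same dl* machinery. This is only an illustration: the C math library and its `cos` symbol stand in for a hypothetical `plperl.so` exporting `package_init`. Note that the return type of the resolved pointer must be declared explicitly, the same class of pitfall as calling through a mis-declared fmgr pointer.

```python
# Load a shared object at runtime, resolve a symbol, and call it through
# a function pointer -- the dlopen()/dlsym() pattern described above.
# "libm" and "cos" are illustrative stand-ins for a module such as
# plperl.so and its package_init entry point.
import ctypes
import ctypes.util

libname = ctypes.util.find_library("m") or "libm.so.6"
libm = ctypes.CDLL(libname)          # dlopen()

cos = libm.cos                       # dlsym()
cos.restype = ctypes.c_double        # declare the return type explicitly...
cos.argtypes = [ctypes.c_double]     # ...and the argument types

print(cos(0.0))                      # call through the resolved pointer
```

A real LOAD PACKAGE would do the same thing in C inside the backend, then invoke the resolved `package_init` so it can run its SPI setup.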
{
"msg_contents": "\nHi Mark,\n\nThis is truly a very nice idea, since it would, from an application\ndeveloper's perspective, simplify the process of getting users to, for\ninstance, update the inner workings of a postgresql-driven application.\n\nIt would also be great to be able to include SQL scripts in a package, \nso one could do \"patches\" for database applications that are easily\ninstalled by end-users/administrators. \n\nFor instance, updating stored procedures, classes and external .so\nlibs could be done with a simple LOAD PACKAGE 'package-name' facility.\n\nI'm uncertain if this really fits in the pgsql db itself; it might be\nbetter to have it outside of the main package, as a utility (suite)\nlike pg_package-install/update/list.\n\nSome more thoughts on what would fit nicely in the package idea:\n - Scriptability. A package should be a PostgreSQL \"SQL\" script,\n runnable from psql.\n - Digitally signed (encrypted?) packages\n - Version control\n - External transport (ftp, httpd from update server)\n\nI think the XEmacs 21.x package handling might be a good model, which\nin my experience works very well even for relatively newbie users.\n\nJust some thoughts that might need refining and discussion, but the main\nidea is to have increased modularity: adding/updating/managing\nseparate modules easily. \n\nBe well,\n/Daniel\n\n\nMark Hollomon writes:\n > I would like to add the following command:\n > \n > LOAD PACKAGE 'package-name';\n ...\n > LOAD PACKAGE 'plperl';\n > \n > This would not only load the shared library \"plperl.so\", but call\n > the function \"package_init\" which could the use the SPI facilities\n > to properly setup the system tables for the new language.\n > New types could be packaged the same way.\n > This would give a very modular way to add 'packages' of functionality.\n\n-- \n_______________________________________________________________ /\\__ \n Daniel Lundin - MediaCenter, UNIX and BeOS Developer \\/\n http://www.umc.se/~daniel/\n",
"msg_date": "Mon, 21 Jun 1999 11:44:06 +0200 (CEST)",
"msg_from": "Daniel Lundin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] idea for 'module' support"
},
{
"msg_contents": "Mark Hollomon wrote:\n\n>\n>\n> As I've been playing with the PL/Perl implementation, it has dawned on me a\n> fairly simple, but nice feature could be added.\n>\n> I would like to add the following command:\n>\n> LOAD PACKAGE 'package-name';\n>\n> Like the current 'LOAD' it would treat 'package-name'\n> as shared library. But it would also call an intialization function\n> in the library (package_init maybe?).\n>\n> For instance, a user may type:\n>\n> LOAD PACKAGE 'plperl';\n>\n> This would not only load the shared library \"plperl.so\", but call\n> the function \"package_init\" which could the use the SPI facilities\n> to properly setup the system tables for the new language.\n>\n> New types could be packaged the same way.\n>\n> This would give a very modular way to add 'packages' of functionality.\n\n 1. All the functionality required to install such a package\n is only needed once per database (or if thrown into\n template1 once per installation). But the entire shared\n object has to be linked into each time a backend needs\n one single function from it. I'm not sure if it's such a\n good idea to waste more and more address space for\n administrative stuff that isn't required at runtime.\n\n 2. Most of such functionality requires PostgreSQL superuser\n rights (like installing C language functions). Thus it is\n useless for a regular user.\n\n 3. Some of the features might be customizable. Procedural\n languages for example can be installed as trusted ones or\n not. Trusted languages can be used by any regular user to\n CREATE FUNCTION, untrusted ones can't. Placing the\n installation procedure inside the module itself doesn't\n make things easier here.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 21 Jun 1999 18:21:43 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] idea for 'module' support"
},
{
"msg_contents": "Jan Wieck wrote:\n> \n> 1. All the functionality required to install such a package\n> is only needed once per database (or if thrown into\n> template1 once per installation). But the entire shared\n> object has to be linked into each time a backend needs\n> one single function from it. I'm not sure if it's such a\n> good idea to waste more and more address space for\n> administrative stuff that isn't required at runtime.\n\n*shrug*. I don't see this as very important, but then, I'm not trying\nto run a server that is servicing 100's or 1000's of requests per\nhour either. A valid point in the general case.\n\nHmmmm. Of course, nothing says that the package_init stuff has to be in\nthe same file as the runtime stuff. But then what would be the\ndifference from a \\i of a SQL script?\n\n> \n> 2. Most of such functionality requires PostgreSQL superuser\n> rights (like installing C language functions). Thus it is\n> useless for a regular user.\n\nWell, no more useless than C language functions right now. I had\nin the back of my mind that there would be an 'approved' module\nlocation. If a module resides there, then loading would occur as\nif done by the superuser. It would be the admin's responsibility to\nmake sure that that location was secure.\n\nOf course, depending on the way the security model in PostgreSQL is\nimplemented, this may not be possible. I haven't looked.\n\n> \n> 3. Some of the features might be customizable. Procedural\n> languages for example can be installed as trusted ones or\n> not. Trusted languages can be used by any regular user to\n> CREATE FUNCTION, untrusted ones can't. Placing the\n> installation procedure inside the module itself doesn't\n> make things easier here.\n\nPassing parameters couldn't be that hard:\n\nLOAD PACKAGE 'package_name' [WITH [attr=value]+];\n\nThank you for the feedback. I think the concept is worthwhile.\nBut apparently a good bit of work is needed on the mechanics.\n\n-- \n\nMark Hollomon\[email protected]\n",
"msg_date": "Mon, 21 Jun 1999 14:18:11 -0400",
"msg_from": "\"Mark Hollomon\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] idea for 'module' support"
}
] |
[
{
"msg_contents": "Can we submit changes for 6.6 yet?\n\n-- \nMark Hollomon\[email protected]\n",
"msg_date": "Sun, 20 Jun 1999 21:40:25 -0400",
"msg_from": "Mark Hollomon <[email protected]>",
"msg_from_op": true,
"msg_subject": "6.6 changes?"
},
{
"msg_contents": "> Can we submit changes for 6.6 yet?\n\nIf you do, they will be kept as patches in someone's mailbox for\nanother couple of weeks. You might want to wait...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Mon, 21 Jun 1999 02:15:37 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 changes?"
},
{
"msg_contents": "> Can we submit changes for 6.6 yet?\n> \n> -- \n> Mark Hollomon\n> [email protected]\n> \n> \n\nYou can, and they will be applied as soon as we start 6.6 development.\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 21 Jun 1999 11:08:39 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6 changes?"
}
] |
[
{
"msg_contents": "OK, so I'm working on updating the RH Linux rpm of Postgres to our new\nrelease. And so I'll have to figure out how to *really* install all of\nthe Postgres components to make this work. Anyway, I've already asked\nabout the python installation procedure: thanks for the tips and I'll\ntry it out soon.\n\nIn the meantime, I've got other nagging problems/issues/questions.\nHere is one:\n\nOn my linux box, psql has always built as a static app, not using the\nlibpq.so shared library. There seem to be two issues:\n- libpq.so has not been installed at the time psql is built, so the\nshared library is not available\n- the link step for psql points at the libpq source directory, which\ncontains only the .a library, not the sharable library since that is\nbuilt on the fly during the library installation.\n\nI can see how to modify the psql makefile to get a version using\nshared libraries, but istm that one would really like to phase the\ninstallation so that libraries are actually installed before apps need\nto be linked. \n\nShould we just build the sharable library during the \"make all\" rather\nthan the \"make install\"? Or perhaps it would be better to install the\nlibrary(ies) and then move on to building apps.\n\nComments?\n\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Mon, 21 Jun 1999 06:08:35 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Building Postgres"
},
{
"msg_contents": "> - the link step for psql points at the libpq source directory, which\n> contains only the .a library, not the sharable library since that is\n> built on the fly during the library installation.\n> Comments?\n\nI am wrong about this one (the shared library *is* built during \"make\nall\"), but am still looking for suggestions for the right way to phase\nthe building for an installation. Do other platforms have statically\nbuilt apps too?\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Mon, 21 Jun 1999 14:25:30 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Building Postgres"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> I am wrong about this one (the shared library *is* built during \"make\n> all\"), but am still looking for suggestions for the right way to phase\n> the building for an installation. Do other platforms have statically\n> built apps too?\n\nThe only really good, cross-platform solution that I know about is to\nstart using GNU \"libtool\" to manage the construction of the shared\nlibraries and the applications that depend on them. There are enough\ndifferent ways to handle (or mishandle) shared libs on different Unix\nplatforms that I do not think it a good use of our time to try to solve\nthe problem piecemeal; we'd just be reinventing libtool, probably not\nvery well.\n\nI have it on my to-do list to incorporate libtool into the Postgres\nbuild system, but I have been putting off actually doing anything,\nbecause I don't think that the current release of libtool is quite there\non supporting multiple levels of library dependencies (pgtclsh depends\non libpgtcl.so depends on libpq.so...). This feature was originally\npromised for libtool 1.3 but has been put off to 1.4.\n\nIn the meantime, I'd suggest living with the static-library build, or\nelse installing libpq and then repeating the build step for psql...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 21 Jun 1999 17:35:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Building Postgres "
},
{
"msg_contents": "> In the meantime, I'd suggest living with the static-library build, or\n> else installing libpq and then repeating the build step for psql...\n\nFor v6.5, I think I'm going to do the phased build (the \"repeating\"\noption). But I'm pretty sure I'll do it with a patch to the top level\nMakefile, and will suggest it as a feature for v6.6, so this phasing\ncan be repeated.\n\nIt's an interesting problem, since preparing an rpm involves building\nthe entire app, and then copying pieces like libraries into the binary\nrpm file. For the basic pieces this looks fairly straightforward,\nexcept for the phasing problem mentioned above, but for some packages\nlike perl and python it apparently involves modifying configuration\nfiles in the perl or python distribution itself. Hopefully there will\nbe a reasonable way to do it...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 22 Jun 1999 05:44:37 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Building Postgres"
},
{
"msg_contents": "> For v6.5, I think I'm going to do the phased build (the \"repeating\"\n> option).\n\nOK, I think I've got a good start at a Postgres-6.5 rpm set. It will\nhave the apps using shared libraries, rather than static links. It\nincludes more interfaces than past rpms, including ODBC, and I've\nseparated out the language-specific features into separate rpms (e.g.\nthe tcl interfaces are in postgres-tcl-6.5-1.rpm).\n\nI'm now trying to package the perl (and next, python) interfaces. Can\nsomeone with perl installation experience give me some hints on what\nactually needs to be installed and how it has to happen?\n\nThe Postgres source tree uses a perl-based make system which ends up\nwith very installation-specific and perl-version-specific target\npaths, but I don't know if these paths are actually used in the final\nproduct. Will I need to put Makefile.PL, etc., in the binary rpm\nitself, and build the perl interface on the fly for every target\nmachine? Can I instead just plop some files into the proper place on\nthe target machine in a version-independent way?\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Mon, 28 Jun 1999 15:02:56 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Perl library (was Building Postgres)"
},
{
"msg_contents": "\n> I'm now trying to package the perl (and next, python) interfaces. Can\n> someone with perl installation experience give me some hints on what\n> actually needs to be installed and how it has to happen?\n\nMe thinks the guy who's building the Debian packages should have some\nexperience with these. IIRC that'd be Oliver Elphick (?).\n\n(just imagine, redhat and debian cooperating, hell must be freezing\nover ;)\n\nMaarten\n\n-- \n\nMaarten Boekhold, [email protected]\nTIBCO Finance Technology Inc.\nThe Atrium\nStrawinskylaan 3051\n1077 ZX Amsterdam, The Netherlands\ntel: +31 20 3012158, fax: +31 20 3012358\nhttp://www.tibco.com\n",
"msg_date": "Mon, 28 Jun 1999 17:14:48 +0200",
"msg_from": "Maarten Boekhold <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Perl library (was Building Postgres)"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> \n> > For v6.5, I think I'm going to do the phased build (the \"repeating\"\n> > option).\n> \n> OK, I think I've got a good start at a Postgres-6.5 rpm set. It will\n> have the apps using shared libraries, rather than static links. It\n> includes more interfaces than past rpms, including ODBC, and I've\n> separated out the language-specific features into separate rpms (e.g.\n> the tcl interfaces are in postgres-tcl-6.5-1.rpm).\n> \n> I'm now trying to package the perl (and next, python) interfaces. Can\n> someone with perl installation experience give me some hints on what\n> actually needs to be installed and how it has to happen?\n> \n> The Postgres source tree uses a perl-based make system which ends up\n> with very installation-specific and perl-version-specific target\n> paths, but I don't know if these paths are actually used in the final\n> product. Will I need to put Makefile.PL, etc., in the binary rpm\n> itself, and build the perl interface on the fly for every target\n> machine? Can I instead just plop some files into the proper place on\n> the target machine in a version-independent way?\n\nThe incantation\nperl -MConfig -e 'print $Config{archlib},\"\\n\"'\n\nwill give you the directory where things need to go.\n\nThe pm file goes directly in archlib. The sharedlib and the bootstrap\nfile go in <archlib>/auto/<extension-name>\n\n<archlib>/Postgres.pm\n<archlib>/auto/Postgres/Postgres.so\n<archlib>/auto/Postgres/Postgres.bs\n\nThat would be a start.\n\n---\nMark Hollomon\[email protected]\n",
"msg_date": "Mon, 28 Jun 1999 11:50:27 -0400",
"msg_from": "\"Mark Hollomon\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Perl library (was Building Postgres)"
},
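The layout Mark lists can be computed mechanically from the archlib directory (the one reported by `perl -MConfig -e 'print $Config{archlib},"\n"'`). A small sketch, assuming a made-up archlib path and using `Postgres` as the extension name:

```python
# Build the .pm / .so / .bs install paths for a Perl XS extension,
# following the <archlib>/auto/<extension>/ convention described above.
# The archlib value passed in below is a hypothetical example path.
import os.path

def perl_module_paths(archlib, extension):
    """Return the Perl module, shared-library and bootstrap file paths."""
    auto = os.path.join(archlib, "auto", extension)
    return [
        os.path.join(archlib, extension + ".pm"),
        os.path.join(auto, extension + ".so"),
        os.path.join(auto, extension + ".bs"),
    ]

paths = perl_module_paths("/usr/lib/perl5/i386-linux/5.004", "Postgres")
for p in paths:
    print(p)
```

This only mirrors the naming convention; the authoritative layout comes from running the extension's own Makefile.PL through MakeMaker, as discussed elsewhere in the thread.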
{
"msg_contents": "Maarten Boekhold wrote:\n >\n >> I'm now trying to package the perl (and next, python) interfaces. Can\n >> someone with perl installation experience give me some hints on what\n >> actually needs to be installed and how it has to happen?\n >\n >Me thinks the guy who's building the Debian packages should have some\n >experience with these. IIRC that'd be Oliver Elphick (?).\n \nThe Perl package gave me the most trouble; the more so since I rarely\nuse Perl and only have the vaguest notion of what's going on in the pgperl\nbuild!\n\nHere are extracts from the Debian makefile:\n\nHERE := $(shell pwd)\ndebtmp= $(HERE)/debian/tmp # pgperl package gets put in \n # $(debtmp)/../libpgperl\n\nsrc/config.cache:\n cd src &&\\\n echo /usr/include/ncurses /usr/include/readline | \\\n ./configure --prefix=$(HERE)/debian/tmp/usr/lib/postgresql \\\n --with-template=$(TEMPLATE) \\\n --with-tcl \\\n --enable-locale \\\n --with-pgport=5432\n\nperl-config: src/config.cache\n cd src/interfaces/perl5 && \\\n INSTALLDIRS=perl \\\n PREFIX=$(debtmp)/usr \\\n POSTGRES_HOME=$(debtmp)/usr/lib/postgresql \\\n INSTALLMAN1DIR=$(debtmp)/usr/man/man1 \\\n INSTALLMAN3DIR=$(debtmp)/usr/man/man3 \\\n OVERRIDE=true \\\n perl Makefile.PL && \\\n touch perl-config\n\n\nperl-build: build\n cp -a src/include $(debtmp)/usr/lib/postgresql/\n cp src/interfaces/libpq/*.h $(debtmp)/usr/lib/postgresql/include\n cd src/interfaces/perl5 && \\\n $(MAKE) PREFIX=$(debtmp)/usr \\\n POSTGRES_HOME=$(debtmp)/usr/lib/postgresql \\\n INSTALLDIRS=perl \\\n INSTALLMAN1DIR=$(debtmp)/usr/man/man1 \\\n INSTALLMAN3DIR=$(debtmp)/usr/man/man3 \\\n LDDLFLAGS=\"-shared -L$(debtmp)/usr/lib\" \\\n LDFLAGS=-L../libpq \\\n LDLOADLIBS=\"-L../libpq -lpq -lc\"\n rm -rf $(debtmp)/usr/lib/postgresql/include\n\n\nbinary-arch: build-test perl-build install-python\n # patch current arch into libpgperl's directory and file lists\n sed 's/%ARCH%/$(ARCH)/g' <debian/libpgperl.dirs.in >debian/libpgperl.dirs\n sed 's/%ARCH%/$(ARCH)/g' <debian/libpgperl.files.in >debian/libpgperl.files\n dh_installdirs -a\n # install files into the debian/<package> trees\n ...\n cd src/interfaces/perl5 && \\\n $(MAKE) PREFIX=$(debtmp)/usr \\\n POSTGRES_HOME=$(debtmp)/usr/lib/postgresql \\\n INSTALLDIRS=perl \\\n INSTALLMAN1DIR=$(debtmp)/usr/man/man1 \\\n INSTALLMAN3DIR=$(debtmp)/usr/man/man3 \\\n LDDLFLAGS=\"-shared -L$(debtmp)/usr/lib\" \\\n LDFLAGS=-L../libpq \\\n pure_install\n ...\n rm -f debian/libpgperl/usr/lib/perl5/i386-linux/5.004/auto/Pg/.packlist\n rm -rf $(debtmp)/usr/lib/perl5\n rm -rf debian/libpgperl/usr/lib/perl5/i386\n\n\n\nIf you want the full works, download the Debian source package for\nPostgreSQL - 6.4.2 will do (Debian's current unstable version, shortly\nto be replaced by 6.5)\n\n >(just imagine, redhat and debian cooperating, hell must be freezing\n >over ;)\n \nJust comfortably warm, thanks!\n\n-- \n Vote against SPAM: http://www.politik-digital.de/spam/\n ========================================\nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"The Spirit of the Lord is upon me, because he hath \n anointed me to preach the gospel to the poor; he hath \n sent me to heal the brokenhearted, to preach \n deliverance to the captives, and recovering of sight \n to the blind, to set at liberty them that are \n bruised...\" Luke 4:18 \n\n",
"msg_date": "Mon, 28 Jun 1999 17:20:54 +0100",
"msg_from": "\"Oliver Elphick\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Perl library (was Building Postgres) "
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> The Postgres source tree uses a perl-based make system which ends up\n> with very installation-specific and perl-version-specific target\n> paths, but I don't know if these paths are actually used in the final\n> product. Will I need to put Makefile.PL, etc., in the binary rpm\n> itself, and build the perl interface on the fly for every target\n> machine? Can I instead just plop some files into the proper place on\n> the target machine in a version-independent way?\n\nI believe you would be making unsafe assumptions about both the\ninstalled version of Perl and the location of the Perl install tree\nif you do not run through the regular Perl module install procedure\n(\"perl Makefile.PL ; make ; make install\"). There is also a permissions\nissue, although if rpms are normally unpacked as root that might not\nmatter.\n\nI'm not very familiar with the RPM installation culture --- perhaps you\ncan get away with packaging a Perl module that is dependent on the\nassumption that a particular existing RPM of Perl has been installed.\nBut I'd suggest keeping it separate from the main Postgres RPM ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 28 Jun 1999 13:26:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Perl library (was Building Postgres) "
},
{
"msg_contents": "Oliver Elphick wrote:\n> \n> Maarten Boekhold wrote:\n> >\n> >> I'm now trying to package the perl (and next, python) interfaces. Can\n> >> someone with perl installation experience give me some hints on what\n> >> actually needs to be installed and how it has to happen?\n> >\n> >Me thinks the guy who's building the Debian packages should have some\n> >experience with these. IIRC that'd be Oliver Elphick (?).\n> \n> The Perl package gave me the most trouble; the more so since I rarely\n> use Perl and only have the vaguest notion of what's going on in the pgperl\n> build!\n> \n\n\nI don't know the peculiarities of every distribution, but the following\nwill work in any case:\n\ntar xvzf postgres-6.5.tar.gz\ncd $POSTGRES_HOME/src\n./configure\nmake install\ninitdb --pgdata=/usr/local/pgsql/data --pglib=/usr/local/pgsql/lib\npostmaster -S -D /usr/local/pgsql/data\ncreateuser my_userid\n\nmake sure, the dynamic linker can find libpq.so\neg: export LD_LIBRARY_PATH=/usr/local/pgsql/lib\n\ncd interfaces/pgsql_perl5\nperl Makefile.PL\nmake test\nsu\nmake install\n\n\nEdmund\n\n-- \nEdmund Mergl\nmailto:[email protected]\nhttp://www.bawue.de/~mergl\n",
"msg_date": "Mon, 28 Jun 1999 21:21:33 +0200",
"msg_from": "Edmund Mergl <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Perl library (was Building Postgres)"
},
{
"msg_contents": "> cd interfaces/pgsql_perl5\n> perl Makefile.PL\n> make test\n> su\n> make install\n\nThe rpm format would prefer to build *all* files on a source machine,\nand then move them from the rpm file into the appropriate places on\ntarget machines. The problem is that, apparently,\n perl Makefile.PL\ngenerates paths which are *very* specific to the version of perl on\nthe source machine, and which may not be compatible with versions of\nperl on the target machines. Assuming that the code generated is a bit\nmore tolerant of version changes in perl, then I need to figure out\nwhere the code would go on the target machines. One possibility is to\nsimply lift all of the perl5 source tree into the rpm, and actually do\nthe build on the target machine from scratch. afaik, this is *not* the\npreferred style for rpms.\n\nMark Hoffman and Oliver Elphick (who also has helped with my python\nquestions) have given me some good clues; I'll keep asking questions\nuntil I can get something which works...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 29 Jun 1999 01:36:11 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Perl library (was Building Postgres)"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> ... Assuming that the code generated is a bit\n> more tolerant of version changes in perl,\n\nI believe that's nearly as risky as hardwiring the install path.\nFor example, we already know that the existing perl5 interface\n*source* code is broken for the latest Perl releases (5.005something),\nnevermind trying to make the object code compatible. (I'm going\nto try to figure out whether we can tweak the source to work under\neither version ... it may take conditional compilation :-( ... if\nanyone else is in a bigger hurry than me, be my guest ...)\n\n> One possibility is to\n> simply lift all of the perl5 source tree into the rpm, and actually do\n> the build on the target machine from scratch. afaik, this is *not* the\n> preferred style for rpms.\n\nIt may be swimming upstream in the RPM culture, but it should work\nand work reliably. *Not* doing the expected configuration on the\ntarget machine will be swimming upstream in the Perl culture, and\nI'll wager that the undertow is a lot more dangerous in that case.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 28 Jun 1999 22:04:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Perl library (was Building Postgres) "
},
{
"msg_contents": "> It may be swimming upstream in the RPM culture, but it should work\n> and work reliably. *Not* doing the expected configuration on the\n> target machine will be swimming upstream in the Perl culture, and\n> I'll wager that the undertow is a lot more dangerous in that case.\n\nYup. I'll probably end up trying to package all of the source code\ninto the binary rpms, with an install script. But I think my first cut\nwill try to force the generated files into the correct place. I've got\nlots of other interfaces to handle, and want to get the rpms out as a\nbeta trial asap.\n\nWe'll see how long it takes someone to break the rpm :/\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 29 Jun 1999 04:57:07 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Perl library (was Building Postgres)"
},
{
"msg_contents": "Then <[email protected]> spoke up and said:\n> \n> > It may be swimming upstream in the RPM culture, but it should work\n> > and work reliably. *Not* doing the expected configuration on the\n> > target machine will be swimming upstream in the Perl culture, and\n> > I'll wager that the undertow is a lot more dangerous in that case.\n> \n> Yup. I'll probably end up trying to package all of the source code\n> into the binary rpms, with an install script. But I think my first cut\n> will try to force the generated files into the correct place. I've got\n> lots of other interfaces to handle, and want to get the rpms out as a\n> beta trial asap.\n\nWouldn't it be better to create a CPAN package and distribute it from\n*there*? I realize that this method has the problem that package\nupdates and PostgreSQL updates could become desynchronized, but I\nthink this would address the issue adequately.\n\n-- \n=====================================================================\n| JAVA must have been developed in the wilds of West Virginia. |\n| After all, why else would it support only single inheritance?? |\n=====================================================================\n| Finger [email protected] for my public key. |\n=====================================================================",
"msg_date": "29 Jun 1999 10:49:26 -0400",
"msg_from": "Brian E Gallew <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Perl library (was Building Postgres)"
},
{
"msg_contents": "> Wouldn't it be better to create a CPAN package and distribute it from\n> *there*? I realize that this method has the problem that package\n> updates and PostgreSQL updates could become desynchronized, but I\n> think this would address the issue adequately.\n\nWell, the problem I'm trying to solve is rpm packaging, which is not\nnecessarily the same as solving the perl distribution issue.\nHowever...\n\nWould a CPAN package be more amenable to an rpm packaging? That is, if\nwe had a CPAN distribution (generated locally, of course), could I\nplop that into an rpm and have a standard, easy procedure to follow\nwithin the rpm to get the stuff extracted and installed onto a\nmachine?? I'm blissfully ignorant about CPAN and the packaging\nconventions, but would like suggestions.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 29 Jun 1999 15:28:08 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Perl library (was Building Postgres)"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> Wouldn't it be better to create a CPAN package and distribute it from\n>> *there*?\n\n> Would a CPAN package be more amenable to an rpm packaging? That is, if\n> we had a CPAN distribution (generated locally, of course), could I\n> plop that into an rpm and have a standard, easy procedure to follow\n> within the rpm to get the stuff extracted and installed onto a\n> machine?? I'm blissfully ignorant about CPAN and the packaging\n> conventions, but would like suggestions.\n\nI believe that what you find in the interfaces/perl5 subdirectory\n*is* a CPAN package. Tarred and gzipped, that fileset could be\nsubmitted to CPAN (or it could be if it was self-contained, rather than\ndependent on libpq, that is). \"perl Makefile.PL; make; make install\"\nis precisely what Perl users expect to have to do with a CPAN package.\n\nI'm not sure if it's worth trying to come up with a self-contained\nCPAN package or not --- we could probably make one, using libpq sources\nand the necessary backend include files, but would it really be worth\nmuch to anyone who didn't also have a Postgres server? Seems like you\nneed the full distribution anyway, in most situations.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 29 Jun 1999 17:32:05 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Perl library (was Building Postgres) "
},
{
"msg_contents": "On Tue, 29 Jun 1999, Thomas Lockhart wrote:\n\n> The rpm format would prefer to build *all* files on a source machine,\n> and then move them from the rpm file into the appropriate places on\n> target machines. The problem is that, apparently,\n> perl Makefile.PL\n> generates paths which are *very* specific to the version of perl on\n\nThat should not be the case. It depends on how the Postgres package\nMakefile.PL is written, but in general it should go into a general use\nperl5 directory, such as /usr/lib/perl5/site_perl, which is anything but\nperl version dependant. If it generates paths that are perl version\ndependent then the Makefile.PL is busted.\n\n> One possibility is to\n> simply lift all of the perl5 source tree into the rpm, and actually do\n> the build on the target machine from scratch. afaik, this is *not* the\n> preferred style for rpms.\n\nNo, that is definitely not th way to handle packages distributed by rpm.\n\nCristian\n--\n----------------------------------------------------------------------\nCristian Gafton -- [email protected] -- Red Hat, Inc.\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n UNIX is user friendly. It's just selective about who its friends are.\n\n\n\n",
"msg_date": "Wed, 30 Jun 1999 00:24:17 -0400 (EDT)",
"msg_from": "Cristian Gafton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Perl library (was Building Postgres)"
},
{
"msg_contents": "On Mon, 28 Jun 1999, Thomas Lockhart wrote:\n\n> machine? Can I instead just plop some files into the proper place on\n> the target machine in a version-independent way?\n\nOn a Red Hat system you can use /usr/lib/perl5/site_perl, for example.\nThat is not dependent on perl version.\n\nCristian\n--\n----------------------------------------------------------------------\nCristian Gafton -- [email protected] -- Red Hat, Inc.\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n UNIX is user friendly. It's just selective about who its friends are.\n\n\n\n",
"msg_date": "Wed, 30 Jun 1999 00:26:12 -0400 (EDT)",
"msg_from": "Cristian Gafton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Perl library (was Building Postgres)"
},
{
"msg_contents": "Cristian Gafton wrote:\n> \n> On Tue, 29 Jun 1999, Thomas Lockhart wrote:\n> \n> > The rpm format would prefer to build *all* files on a source machine,\n> > and then move them from the rpm file into the appropriate places on\n> > target machines. The problem is that, apparently,\n> > perl Makefile.PL\n> > generates paths which are *very* specific to the version of perl on\n> \n> That should not be the case. It depends on how the Postgres package\n> Makefile.PL is written, but in general it should go into a general use\n> perl5 directory, such as /usr/lib/perl5/site_perl, which is anything but\n> perl version dependant. If it generates paths that are perl version\n> dependent then the Makefile.PL is busted.\n> \n\n\n/usr/local/lib/perl5/site_perl/5.005/i686-linux/Pg.pm\n\nthis is the standard path for all additionally installed modules.\nIt depends on the perl version as well as on the system type.\n\n\n\n> > One possibility is to\n> > simply lift all of the perl5 source tree into the rpm, and actually do\n> > the build on the target machine from scratch. afaik, this is *not* the\n> > preferred style for rpms.\n> \n> No, that is definitely not th way to handle packages distributed by rpm.\n> \n> Cristian\n> --\n> ----------------------------------------------------------------------\n> Cristian Gafton -- [email protected] -- Red Hat, Inc.\n> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n> UNIX is user friendly. It's just selective about who its friends are.\n\n\n\nEdmund\n\n-- \nEdmund Mergl\nmailto:[email protected]\nhttp://www.bawue.de/~mergl\n",
"msg_date": "Wed, 30 Jun 1999 06:47:12 +0200",
"msg_from": "Edmund Mergl <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Perl library (was Building Postgres)"
},
{
"msg_contents": "Cristian Gafton wrote:\n> \n> On Mon, 28 Jun 1999, Thomas Lockhart wrote:\n> \n> > machine? Can I instead just plop some files into the proper place on\n> > the target machine in a version-independent way?\n> \n> On a Red Hat system you can use /usr/lib/perl5/site_perl, for example.\n> That is not dependent on perl version.\n> \n> Cristian\n> --\n> ----------------------------------------------------------------------\n> Cristian Gafton -- [email protected] -- Red Hat, Inc.\n> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n> UNIX is user friendly. It's just selective about who its friends are.\n\n\nif you install a perl module in the standard way (make install)\non a RedHat system, you will end up with the modules installed in:\n\n /usr/lib/perl5/site_perl/5.005/i386-linux/\n\nA standard Makfile.PL does not contain any information about \nthe target directories. This is always handled by perl itself.\n\n\n\nEdmund\n\n\n-- \nEdmund Mergl\nmailto:[email protected]\nhttp://www.bawue.de/~mergl\n",
"msg_date": "Wed, 30 Jun 1999 06:56:20 +0200",
"msg_from": "Edmund Mergl <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Perl library (was Building Postgres)"
},
{
"msg_contents": "Cristian Gafton <[email protected]> writes:\n> On Tue, 29 Jun 1999, Thomas Lockhart wrote:\n>> The problem is that, apparently,\n>> perl Makefile.PL\n>> generates paths which are *very* specific to the version of perl on\n\n> That should not be the case. It depends on how the Postgres package\n> Makefile.PL is written, but in general it should go into a general use\n> perl5 directory, such as /usr/lib/perl5/site_perl, which is anything but\n> perl version dependant. If it generates paths that are perl version\n> dependent then the Makefile.PL is busted.\n\nAu contraire. That may be true for pure-Perl packages, but packages\nthat involve compiled code (as our perl5 interface surely does) go\ninto strongly version-dependent directories. On my box, for example,\nthe install procedure wants to put stuff into both\n/opt/perl5/lib/site_perl/PA-RISC1.1/auto/Pg/ and\n/opt/perl5/lib/PA-RISC1.1/5.00404/\nwhich means it is dependent on (a) where the Perl install tree is\n(/opt/perl5 is standard on SysV-derived systems, but not elsewhere),\n(b) the hardware architecture (PA-RISC1.1 here), and (c) the Perl\nsubversion (5.004_04 here).\n\nThe dependence on hardware architecture is obviously essential, since\nthe compiled code could not work anywhere else. You can quibble about\nwhether code compiled against 5.004_04 Perl headers would work on a\nwider range of Perl versions, but the fundamental bottom line is that\nPerl expects addon modules to be transported in source form and\ncompiled against the local installation. Violating that assumption\nis a recipe for trouble.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 30 Jun 1999 00:59:05 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Perl library (was Building Postgres) "
},
{
"msg_contents": "On Wed, 30 Jun 1999, Edmund Mergl wrote:\n\n> > Makefile.PL is written, but in general it should go into a general use\n> > perl5 directory, such as /usr/lib/perl5/site_perl, which is anything but\n> > perl version dependant. If it generates paths that are perl version\n> > dependent then the Makefile.PL is busted.\n> > \n> \n> \n> /usr/local/lib/perl5/site_perl/5.005/i686-linux/Pg.pm\n\nYou can drop it directly into /usr/local/lib/perl5/site_perl\n\n> this is the standard path for all additionally installed modules.\n> It depends on the perl version as well as on the system type.\n\nNope, this is the standard path for modules that are made to be dependent\non perl version and architecture.\n\nCristian\n--\n----------------------------------------------------------------------\nCristian Gafton -- [email protected] -- Red Hat, Inc.\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n UNIX is user friendly. It's just selective about who its friends are.\n\n\n\n",
"msg_date": "Wed, 30 Jun 1999 01:13:18 -0400 (EDT)",
"msg_from": "Cristian Gafton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Perl library (was Building Postgres)"
},
{
"msg_contents": "Hi Cristian! Glad you surfaced ;)\n\n> > machine? Can I instead just plop some files into the proper place on\n> > the target machine in a version-independent way?\n> On a Red Hat system you can use /usr/lib/perl5/site_perl, for example.\n> That is not dependent on perl version.\n\nI see that now. For some reason the postgres v6.4.2 spec file didn't\nquite handle the new perl tree, but between these hints and looking at\nthe mod_perl package (the perl extensions for apache) I think I see\nhow to do things.\n\nThanks for the help! I'm pretty sure I'm close to having some more\ncapable rpms for v6.5 than we've had in the past, but it wasn't at all\ntrivial! It's uncovered several small problems in our make system\nwhich prevented this from happening earlier.\n\nI'll be posting these on the Postgres web site for folks to test;\nwould you like me to send you a copy directly to look at? Somehow we\nshould coordinate this so my postgresql-6.5-1 doesn't conflict with\none coming from a RedHat distribution...\n\nRegards.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 30 Jun 1999 13:16:26 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Perl library (was Building Postgres)"
},
{
"msg_contents": "Cristian Gafton wrote:\n> \n> On Wed, 30 Jun 1999, Edmund Mergl wrote:\n> \n> > > Makefile.PL is written, but in general it should go into a general use\n> > > perl5 directory, such as /usr/lib/perl5/site_perl, which is anything but\n> > > perl version dependant. If it generates paths that are perl version\n> > > dependent then the Makefile.PL is busted.\n> > >\n> >\n> >\n> > /usr/local/lib/perl5/site_perl/5.005/i686-linux/Pg.pm\n> \n> You can drop it directly into /usr/local/lib/perl5/site_perl\n> \n\ncertainly you can, but you shouldn't. On our site we are running\nseveral hundred UNIX workstations including 5 different architectures,\nall of them using one NFS mounted /usr/local/lib/perl5 even with\ndifferent versions of perl. This works pretty well and relies on the \ninstallation scheme site_perl/arch/version.\nIf you start dropping modules directly into site_perl you will break\nthis installation scheme.\n\n\n> > this is the standard path for all additionally installed modules.\n> > It depends on the perl version as well as on the system type.\n> \n> Nope, this is the standard path for modules that are made to be dependent\n> on perl version and architecture.\n\n\nNo, if you do not prepare the Makefile.PL you will get as default the \ninstallation scheme described above. You have to make the module explicitely \nindependent of architecture and version, but this is not desirable.\n\n> \n> Cristian\n> --\n> ----------------------------------------------------------------------\n> Cristian Gafton -- [email protected] -- Red Hat, Inc.\n> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n> UNIX is user friendly. It's just selective about who its friends are.\n\n-- \nEdmund Mergl\nmailto:[email protected]\nhttp://www.bawue.de/~mergl\n",
"msg_date": "Wed, 30 Jun 1999 19:39:30 +0200",
"msg_from": "Edmund Mergl <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Perl library (was Building Postgres)"
}
] |
[
{
"msg_contents": "Hallo,\n\n I think, that I found reproducible bug:\n\n% sql template1\nWelcome to the POSTGRESQL interactive sql monitor:\n Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n[PostgreSQL 6.5.0 on i586-pc-linux-gnu, compiled by gcc egcs-2.91.66]\n\n type \\? for help on slash commands\n type \\q to quit\n type \\g or terminate with semicolon to execute query\n You are currently connected to the database: template1\n\ntemplate1=> insert into pg_group values ('dummies', 501, '{503}');\nINSERT 18784 1\ntemplate1=> select * from pg_user where usename = 'dummy';\nusename|usesysid|usecreatedb|usetrace|usesuper|usecatupd|passwd |valuntil\n-------+--------+-----------+--------+--------+---------+--------+--------\ndummy | 503|f |t |f |t |********| \n(1 row)\n\ntemplate1=> create table tab ( i int );\nCREATE\ntemplate1=> \\z tab\nDatabase = template1\n +----------+--------------------------+\n | Relation | Grant/Revoke Permissions |\n +----------+--------------------------+\n | tab | |\n +----------+--------------------------+\ntemplate1=> grant all on tab to group dummies;\nCHANGE\ntemplate1=> \\z tab\nDatabase = template1\n +----------+----------------------------+\n | Relation | Grant/Revoke Permissions |\n +----------+----------------------------+\n | tab | {\"=\",\"group dummies=arwR\"} |\n +----------+----------------------------+\ntemplate1=> delete from pg_group;\nDELETE 1\ntemplate1=> \\z tab\nNOTICE: get_groname: group 501 not found\npqReadData() -- backend closed the channel unexpectedly.\n\tThis probably means the backend terminated abnormally\n\tbefore or while processing the request.\n\n\n\nAnd in log is:\nPGSQL: NOTICE: get_groname: group 501 not found \nPGSQL: NOTICE: Message from PostgreSQL backend: \nPGSQL: ^IThe Postmaster has informed me that some other backend died\nabnormally and possibly corrupted shared memory.\nPGSQL: ^II have rolled back the current transaction and am going to terminate your database system connection 
and exit. \nPGSQL: ^IPlease reconnect to the database system and repeat your query. \n \n\nThis look like that \\z causes (if group doesn't more exist) database crash\n... and I hate crashes ...\n-- \n* David Sauer, student of Czech Technical University\n* electronic mail: [email protected] (mime compatible)\n",
"msg_date": "21 Jun 1999 08:24:32 +0200",
"msg_from": "David Sauer <[email protected]>",
"msg_from_op": true,
"msg_subject": "crash if group doesn't exist (postgres 6.5, linux 2.2.10, rh 6.0)"
}
] |
[
{
"msg_contents": "\n> I suppose it wouldn't be overly hard to have pg_dump/pg_dumpall do\n> something similar to what postgres does with segments. I haven't looked\n> at it yet however, so I can't say for sure.\n> \nI would not integrate such functionality into pg_dump, since it is not\nnecessary.\nA good thing though would be a little HOWTO on splitting and/or compressing \npg_dump output.\n\nThe principle is:\n\nbackup:\nmkfifo tapepipe\n( gzip --fast -c < tapepipe | split -b512m - database.dump.gz. ) &\npg_dump -f tapepipe regression\nrm tapepipe\n\nrestore:\ncreatedb regression\ncat database.dump.gz.* | gzip -cd | psql regression\n\nInstead of gzip you could use a faster compressor like lzop, but you get the\nidea :-)\n\nAndreas\n",
"msg_date": "Mon, 21 Jun 1999 09:46:51 +0200",
"msg_from": "Zeugswetter Andreas IZ5 <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] tables > 1 gig"
},
{
"msg_contents": "> A good thing though would be a little HOWTO on splitting and/or \n> compressing pg_dump output.\n\nI already snarfed the description from Hannu Krosing and have put it\ninto manage-ag.sgml (which I've not yet committed to the web site;\nwill do so soon).\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 22 Jun 1999 05:52:20 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] tables > 1 gig"
}
] |
[
{
"msg_contents": "I sent this message to pgsql-sql but got no reply and since I'm not\nsure if it's a missing feature, bug or something else I'll try\nsending it here. Briefly, I'm getting the error\n ERROR: ExecAgg: Bad Agg->Target for Agg 0\nwhen layering views to get \"nested\" aggregates.\n\nI tried the following SQL under PostgreSQL 6.3 and 6.4:\n\n create table contents (\n id int not null,\n ix int not null,\n volid int not null,\n level int not null,\n bdate datetime not null\n );\n\n create view freecount as\n select c1.id as id, c1.ix as ix, count(c2.ix) as freeness\n from contents c1, contents c2\n where c1.volid = c2.volid\n and c1.bdate <= c2.bdate\n and c1.level >= c2.level\n group by c1.id, c1.ix;\n\n\nUnder 6.3, doing the view creation as an ordinary users I got\n ERROR: pg_rewrite: Permission denied.\nwhich, if I recall, means postgres view support wasn't quite up to\nletting everyone creates views. Doing the view creation as the\npostgres superuser succeeded but doing\n select * from freecount;\nthen crashed the backend.\n\nSo I installed the recently announced postsgres 6.4 RPM for Linux and\ntried again. This time, I could create the view as a normal user and\nit worked fine for that simple select. However, what I actually want\nto do on top of that view is\n\n create view freetapes as\n select id, min(freeness) - 1\n from freecount\n group by id;\n\n(i.e. do the nested aggregation that SQL syntax won't let me do\ndirectly.) That view creates successfully but doing a\n select * from freetapes\nproduces the error message\n\n ERROR: ExecAgg: Bad Agg->Target for Agg 0\n\nand doing the explicit query\n\n select id, min(freeness) - 1\n from freecount\n group by id;\n\ngives the same message. I'm not familiar with postgres internals but\nit looks as though the internal handling of views is still having\ntrouble with those two levels of aggregations despite the underlying\nqueries being OK. 
As a data point, the view creation and queries work\nfine under Informix IDS 7.3 and Sybase. Is this problem with postgres\nsomething which is a fixable bug, a missing feature request that is\nplanned to arrive soon (maybe it's in 6.5?) or a missing feature which\nisn't going to happen any time soon?\n\n--Malcolm\n\n-- \nMalcolm Beattie <[email protected]>\nUnix Systems Programmer\nOxford University Computing Services\n",
"msg_date": "Tue, 22 Jun 1999 10:36:35 +0100 (BST)",
"msg_from": "Malcolm Beattie <[email protected]>",
"msg_from_op": true,
"msg_subject": "Views which lead to nested aggregates"
},
{
"msg_contents": "Malcolm Beattie wrote:\n\n> ...\n> gives the same message. I'm not familiar with postgres internals but\n> it looks as though the internal handling of views is still having\n> trouble with those two levels of aggregations despite the underlying\n> queries being OK. As a data point, the view creation and queries work\n> fine under Informix IDS 7.3 and Sybase. Is this problem with postgres\n> something which is a fixable bug, a missing feature request that is\n> planned to arrive soon (maybe it's in 6.5?) or a missing feature which\n> isn't going to happen any time soon?\n\n Up to now (v6.5) this kind of nested aggregates isn't\n supported. Not directly over SQL, nor by views. To be sure\n anything is fine, your views (and however you select from\n them) should be expressable with a regular SELECT too. In\n fact the rewrite system has to try to build such a query for\n it - so if you can't how should the rewriter can?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 22 Jun 1999 12:22:16 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Views which lead to nested aggregates"
},
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n> Up to now (v6.5) this kind of nested aggregates isn't\n> supported. Not directly over SQL, nor by views. To be sure\n> anything is fine, your views (and however you select from\n> them) should be expressable with a regular SELECT too. In\n> fact the rewrite system has to try to build such a query for\n> it - so if you can't how should the rewriter can?\n\nStill, it ought to either do it or produce a useful error message.\n6.4's error message doesn't qualify as useful in my book. But 6.5's\nbehavior is far worse: it accepts the query and cheerfully generates\na wrong result! That's definitely a bug.\n\nLooking ahead to the larger problem, I believe that the executor is\nperfectly capable of handling nested aggregate plans --- the trick is\nto get the planner to produce one. Maybe we need an extension to the\nparsetree language? It doesn't seem like this ought to be hard to\nsupport, it's just that there's no parsetree configuration that\nrepresents what we want done. Or, maybe we should rethink the division\nof labor between the rewriter and planner --- if the rewriter could\noutput a partially-converted plan tree, instead of a parse tree, then\nit could do as it pleased, but still leave the messy details of lowlevel\nplan optimization to the planner.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 22 Jun 1999 10:30:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Views which lead to nested aggregates "
}
] |
[
{
"msg_contents": "subscribe\n",
"msg_date": "Tue, 22 Jun 1999 15:15:59 +0200",
"msg_from": "Guillaume Lairloup <[email protected]>",
"msg_from_op": true,
"msg_subject": "None"
}
] |
[
{
"msg_contents": "Well, can't explain the why's...\n\nBut I have the code to add to cash.c and cash.h to add the conversion \nfunctions, but still have to figure out how to get PostgreSQL to recognize \nit... Guessing... it's in fmgrtab.c right?\n\nDuane\n\n> Can someone explain why our money type in 6.5 requires quotes, and why\n> there is no int() function for it?\n> \n> ---------------------------------------------------------------------------\n> \n> \n> test=> create table t(x money);\n> CREATE\n> test=> insert into t values (3.3);\n> ERROR: Attribute 'x' is of type 'money' but expression is of type 'float8'\n> You will need to rewrite or cast the expression\n> test=> insert into t values (3.33);\n> ERROR: Attribute 'x' is of type 'money' but expression is of type 'float8'\n> You will need to rewrite or cast the expression\n> test=> insert into t values (money(3.33));\n> ERROR: No such function 'money' with the specified attributes\n> test=> insert into t values (cash(3.33));\n> ERROR: No such function 'cash' with the specified attributes\n> test=> insert into t values (3.33);\n> ERROR: Attribute 'x' is of type 'money' but expression is of type 'float8'\n> You will need to rewrite or cast the expression\n> test=> insert into t values ('3.33');\n> INSERT 18569 1\n> test=> select int(x) from t;\n> ERROR: No such function 'int' with the specified attributes\n> test=> select int4(x) from t;\n> ERROR: No such function 'int4' with the specified attributes\n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\n",
"msg_date": "Tue, 22 Jun 1999 17:39:19 +0000 (AST)",
"msg_from": "Duane Currie <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] money data type and conversions"
},
{
"msg_contents": "Can someone explain why our money type in 6.5 requires quotes, and why\nthere is no int() function for it?\n\n---------------------------------------------------------------------------\n\n\ntest=> create table t(x money);\nCREATE\ntest=> insert into t values (3.3);\nERROR: Attribute 'x' is of type 'money' but expression is of type 'float8'\n You will need to rewrite or cast the expression\ntest=> insert into t values (3.33);\nERROR: Attribute 'x' is of type 'money' but expression is of type 'float8'\n You will need to rewrite or cast the expression\ntest=> insert into t values (money(3.33));\nERROR: No such function 'money' with the specified attributes\ntest=> insert into t values (cash(3.33));\nERROR: No such function 'cash' with the specified attributes\ntest=> insert into t values (3.33);\nERROR: Attribute 'x' is of type 'money' but expression is of type 'float8'\n You will need to rewrite or cast the expression\ntest=> insert into t values ('3.33');\nINSERT 18569 1\ntest=> select int(x) from t;\nERROR: No such function 'int' with the specified attributes\ntest=> select int4(x) from t;\nERROR: No such function 'int4' with the specified attributes\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 22 Jun 1999 13:40:09 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "money data type and conversions"
},
{
"msg_contents": "Bruce,\n\nTODO almost done:\n\nI have three files which implement two functions to convert from \nmoney to integer and from integer to money. Tested it out... works\n\nWho should I send these to to have the changes applied to a later release?\n\nThanx,\nDuane\n\n\n> > Thus spake Bruce Momjian\n> > > Can someone explain why our money type in 6.5 requires quotes, and why\n> > > there is no int() function for it?\n> > \n> > Good question. I wonder if #2 is the answer to #1.\n> > \n> \n> Added to TODO:\n> \n> * Money type requires quotes for input, and no coversion functions\n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\n",
"msg_date": "Tue, 22 Jun 1999 17:03:56 -0300 (ADT)",
"msg_from": "Duane Currie <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] money data type and conversions"
},
{
"msg_contents": "\nOn 22-Jun-99 Bruce Momjian wrote:\n> Can someone explain why our money type in 6.5 requires quotes, and why\n> there is no int() function for it?\n\nDunno about the int() stuff, but it seems that I've always had to quote\nmoney. I ass-u-me d it had to do with the $ sign, 'cuze using a float\nwould cause it to crab about the wrong data type.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n",
"msg_date": "Tue, 22 Jun 1999 17:02:03 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] money data type and conversions"
},
{
"msg_contents": "Thus spake Bruce Momjian\n> Can someone explain why our money type in 6.5 requires quotes, and why\n> there is no int() function for it?\n\nGood question. I wonder if #2 is the answer to #1.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Tue, 22 Jun 1999 17:14:48 -0400 (EDT)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] money data type and conversions"
},
{
"msg_contents": "> Thus spake Bruce Momjian\n> > Can someone explain why our money type in 6.5 requires quotes, and why\n> > there is no int() function for it?\n> \n> Good question. I wonder if #2 is the answer to #1.\n> \n\nAdded to TODO:\n\n * Money type requires quotes for input, and no coversion functions\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 22 Jun 1999 18:06:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] money data type and conversions"
},
{
"msg_contents": "> Bruce,\n> \n> TODO almost done:\n> \n> I have three files which implement two functions to convert from \n> money to integer and from integer to money. Tested it out... works\n> \n> Who should I send these to to have the changes applied to a later release?\n\nSend them over to the patches list. We will apply them to 6.6 because\nthey will require a dump/restore. Thomas will probably do something\nwith them and binary compatible types.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 22 Jun 1999 20:29:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] money data type and conversions"
},
{
"msg_contents": "> > > Can someone explain why our money type in 6.5 requires quotes, and why\n> > > there is no int() function for it?\n> > Good question. I wonder if #2 is the answer to #1.\n> Added to TODO:\n> * Money type requires quotes for input, and no coversion functions\n\nAnd while you are at it, add one more entry:\n\n * Remove money type\n\nNUMERIC and DECIMAL are (or should be, if there are rough edges since\nthey are so new) are the SQL92-way to represent currency. And, they\nare compatible with all different conventions, since you can set the\ndecimal place and size of the fractional part as you want.\n\nWe didn't remove the money type for v6.5 since the newer types are so,\nuh, new. But if there are no reported, unfixable problems we should\ndrop the money type for the next release.\n\nAs a sop to make the conversion easier, we can equivalence \"money\" to\n\"numeric(xx,2)\" at that time.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 23 Jun 1999 01:52:20 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] money data type and conversions"
},
{
"msg_contents": "> > > > Can someone explain why our money type in 6.5 requires quotes, and why\n> > > > there is no int() function for it?\n> > > Good question. I wonder if #2 is the answer to #1.\n> > Added to TODO:\n> > * Money type requires quotes for input, and no coversion functions\n> \n> And while you are at it, add one more entry:\n> \n> * Remove money type\n\nAdded to TODO:\n\n\t* Remove Money type and make synonym for decimal(x,2)\n\nWhat about the printing of currency symbol?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 22 Jun 1999 21:58:06 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] money data type and conversions"
},
{
"msg_contents": "> What about the printing of currency symbol?\n\nWon't be missed, at least for anyone writing to SQL92 ;)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 23 Jun 1999 02:04:59 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] money data type and conversions"
},
{
"msg_contents": "\nFor people wondering what BeOS is:\n\n\thttp://www.be.com/aboutbe/index.html\n\nSeems it is an OS developed for digital media and network appliances.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 22 Jun 1999 22:16:03 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "BeOS"
},
{
"msg_contents": "Thus spake Bruce Momjian\n> \t* Remove Money type and make synonym for decimal(x,2)\n> \n> What about the printing of currency symbol?\n\nThat's the one thing that the new types don't offer but that was often\nproblematical anyway. In fact, I even submitted a patch to cash.c to\nremove the currency symbol based on earlier discussions. The only\nreason it wasn't added was that the type was supposed to be removed\nsoon anyway. Perhaps we should apply the patch anyway for now until\nit is removed.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Tue, 22 Jun 1999 23:24:24 -0400 (EDT)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] money data type and conversions"
},
{
"msg_contents": "> Thus spake Bruce Momjian\n> > \t* Remove Money type and make synonym for decimal(x,2)\n> > \n> > What about the printing of currency symbol?\n> \n> That's the one thing that the new types don't offer but that was often\n> problematical anyway. In fact, I even submitted a patch to cash.c to\n> remove the currency symbol based on earlier discussions. The only\n> reason it wasn't added was that the type was supposed to be removed\n> soon anyway. Perhaps we should apply the patch anyway for now until\n> it is removed.\n> \n\nNot good to change behavour in a minor release if we can help it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 22 Jun 1999 23:27:20 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] money data type and conversions"
},
{
"msg_contents": "At 01:52 AM 6/23/99 +0000, Thomas Lockhart wrote:\n\n>NUMERIC and DECIMAL are (or should be, if there are rough edges since\n>they are so new) are the SQL92-way to represent currency. And, they\n>are compatible with all different conventions, since you can set the\n>decimal place and size of the fractional part as you want.\n\nThis is an excellent point. The portable and standard numeric\nand decimal types are the way to go.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, and other goodies at\n http://donb.photo.net\n",
"msg_date": "Tue, 22 Jun 1999 22:44:20 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] money data type and conversions"
},
{
"msg_contents": "> Well, can't explain the why's...\n> \n> But I have the code to add to cash.c and cash.h to add the conversion \n> functions, but still have to figure out how to get PostgreSQL to recognize \n> it... Guessing... it's in fmgrtab.c right?\n> \n\nDuane, sonds like people want to remove the Money/cash type and transfer\neveryone over to decimal which has full precision and is much better for\ncurrency.\n\nSorry.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 23 Jun 1999 11:53:44 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] money data type and conversions"
},
{
"msg_contents": "Thus spake Bruce Momjian\n> > Well, can't explain the why's...\n> > \n> > But I have the code to add to cash.c and cash.h to add the conversion \n> > functions, but still have to figure out how to get PostgreSQL to recognize \n> > it... Guessing... it's in fmgrtab.c right?\n> > \n> \n> Duane, sonds like people want to remove the Money/cash type and transfer\n> everyone over to decimal which has full precision and is much better for\n> currency.\n\nIs there any reason why we don't just leave money in? I know that NUMERIC\nand DECIMAL will handle money amounts but the money type does a few\nextra things related to locale, even if we remove the currency symbol\nand perhaps we should leave that in if people are expected to use the\nnew types. It also determines whether the comma or period is the correct\nseparator, puts separators in the correct place and determines where the\ndecimal point goes. Also, check out what the following does.\n\n select cash_words_out('157.23');\n\nAlthugh there appears to be a bug in that function that chops the last\ncharacter from the output.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Wed, 23 Jun 1999 15:24:18 -0400 (EDT)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] money data type and conversions"
},
{
"msg_contents": "> Thus spake Bruce Momjian\n> > > Well, can't explain the why's...\n> > > \n> > > But I have the code to add to cash.c and cash.h to add the conversion \n> > > functions, but still have to figure out how to get PostgreSQL to recognize \n> > > it... Guessing... it's in fmgrtab.c right?\n> > > \n> > \n> > Duane, sonds like people want to remove the Money/cash type and transfer\n> > everyone over to decimal which has full precision and is much better for\n> > currency.\n> \n> Is there any reason why we don't just leave money in? I know that NUMERIC\n> and DECIMAL will handle money amounts but the money type does a few\n> extra things related to locale, even if we remove the currency symbol\n> and perhaps we should leave that in if people are expected to use the\n> new types. It also determines whether the comma or period is the correct\n> separator, puts separators in the correct place and determines where the\n> decimal point goes. Also, check out what the following does.\n> \n> select cash_words_out('157.23');\n> \n> Althugh there appears to be a bug in that function that chops the last\n> character from the output.\n\nMaybe we will have to add '$' symbols to a special case of the numeric\ntype, or add a function to output numeric in money format?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 23 Jun 1999 15:53:10 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] money data type and conversions]"
},
{
"msg_contents": "Thus spake Bruce Momjian\n> Maybe we will have to add '$' symbols to a special case of the numeric\n> type, or add a function to output numeric in money format?\n\nThat's another thought I had. However, it isn't the '$' symbol. The\nidea is that it takes the symbol from the current locale. That's what\nmakes handling the information so hard, you don't know how many characters\nare used by the currency symbol.\n\nHowever, cash_out and cash_words_out can probably be dropped into the\ndecimal code. There should be some small changes though. In particular\nthe money type moves the decimal point to a position in a fixed string\nof digits but for decimal it should honour the type's positioning.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Wed, 23 Jun 1999 16:54:12 -0400 (EDT)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] money data type and conversions]"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Maybe we will have to add '$' symbols to a special case of the numeric\n> type, or add a function to output numeric in money format?\n\nI like the last idea (add a formatting function), because it's simple,\nself-contained, and doesn't force any solutions on anyone. Don't want\nany decoration on your number? Just read it out. Don't like the\ndecoration added by the formatting function? Write your own function.\nNo table reconstruction required. With a data-type-driven approach,\nchanging your mind is painful because you have to rebuild your tables.\n\nWe'd probably also want an inverse function that would strip off the\ndecoration and produce a numeric, but that's easy too...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 23 Jun 1999 19:03:53 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] money data type and conversions] "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Maybe we will have to add '$' symbols to a special case of the numeric\n> > type, or add a function to output numeric in money format?\n> \n> I like the last idea (add a formatting function), because it's simple,\n> self-contained, and doesn't force any solutions on anyone. Don't want\n> any decoration on your number? Just read it out. Don't like the\n> decoration added by the formatting function? Write your own function.\n> No table reconstruction required. With a data-type-driven approach,\n> changing your mind is painful because you have to rebuild your tables.\n> \n> We'd probably also want an inverse function that would strip off the\n> decoration and produce a numeric, but that's easy too...\n\nAdded to TODO:\n\n\t* Remove Money type, add money formatting for decimal type \n \nI should add I have reorganized the TODO list to be clearer. People may\nwant to check it out on our newly designed web site.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 24 Jun 1999 11:34:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] money data type and conversions]"
}
] |
[
{
"msg_contents": "I was looking thru the postgresql message archive searching for a solution to my problem... which you articulated quit well... if you have max(key) of an empty table, the result set is { } which when added to anything gives you { } \n \ndo you know how make the result of max(key) of a empty table return 0 we have tried writing our oun c funtion :\n\n\nint\nnewmax (int arg,int arg2) {\n if (arg2==0)\n {\n return (arg2);\n }\n else\n {\n return(arg);\n }\n}\n \n \nwe call it :\n \nselect newmax(max({key}), Count({key})) which returns us { }\nif we call it w/ select newmax(3,0)) returns 0\nif we call it w/ select newmax(12,4)) returns 4\n \nit does not work on an empty table :(\n \nplease help...\n \n \n-Deva Vejay\[email protected]\n\n\n\n\n\n\n\n\n\nI was looking thru the postgresql message \narchive searching for a solution to my problem... which you articulated quit \nwell... if you have max(key) of an empty table, the result set is { \n} which when added to anything gives you { } \n \ndo you know how make the result of max(key) of a empty table \nreturn 0 we have tried writing our oun c funtion :\n \n \nintnewmax (int arg,int arg2) { if \n(arg2==0) { return \n(arg2); } else \n{ return(arg); \n}}\n \n \nwe call it :\n \nselect newmax(max({key}), Count({key})) which \nreturns us { }\nif we call it w/ select newmax(3,0)) returns 0\nif we call it \nw/ select newmax(12,4)) returns \n4\n \nit does not work on an empty table :(\n \nplease help...\n \n \n-Deva Vejay\[email protected]",
"msg_date": "Tue, 22 Jun 1999 14:17:12 -0400",
"msg_from": "\"Deva Vejay\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSql HELP!"
}
] |
[
{
"msg_contents": "\nTALK TO THE WORLDS GREATEST PSYCHICS 1 800 592 7827\n",
"msg_date": "Tue, 22 Jun 1999 19:18:10",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "CERTIFIED GIFTED PSYCHICS"
}
] |
[
{
"msg_contents": "I've mentioned in the past that the fsynch following\nevery select, even when no data is modified, is a \nkiller for high-volume web sites that make many short,\nread-only hits on the database (for page customization,\nfor example).\n\nI know that fixing this is on the \"to do\" list. I've\nknown of the \"-F\" switch for some time, but the recent\nround of posts triggered by someone observing lots of\ndisk thrashing and the fact that I'm getting close to\ngoing online with my first round of web services based\non Postgres motivated me to give it a try.\n\nIt's very, very nice to have the disk silent when \nhitting it with a bunch of simultaneous \"selects\"\nfrom different http connections. It really increases\nthroughput, and is much, much kinder to the disk.\nThe difference for lots of short hits is very high.\n\nSo obviously I'm really looking forward to the day\nwhen a read-only select doesn't trigger a write to\npg_log (which apparently is the problem?) and an\n\"fsynch the world\" operation.\n\nIn the interim, just how dangerous is it to run with\n\"-F\"? \n\nAm I risking corruption of the db and a total rebuild,\nor will I just lose transactions but be left with a\nconsistent database if the machine goes down?\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, and other goodies at\n http://donb.photo.net\n",
"msg_date": "Tue, 22 Jun 1999 15:11:55 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": true,
"msg_subject": "The dangers of \"-F\""
},
{
"msg_contents": "> So obviously I'm really looking forward to the day\n> when a read-only select doesn't trigger a write to\n> pg_log (which apparently is the problem?) and an\n> \"fsynch the world\" operation.\n> \n> In the interim, just how dangerous is it to run with\n> \"-F\"? \n> \n> Am I risking corruption of the db and a total rebuild,\n> or will I just lose transactions but be left with a\n> consistent database if the machine goes down?\n\nNo Fsync is only dangerous if your OS or hardware crashes without\nflushing the disk. Anything else is unaffected, and is just as reliable.\n\nThe database could be inconsistent, in the sense that partial\ntransactions are recorded as completed.\n\nI think it is a major issue too.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 22 Jun 1999 18:38:04 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] The dangers of \"-F\""
},
{
"msg_contents": "At 06:38 PM 6/22/99 -0400, Bruce Momjian wrote:\n\n>No Fsync is only dangerous if your OS or hardware crashes without\n>flushing the disk. Anything else is unaffected, and is just as reliable.\n\nYes, this much I realize...\n\n>The database could be inconsistent, in the sense that partial\n>transactions are recorded as completed.\n\nWith recovery possible without a rebuild? Or is rebuilding\nfrom dumps required? (I dump nightly and copy the results\nto a second machine for additional safety, and soon will\nbe ftp'ing dump files to the east coast for even more\nsafety). \n\nPerhaps fsync'ing then is only LESS dangerous, since\na system can crash while blocks are being written even\nwhen fsync is enabled. The window of evil opportunity\nfor a system crash is much smaller than if the data's sitting\naround for a lengthy time in the Linux FS cache, of course,\nbut not absent.\n\nOr does the fact that the backend loses control over the\norder in which stuff is written (in other words, blocks\nare written whenever and in what order Linux choses rather\nthan fsync'd a file at a time) mean that the kind of \ninconsistency that might result is different? I.E.\nlog file written before datablocks are, that kind of\nthing.\n\n>I think it is a major issue too.\n\nIs there any estimate of the difficulty of fixing it?\n>From previous discussions, it sounded as though new\nbookkeeping would be needed to determine which queries\nactually result in a change in data.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, and other goodies at\n http://donb.photo.net\n",
"msg_date": "Tue, 22 Jun 1999 16:38:55 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] The dangers of \"-F\""
},
{
"msg_contents": "> At 06:38 PM 6/22/99 -0400, Bruce Momjian wrote:\n> \n> >No Fsync is only dangerous if your OS or hardware crashes without\n> >flushing the disk. Anything else is unaffected, and is just as reliable.\n> \n> Yes, this much I realize...\n> \n> >The database could be inconsistent, in the sense that partial\n> >transactions are recorded as completed.\n> \n> With recovery possible without a rebuild? Or is rebuilding\n> from dumps required? (I dump nightly and copy the results\n> to a second machine for additional safety, and soon will\n> be ftp'ing dump files to the east coast for even more\n> safety). \n\n\n> \n> Perhaps fsync'ing then is only LESS dangerous, since\n> a system can crash while blocks are being written even\n> when fsync is enabled. The window of evil opportunity\n> for a system crash is much smaller than if the data's sitting\n> around for a lengthy time in the Linux FS cache, of course,\n> but not absent.\n\nYes, this is true, but much less likely because the ordering of the\nflushing is done before the transaction is marked as completed.\n\n> \n> Or does the fact that the backend loses control over the\n> order in which stuff is written (in other words, blocks\n> are written whenever and in what order Linux choses rather\n> than fsync'd a file at a time) mean that the kind of \n> inconsistency that might result is different? I.E.\n> log file written before datablocks are, that kind of\n> thing.\n\nYes. It is not a problem that a give transaction aborts while it is\nbeing done because it couldn't have been marked as completed, but the\nprevious transaction was marked as completed, and only some blocks could\nbe on the disk.\n\n\n> \n> >I think it is a major issue too.\n> \n> Is there any estimate of the difficulty of fixing it?\n> >From previous discussions, it sounded as though new\n> bookkeeping would be needed to determine which queries\n> actually result in a change in data.\n\nI hope for every release. 
I tried to propose some solutions, but\ncouldn't code it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 22 Jun 1999 20:36:08 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] The dangers of \"-F\""
},
{
"msg_contents": "At 08:36 PM 6/22/99 -0400, Bruce Momjian wrote:\n>> At 06:38 PM 6/22/99 -0400, Bruce Momjian wrote:\n>> \n>> >No Fsync is only dangerous if your OS or hardware crashes without\n>> >flushing the disk. Anything else is unaffected, and is just as reliable.\n>> \n>> Yes, this much I realize...\n>> \n>> >The database could be inconsistent, in the sense that partial\n>> >transactions are recorded as completed.\n>> \n>> With recovery possible without a rebuild? Or is rebuilding\n>> from dumps required? (I dump nightly and copy the results\n>> to a second machine for additional safety, and soon will\n>> be ftp'ing dump files to the east coast for even more\n>> safety). \n>\n>\n>> \n>> Perhaps fsync'ing then is only LESS dangerous, since\n>> a system can crash while blocks are being written even\n>> when fsync is enabled. The window of evil opportunity\n>> for a system crash is much smaller than if the data's sitting\n>> around for a lengthy time in the Linux FS cache, of course,\n>> but not absent.\n>\n>Yes, this is true, but much less likely because the ordering of the\n>flushing is done before the transaction is marked as completed.\n>\n>> \n>> Or does the fact that the backend loses control over the\n>> order in which stuff is written (in other words, blocks\n>> are written whenever and in what order Linux choses rather\n>> than fsync'd a file at a time) mean that the kind of \n>> inconsistency that might result is different? I.E.\n>> log file written before datablocks are, that kind of\n>> thing.\n>\n>Yes. 
It is not a problem that a give transaction aborts while it is\n>being done because it couldn't have been marked as completed, but the\n>previous transaction was marked as completed, and only some blocks could\n>be on the disk.\n>\n>\n>> \n>> >I think it is a major issue too.\n>> \n>> Is there any estimate of the difficulty of fixing it?\n>> >From previous discussions, it sounded as though new\n>> bookkeeping would be needed to determine which queries\n>> actually result in a change in data.\n>\n>I hope for every release. I tried to propose some solutions, but\n>couldn't code it.\n>\n>-- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n>\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, and other goodies at\n http://donb.photo.net\n",
"msg_date": "Tue, 22 Jun 1999 22:43:45 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] The dangers of \"-F\""
},
{
"msg_contents": "At 08:36 PM 6/22/99 -0400, Bruce Momjian wrote:\n\n>> \n>> Or does the fact that the backend loses control over the\n>> order in which stuff is written (in other words, blocks\n>> are written whenever and in what order Linux choses rather\n>> than fsync'd a file at a time) mean that the kind of \n>> inconsistency that might result is different? I.E.\n>> log file written before datablocks are, that kind of\n>> thing.\n\n>Yes. It is not a problem that a give transaction aborts while it is\n>being done because it couldn't have been marked as completed, but the\n>previous transaction was marked as completed, and only some blocks could\n>be on the disk.\n\nOK, this was what I suspected, and of course is the intuitively\nobvious scenario.\n\nIn other words, \"-F\" considered - and proven! - harmful :)\n\n>I hope for every release. I tried to propose some solutions, but\n>couldn't code it.\n\nThere was a bit of discussion about the cause of the problem\nin this list earlier, so part of my re-raising it was an attempt\nto encourage more discussion. Not that I know enough about the\ncode to be of any help, I'm afraid. When I first learned of\nthis problem (via my own experimentation) I dug around a bit\nand it became clear that it wasn't obvious. I.E. the disk\ncache knows about dirty/not dirty buffers and takes great\ncare to only flush dirty ones, that level of stuff. When I\nheard that updating pg_log was apparently involved I realized\nit was more of a higher-level than lower-level problem.\n\nSigh...\n\nOr am I wrong?\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, and other goodies at\n http://donb.photo.net\n",
"msg_date": "Tue, 22 Jun 1999 22:43:48 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] The dangers of \"-F\""
},
{
"msg_contents": "Is there any chance each database could be setup differently? Some of my databases are updated once a month (literally), while others are updated daily. It would be nice to have the -F setting on the read-mostly DBs...\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Wed, 23 Jun 1999 22:04:09 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] The dangers of \"-F\""
},
{
"msg_contents": "Philip Warner <[email protected]> writes:\n> Is there any chance each database could be setup differently? Some of\n> my databases are updated once a month (literally), while others are\n> updated daily. It would be nice to have the -F setting on the\n> read-mostly DBs...\n\nI don't think this is practical, because all the backends in a given\ninstallation will be sharing the same buffer cache and the same pg_log\nfile; you can't run some with -F and some without and expect to get\nthe behavior you want. Problem is that any of the backends might be\nthe one that writes out a particular disk block from cache.\n\nYou could run the two sets of databases as different installations\n(ie, two postmasters, two listen ports, two working directories)\nbut that'd require all your clients knowing which port to connect to\nfor each database; probably not worth the trouble.\n\nIn practice, if you have a reliable OS, reliable hardware, and a\nreliable power supply (read UPS), I think the risks introduced by\nrunning with -F are negligible compared to other sources of trouble\n(ie backend bugs)...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 23 Jun 1999 10:10:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] The dangers of \"-F\" "
},
{
"msg_contents": "> >Yes. It is not a problem that a give transaction aborts while it is\n> >being done because it couldn't have been marked as completed, but the\n> >previous transaction was marked as completed, and only some blocks could\n> >be on the disk.\n> \n> OK, this was what I suspected, and of course is the intuitively\n> obvious scenario.\n> \n> In other words, \"-F\" considered - and proven! - harmful :)\n> \n> >I hope for every release. I tried to propose some solutions, but\n> >couldn't code it.\n> \n> There was a bit of discussion about the cause of the problem\n> in this list earlier, so part of my re-raising it was an attempt\n> to encourage more discussion. Not that I know enough about the\n> code to be of any help, I'm afraid. When I first learned of\n> this problem (via my own experimentation) I dug around a bit\n> and it became clear that it wasn't obvious. I.E. the disk\n> cache knows about dirty/not dirty buffers and takes great\n> care to only flush dirty ones, that level of stuff. When I\n> heard that updating pg_log was apparently involved I realized\n> it was more of a higher-level than lower-level problem.\n> \n> Sigh...\n> \n> Or am I wrong?\n\nWriting the buffers to a file, and making sure they are on the disk are\ndifferent issues. Also, fsync only comes into play in an OS crash, so\nif that only happens once a year, and you are willing to restore from\ntape in that case (or check the integrity of the data on reboot), -F\nmay be fine.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 23 Jun 1999 11:40:03 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] The dangers of \"-F\""
},
{
"msg_contents": "> Is there any chance each database could be setup differently?\n> Some of my databases are updated once a month (literally), while\n> others are updated daily. It would be nice to have the -F setting\n> on the read-mostly DBs...\n\nNot sure. pg_log is shared by all databases, so it would be hard.\n\n--\n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 23 Jun 1999 11:42:51 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] The dangers of \"-F\""
},
{
"msg_contents": "At 11:40 AM 6/23/99 -0400, Bruce Momjian wrote:\n\n>Writing the buffers to a file, and making sure they are on the disk are\n>different issues. Also, fsync only comes into play in an OS crash, so\n>if that only happens once a year, and you are willing to restore from\n>tape in that case (or check the integrity of the data on reboot), -F\n>may be fine.\n\nIronically, I ran all day yesterday with -F and my nightly\ndump failed on table \"foo\", \"couldn't read block 0\".\n\nI've seen this once before without use of -F so I think it's\nmere coincidence.\n\nI realize that writing buffers to a file and making sure they're\non disk are two different issues. My point is that without the\nfsynch, the backend loses control over the order in which blocks\nare written to the disk.\n\nFor instance, if there are assumptions that all data blocks are\nwritten before this fact is recorded in a log file, then\n\"write data blocks\" \"fsynch\" \"write log\" \"fsynch\" doesn't break\nthat assumption, where \"write data blocks\" (no fsynch) \"write log\"\nmight, as the operating system's free to write the \"write log\"\nblocks to disk before any of the data blocks are (though an\nLRU algorithm most likely wouldn't). You could end up in a\ncase where the log records a successful write of data, without\nany data actually being on disk.\n\nI don't know how postgres works internally. So my question is\nreally \"are any such assumptions broken by the use of -F, and\ndoes breaking such assumptions lead to a more serious form\nof failure if there's a crash?\"\n\nI agree that the risks of running -F are low with reliable\nhardware and a UPS. I'm just trying to get a handle on just\nwhat a user might be facing in terms of corruption compared\nto a crash with fsynch'ing enabled. 
I can live with \"the\ndatabase might well become corrupted and you'll have to\nreload your latest dump\".\n\nMy current plan is to implement a set of queries that do\nfairly detailed consistency checks on my database every\nnight, before doing the nightly dump and copy to a second\nmachine, as well as each time I restart the web server\n(typically only after crashes). In this way I'll know\nquickly if any harm's been done after a crash, I'll have\nsome assurance the database is in good shape before dumps\n(my code, not just the backend, might have bugs!), etc.\n\n\n\n\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, and other goodies at\n http://donb.photo.net\n",
"msg_date": "Wed, 23 Jun 1999 10:01:23 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] The dangers of \"-F\""
},
{
"msg_contents": "> I realize that writing buffers to a file and making sure they're\n> on disk are two different issues. My point is that without the\n> fsynch, the backend loses control over the order in which blocks\n> are written to the disk.\n\nYes, that is the problem. One solution is to fync all modified file\ndescriptors every ~30 seconds, then write and fsync pg_log(), so you\nonly do an fsync every 30 seconds. Of course you have to make sure\npg_log doesn't get put on disk until after all the file descriptors are\nfsync'ed. Of couse, you have a 30-second window of loss, but most file\nsystems do this every 30-seconds, so it is no less reliable than that. \n(Well, most OS's sync on file close, so you could say the file system is\nhas less loss.) Anyway, this is how most commercial db's do it. (One\neasy way to do it would be do issue a \"sync\" every 30 seconds to flush\nthe whole OS, but that seems a little extreme.)\n\n\n> For instance, if there are assumptions that all data blocks are\n> written before this fact is recorded in a log file, then\n> \"write data blocks\" \"fsynch\" \"write log\" \"fsynch\" doesn't break\n> that assumption, where \"write data blocks\" (no fsynch) \"write log\"\n> might, as the operating system's free to write the \"write log\"\n> blocks to disk before any of the data blocks are (though an\n> LRU algorithm most likely wouldn't). You could end up in a\n> case where the log records a successful write of data, without\n> any data actually being on disk.\n> \n> I don't know how postgres works internally. So my question is\n> really \"are any such assumptions broken by the use of -F, and\n> does breaking such assumptions lead to a more serious form\n> of failure if there's a crash?\"\n\nIt is possible in an OS crash because we don't have any info about what\norder stuff is written to disk with -F.\n\n> I agree that the risks of running -F are low with reliable\n> hardware and a UPS. 
I'm just trying to get a handle on just\n> what a user might be facing in terms of corruption compared\n> to a crash with fsynch'ing enabled. I can live with \"the\n> database might well become corrupted and you'll have to\n> reload your latest dump\".\n> \n> My current plan is to implement a set of queries that do\n> fairly detailed consistency checks on my database every\n> night, before doing the nightly dump and copy to a second\n> machine, as well as each time I restart the web server\n> (typically only after crashes). In this way I'll know\n> quickly if any harm's been done after a crash, I'll have\n> some assurance the database is in good shape before dumps\n> (my code, not just the backend, might have bugs!), etc.\n> \n\nSounds like a good plan.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 23 Jun 1999 14:29:49 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] The dangers of \"-F\""
},
{
"msg_contents": "At 02:29 PM 6/23/99 -0400, Bruce Momjian wrote:\n\n>> I don't know how postgres works internally. So my question is\n>> really \"are any such assumptions broken by the use of -F, and\n>> does breaking such assumptions lead to a more serious form\n>> of failure if there's a crash?\"\n\n>It is possible in an OS crash because we don't have any info about what\n>order stuff is written to disk with -F.\n\nOK. This answers my question, thanks.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, and other goodies at\n http://donb.photo.net\n",
"msg_date": "Wed, 23 Jun 1999 16:24:22 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] The dangers of \"-F\""
}
] |
[
{
"msg_contents": "\n\tGreetings,\n\tI'm working for the computer vision group at UMass's CS department and we are looking \nat using Postgres to catalog Images and video for a large data coordination project. I am \nrunning my own experiments, but I wanted to know if anyone has any data on internally stored \nversus externally stored images.\n\t\n\tCollin Lynch.\n\n",
"msg_date": "Wed, 23 Jun 1999 11:33:49 -0400 (EDT)",
"msg_from": "\"Collin F. Lynch\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Images"
},
{
"msg_contents": "We have a commercial media management system in development\n(close to finished) using postgresql running under Linux written in Java.\n\nOur philosophy has been to store the media as a file and just have\nthe pathname to the media item in the database. While I like the\ntheoretical side of storing media in the database and I am sure that\nis how things will be done years ahead, the reality is that there are\nmany utility type programs (such as imagemagik for us) that can do\nlots of things with media, but they all make today's assumption that\nthe data is available in a file. If, for example, you want to convert an\nimage from one format to another, if the content is in the database,\nfirst you'd have to extract it into a file, then convert it and them\n(probably)\nput it back again. It creates a lot of work.\n\nJust my 0.02.\n\nWhat is the scope and timetable of your project? We might be\ninterested in donating to your project if it fits the non-profit\nmodel, though our product is not open source.\n\n>\n> Greetings,\n> I'm working for the computer vision group at UMass's CS department and we\nare looking\n> at using Postgres to catalog Images and video for a large data\ncoordination project. I am\n> running my own experiments, but I wanted to know if anyone has any data on\ninternally stored\n> versus externally stored images.\n>\n> Collin Lynch.\n>\n\n\n",
"msg_date": "Wed, 23 Jun 1999 11:15:43 -0500",
"msg_from": "\"Frank Morton\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [INTERFACES] Images"
},
{
"msg_contents": "Collin,\n\nWe have a document system that has approx 270,000 tif images. These can\nbe quite large (500k is not uncommon I think we now have over 50gb of\nimages). We keep these out of the dbms and in separate files (dbms just\nhas filename) for several reasons.\n\n1. We can have the same image in multiple formats if required.\n\n2. We can distribute the images and dbms onto different disks or even\ndifferent servers (ie clients get data from one server and images from\nanother). In our latest multi-tier application again we get better speed\nby accessing the images and data from different servers.\n\n3. If the db is not available at least the images can be viewed (NB as\nwe are currently forced to run a db on a Windows NT server this is sadly\nthe case far too often).\n\n4. We are able to cache/mirror the images around the wan so that users\npick images up from a more local copy reducing bandwidth requirements\n\n5. Backing up is simpler as the database is much smaller and the images\nare readonly\n\n6. We can have multiple dbms and/or applications easily accessing the\nimages if required.\n\n\"Collin F. Lynch\" wrote:\n> \n> Greetings,\n> I'm working for the computer vision group at UMass's CS department and we are looking\n> at using Postgres to catalog Images and video for a large data coordination project. I am\n> running my own experiments, but I wanted to know if anyone has any data on internally stored\n> versus externally stored images.\n> \n> Collin Lynch.\n\n-- \nDavid Warnock\nSundayta Ltd\n",
"msg_date": "Wed, 23 Jun 1999 17:50:02 +0100",
"msg_from": "David Warnock <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [INTERFACES] Images"
}
] |
[
{
"msg_contents": "\tGreetings.\n\tAllthough we are not working on it immediatelyt, I am looking for a way to connect \nPostgres to Arc/info Intergraph and other GIS systems. Has anyone done any work in this area or \ncan point me to the proper links?\n\t\n\tCollin Lynch.\n",
"msg_date": "Wed, 23 Jun 1999 12:50:21 -0400 (EDT)",
"msg_from": "\"Collin F. Lynch\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "ARC/Info and Intergraph"
},
{
"msg_contents": "On Wed, 23 Jun 1999, Collin F. Lynch wrote:\n\n> \tGreetings.\n> \tAllthough we are not working on it immediatelyt, I am looking\n> for a way to connect Postgres to Arc/info Intergraph and other GIS\n> systems. Has anyone done any work in this area or can point me to the\n> proper links?\n\nI tried a few weeks ago to get MapInfo to connect to PostgreSQL using the\nODBC driver. The problem is that MapInfo sends all column names in double\nquotes, but doesn't quote them in the WHERE clauses. Because PostgreSQL\nonly keeps the column name cases when quotes are used, and MapInfo sends\neverything in uppercase, it treats them as different, so it fails.\n\nI hadn't had chance to look at this since, although it would be useful for\nwork.\n\nI don't know about Arc/info, but at some point I'm going to take a look at\nGrass.\n\nPeter\n\n-- \n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n",
"msg_date": "Sat, 26 Jun 1999 11:41:42 +0100 (GMT)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [INTERFACES] ARC/Info and Intergraph"
}
] |
[
{
"msg_contents": "Is anyone else having any problems with the Perl Interface wrt Large\nObjects under 6.5?\n\nI compiled and installed 6.5 on an Intel/Linux/RedHat 6.0 machine that had\npreviously had 6.4.2 and imported existing data from the old database. \nPerl scripts which were working under the previous version are now failing\nwhen trying to open a newly created large object. A line appears in the\nerror log:\n\nJun 23 17:40:47 www logger: ERROR: lo_lseek: invalid large obj descriptor (0)\n\nWhile the code being executed is a function call write_blob below. The\nscaffolding internally tells me it is unable to open oid XXXXXX for\nwriting where XXXXXX is the newly \"created\" oid #.\n\nsub write_blob {\n my($oid, $blob) = @_;\n\n print \"write_blob($oid, '$blob');\\n\" if $debug;\n if ($blob eq \"\") {\n if ($oid > 0) {\n $conn->lo_unlink($oid);\n }\n print \"No blob to write\\n\" if $debug;\n return \"NULL\";\n }\n if ($oid == 0) {\n $oid = $conn->lo_creat(PGRES_INV_WRITE | PGRES_INV_READ);\n if ($oid == PGRES_InvalidOid) {\n print \"Unable to get new oid.\\n\" if $debug;\n return \"NULL\";\n }\n }\n my($lobj_fd) = $conn->lo_open($oid, PGRES_INV_WRITE);\n if ($lobj_fd == -1) {\n print \"Unable to open oid $oid for writing.\\n\" if $debug;\n return \"NULL\";\n }\n\n if ($conn->lo_write($lobj_fd, $blob, length($blob)) == -1) {\n $conn->lo_close($lobj_fd);\n $conn->lo_unlink($oid);\n print \"Unable to write blob into open oid $oid.\\n\" if $debug;\n return \"NULL\";\n }\n $conn->lo_close($lobj_fd);\n\n print \"write_blob successful\\n\" if $debug;\n\n return $oid;\n}\n\nI reverted to 6.4.2 and the scripts worked again. Back to 6.5 - no dice.\n\n- K\n\nKristofer Munn * http://www.munn.com/~kmunn/ * ICQ# 352499 * AIM: KrMunn \n\n\n",
"msg_date": "Wed, 23 Jun 1999 18:20:16 -0400 (EDT)",
"msg_from": "Kristofer Munn <[email protected]>",
"msg_from_op": true,
"msg_subject": "Perl 5 Interface on 6.5 and lo_creat/lo_open problem"
},
{
"msg_contents": "Kristofer Munn <[email protected]> writes:\n> Is anyone else having any problems with the Perl Interface wrt Large\n> Objects under 6.5?\n\n> I compiled and installed 6.5 on an Intel/Linux/RedHat 6.0 machine that had\n> previously had 6.4.2 and imported existing data from the old database. \n> Perl scripts which were working under the previous version are now failing\n> when trying to open a newly created large object. A line appears in the\n> error log:\n\n> Jun 23 17:40:47 www logger: ERROR: lo_lseek: invalid large obj descriptor (0)\n\n6.5 enforces the requirement that LO objects be used inside a\ntransaction. Prior versions did not enforce this ... they just didn't\nwork very reliably if the lifetime of an LO FD wasn't encased in\nbegin/commit :-(. I suppose you had managed to get away with it,\nbut you'd be much better off adding the begin/commit even for 6.4.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 23 Jun 1999 19:38:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Perl 5 Interface on 6.5 and lo_creat/lo_open problem "
},
{
"msg_contents": "On Wed, 23 Jun 1999, Tom Lane wrote:\n\n> 6.5 enforces the requirement that LO objects be used inside a\n> transaction. . . . [remainder clipped]\n\nAha. And the implication is then that all large object operations\n(creation, deletion and modification) are affected by rollbacks and\ncommits. As they should be.\n\nI will add the transaction code. Thanks...\n\n- K\n\nKristofer Munn * http://www.munn.com/~kmunn/ * ICQ# 352499 * AIM: KrMunn \n\n",
"msg_date": "Wed, 23 Jun 1999 20:09:49 -0400 (EDT)",
"msg_from": "Kristofer Munn <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Perl 5 Interface on 6.5 and lo_creat/lo_open problem"
},
{
"msg_contents": "I wrapped the large object functions in a transaction and they worked. A\nside note (for the archives) is that even the reads need to be wrapped in\na transaction.\n\nThanks again...\n\n- K\n\nKristofer Munn * http://www.munn.com/~kmunn/ * ICQ# 352499 * AIM: KrMunn \n\n",
"msg_date": "Thu, 24 Jun 1999 11:28:14 -0400 (EDT)",
"msg_from": "Kristofer Munn <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Perl 5 Interface on 6.5 and lo_creat/lo_open problem"
},
{
"msg_contents": "Kristofer Munn wrote:\n> \n> Is anyone else having any problems with the Perl Interface wrt Large\n> Objects under 6.5?\n> \n\n\nyes, me too.\n\nWait for the next version of DDB-Pg.\n\nEdmund\n\n-- \nEdmund Mergl\nmailto:[email protected]\nhttp://www.bawue.de/~mergl\n",
"msg_date": "Thu, 24 Jun 1999 21:07:15 +0200",
"msg_from": "Edmund Mergl <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Perl 5 Interface on 6.5 and lo_creat/lo_open problem"
},
{
"msg_contents": "Kristofer Munn wrote:\n> \n> I wrapped the large object functions in a transaction and they worked. A\n> side note (for the archives) is that even the reads need to be wrapped in\n> a transaction.\n> \n> Thanks again...\n> \n> - K\n> \n> Kristofer Munn * http://www.munn.com/~kmunn/ * ICQ# 352499 * AIM: KrMunn\n\n\nHmmm, interesting. But using plain old C (pgsql/test/examples/testlo.c) \nit works without transactions. \n\nEdmund\n\n-- \nEdmund Mergl\nmailto:[email protected]\nhttp://www.bawue.de/~mergl\n",
"msg_date": "Thu, 24 Jun 1999 21:19:46 +0200",
"msg_from": "Edmund Mergl <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Perl 5 Interface on 6.5 and lo_creat/lo_open problem"
},
{
"msg_contents": "Edmund Mergl <[email protected]> writes:\n> Hmmm, interesting. But using plain old C (pgsql/test/examples/testlo.c) \n> it works without transactions. \n\nWith 6.5? I don't think so ... I made sure that LO FDs would be\ncancelled at transaction commit --- which is end of statement, if\nyou are not within a transaction ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 24 Jun 1999 19:36:24 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Perl 5 Interface on 6.5 and lo_creat/lo_open problem "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Edmund Mergl <[email protected]> writes:\n> > Hmmm, interesting. But using plain old C (pgsql/test/examples/testlo.c)\n> > it works without transactions.\n> \n> With 6.5? I don't think so ... I made sure that LO FDs would be\n> cancelled at transaction commit --- which is end of statement, if\n> you are not within a transaction ...\n> \n> regards, tom lane\n\n\nyes, you're right, Accidentally I used the files from 6.4.\n\nEdmund\n\n-- \nEdmund Mergl\nmailto:[email protected]\nhttp://www.bawue.de/~mergl\n",
"msg_date": "Fri, 25 Jun 1999 05:45:48 +0200",
"msg_from": "Edmund Mergl <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Perl 5 Interface on 6.5 and lo_creat/lo_open problem"
}
] |
[
{
"msg_contents": "\n> For instance, if there are assumptions that all data blocks are\n> written before this fact is recorded in a log file, then\n> \"write data blocks\" \"fsynch\" \"write log\" \"fsynch\" doesn't break\n> that assumption, \n> \nAre we really doing a sync after the pg_log write ? While the sync\nafter datablock write seems necessary to guarantee consistency,\nthe sync after log write is actually not necessary to guarantee consistency.\nWould it be a first step, to special case the writing to pg_log, as\nto not sync (extra switch to backend) ? This would avoid the syncs\nfor read only transactions, since they don't cause data block writes.\n\nAndreas\n",
"msg_date": "Thu, 24 Jun 1999 10:07:35 +0200",
"msg_from": "Zeugswetter Andreas IZ5 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] The dangers of \"-F\""
},
{
"msg_contents": "> \n> > For instance, if there are assumptions that all data blocks are\n> > written before this fact is recorded in a log file, then\n> > \"write data blocks\" \"fsynch\" \"write log\" \"fsynch\" doesn't break\n> > that assumption, \n> > \n> Are we really doing a sync after the pg_log write ? While the sync\n> after datablock write seems necessary to guarantee consistency,\n> the sync after log write is actually not necessary to guarantee consistency.\n> Would it be a first step, to special case the writing to pg_log, as\n> to not sync (extra switch to backend) ? This would avoid the syncs\n> for read only transactions, since they don't cause data block writes.\n\nYou are right. We don't need a sync after the pg_log write.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 24 Jun 1999 11:42:36 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] The dangers of \"-F\""
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> >\n> > > For instance, if there are assumptions that all data blocks are\n> > > written before this fact is recorded in a log file, then\n> > > \"write data blocks\" \"fsynch\" \"write log\" \"fsynch\" doesn't break\n> > > that assumption,\n> > >\n> > Are we really doing a sync after the pg_log write ? While the sync\n> > after datablock write seems necessary to guarantee consistency,\n> > the sync after log write is actually not necessary to guarantee consistency.\n> > Would it be a first step, to special case the writing to pg_log, as\n> > to not sync (extra switch to backend) ? This would avoid the syncs\n> > for read only transactions, since they don't cause data block writes.\n> \n> You are right. We don't need a sync after the pg_log write.\n\nWe need. I agreed with extra switch to backend.\n\nVadim\n",
"msg_date": "Fri, 25 Jun 1999 09:18:59 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] The dangers of \"-F\""
},
{
"msg_contents": "> Bruce Momjian wrote:\n> > \n> > >\n> > > > For instance, if there are assumptions that all data blocks are\n> > > > written before this fact is recorded in a log file, then\n> > > > \"write data blocks\" \"fsynch\" \"write log\" \"fsynch\" doesn't break\n> > > > that assumption,\n> > > >\n> > > Are we really doing a sync after the pg_log write ? While the sync\n> > > after datablock write seems necessary to guarantee consistency,\n> > > the sync after log write is actually not necessary to guarantee consistency.\n> > > Would it be a first step, to special case the writing to pg_log, as\n> > > to not sync (extra switch to backend) ? This would avoid the syncs\n> > > for read only transactions, since they don't cause data block writes.\n> > \n> > You are right. We don't need a sync after the pg_log write.\n> \n> We need. I agreed with extra switch to backend.\n\nWe need the switch only so was can \"guarentee\" that we can restore up\nuntil 30 seconds before crash. Without fsync of pg_log, we are waiting\nfor the OS to do the sync, and that will add at most another 30 seconds\nof open time(OS's sync every 30 seconds, usually). One nice thing I\nthink will be than an independent process will be doing the fsync, so no\nqueries will have to wait for it to happen.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 25 Jun 1999 08:59:58 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] The dangers of \"-F\""
}
] |
[
{
"msg_contents": "Hi,\n\n In ecpg, the error occurs in value lists of the INSERT statement,\nwhen 'short' or 'unsigned short' host variables are used.\n\n1. Program sample\n\n exec sql begin declare section;\n short s ;\n unsigned short us;\n exec sql end declare section;\n exec sql create table test(s smallint, us smallint);\n exec sql commit;\n s = 1; us =32000;\n exec sql insert into test values( :s, :us ) ; <== error\n\n2. Error messege\n\n Following error message are output. \n \"i4toi2: '-600309759' causes int2 underflow\"\n\n3. Patch\n\n The error does not occur, when following patches were applied. \nIs this patch right? please confirm it. \n\n--\nRegards.\n\nSAKAIDA Masaaki <[email protected]>\nPersonal Software, Inc. Osaka Japan\n\n\n*** postgresql-6.5/src/interfaces/ecpg/lib/ecpglib.c.orig\tWed Jun 24 15:21:30 1999\n--- postgresql-6.5/src/interfaces/ecpg/lib/ecpglib.c\tWed Jun 24 15:31:57\n1999\n***************\n*** 469,480 ****\n--- 469,488 ----\n \t\t\tswitch (var->type)\n \t\t\t{\n \t\t\t\tcase ECPGt_short:\n+ \t\t\t\t\tsprintf(buff, \"%d\", *(short *) var->value);\n+ \t\t\t\t\ttobeinserted = buff;\n+ \t\t\t\t\tbreak;\n+ \n \t\t\t\tcase ECPGt_int:\n \t\t\t\t\tsprintf(buff, \"%d\", *(int *) var->value);\n \t\t\t\t\ttobeinserted = buff;\n \t\t\t\t\tbreak;\n \n \t\t\t\tcase ECPGt_unsigned_short:\n+ \t\t\t\t\tsprintf(buff, \"%d\", *(unsigned short *) var->value);\n+ \t\t\t\t\ttobeinserted = buff;\n+ \t\t\t\t\tbreak;\n+ \n \t\t\t\tcase ECPGt_unsigned_int:\n \t\t\t\t\tsprintf(buff, \"%d\", *(unsigned int *) var->value);\n \t\t\t\t\ttobeinserted = buff;\n\n",
"msg_date": "Thu, 24 Jun 1999 17:57:01 +0900",
"msg_from": "SAKAIDA <[email protected]>",
"msg_from_op": true,
"msg_subject": "INSERT VALUES error in ecpg."
},
{
"msg_contents": "\nThis patch looks good. Comments?\n\n\n> Hi,\n> \n> In ecpg, the error occurs in value lists of the INSERT statement,\n> when 'short' or 'unsigned short' host variables are used.\n> \n> 1. Program sample\n> \n> exec sql begin declare section;\n> short s ;\n> unsigned short us;\n> exec sql end declare section;\n> exec sql create table test(s smallint, us smallint);\n> exec sql commit;\n> s = 1; us =32000;\n> exec sql insert into test values( :s, :us ) ; <== error\n> \n> 2. Error messege\n> \n> Following error message are output. \n> \"i4toi2: '-600309759' causes int2 underflow\"\n> \n> 3. Patch\n> \n> The error does not occur, when following patches were applied. \n> Is this patch right? please confirm it. \n> \n> --\n> Regards.\n> \n> SAKAIDA Masaaki <[email protected]>\n> Personal Software, Inc. Osaka Japan\n> \n> \n> *** postgresql-6.5/src/interfaces/ecpg/lib/ecpglib.c.orig\tWed Jun 24 15:21:30 1999\n> --- postgresql-6.5/src/interfaces/ecpg/lib/ecpglib.c\tWed Jun 24 15:31:57\n> 1999\n> ***************\n> *** 469,480 ****\n> --- 469,488 ----\n> \t\t\tswitch (var->type)\n> \t\t\t{\n> \t\t\t\tcase ECPGt_short:\n> + \t\t\t\t\tsprintf(buff, \"%d\", *(short *) var->value);\n> + \t\t\t\t\ttobeinserted = buff;\n> + \t\t\t\t\tbreak;\n> + \n> \t\t\t\tcase ECPGt_int:\n> \t\t\t\t\tsprintf(buff, \"%d\", *(int *) var->value);\n> \t\t\t\t\ttobeinserted = buff;\n> \t\t\t\t\tbreak;\n> \n> \t\t\t\tcase ECPGt_unsigned_short:\n> + \t\t\t\t\tsprintf(buff, \"%d\", *(unsigned short *) var->value);\n> + \t\t\t\t\ttobeinserted = buff;\n> + \t\t\t\t\tbreak;\n> + \n> \t\t\t\tcase ECPGt_unsigned_int:\n> \t\t\t\t\tsprintf(buff, \"%d\", *(unsigned int *) var->value);\n> \t\t\t\t\ttobeinserted = buff;\n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 7 Jul 1999 21:57:35 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] INSERT VALUES error in ecpg."
},
{
"msg_contents": "I have applied this patch, and it will appear in 6.5.1.\n\n\n> Hi,\n> \n> In ecpg, the error occurs in value lists of the INSERT statement,\n> when 'short' or 'unsigned short' host variables are used.\n> \n> 1. Program sample\n> \n> exec sql begin declare section;\n> short s ;\n> unsigned short us;\n> exec sql end declare section;\n> exec sql create table test(s smallint, us smallint);\n> exec sql commit;\n> s = 1; us =32000;\n> exec sql insert into test values( :s, :us ) ; <== error\n> \n> 2. Error messege\n> \n> Following error message are output. \n> \"i4toi2: '-600309759' causes int2 underflow\"\n> \n> 3. Patch\n> \n> The error does not occur, when following patches were applied. \n> Is this patch right? please confirm it. \n> \n> --\n> Regards.\n> \n> SAKAIDA Masaaki <[email protected]>\n> Personal Software, Inc. Osaka Japan\n> \n> \n> *** postgresql-6.5/src/interfaces/ecpg/lib/ecpglib.c.orig\tWed Jun 24 15:21:30 1999\n> --- postgresql-6.5/src/interfaces/ecpg/lib/ecpglib.c\tWed Jun 24 15:31:57\n> 1999\n> ***************\n> *** 469,480 ****\n> --- 469,488 ----\n> \t\t\tswitch (var->type)\n> \t\t\t{\n> \t\t\t\tcase ECPGt_short:\n> + \t\t\t\t\tsprintf(buff, \"%d\", *(short *) var->value);\n> + \t\t\t\t\ttobeinserted = buff;\n> + \t\t\t\t\tbreak;\n> + \n> \t\t\t\tcase ECPGt_int:\n> \t\t\t\t\tsprintf(buff, \"%d\", *(int *) var->value);\n> \t\t\t\t\ttobeinserted = buff;\n> \t\t\t\t\tbreak;\n> \n> \t\t\t\tcase ECPGt_unsigned_short:\n> + \t\t\t\t\tsprintf(buff, \"%d\", *(unsigned short *) var->value);\n> + \t\t\t\t\ttobeinserted = buff;\n> + \t\t\t\t\tbreak;\n> + \n> \t\t\t\tcase ECPGt_unsigned_int:\n> \t\t\t\t\tsprintf(buff, \"%d\", *(unsigned int *) var->value);\n> \t\t\t\t\ttobeinserted = buff;\n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 11 Jul 1999 22:25:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] INSERT VALUES error in ecpg."
}
] |
[
{
"msg_contents": "Hi all,\n\nI have been using PostgreSQL 6.4.2 on Debain GNU/Linux\nfor a few months. I am using the Windows ODBC drivers\nand a C++/MFC client program.\nI fixed a few bugs in the ODBC drivers and now\neverything works perfectly.\n\nNow the downside.\nUnfortunately our current client insists on using NT not Linux.\n\nI have installed PostgreSQL 6.5 on cygwin 20.1 on NT 4.0\n(See \"Install log\" below for exact versions of the various components.)\nI am having freeze up problems. I recompiled with both\n-g and -O2 turned off in the vain that hope it might help.\n\"postmaster -i -d 3\" and \"postmaster -i -o -F\" both\nexhibit intermittent freeze ups when a new backend starts.\n\nIn LOG1 below I started the postmaster, connected from\npsql on my linux box, then quit straight away. Works fine.\n\nLOG2 below shows one successful connection followed by\none the freezes at InitPostgres. There were dozens of\ngood connections before the one shown here.\n\nI restarted \"postmaster -i\" (no debug messages) and\nwas able to run 8 clients simultaneously producing\nabout 8 connections per second. This was stable.\nI tried it again a few times and it mostly worked\nbut froze sometimes.\n\nRunning \"postmaster -i -o -F\" (no fsync) brought back\nthe instability.\n\nI noticed that after a freeze I would be less likely to\nfreeze again if I restarted the ipc-daemon. At one point\nthe ipc daemon was using 25% cpu after I had killed all\nthe crashed backends.\n\nMy client opens and closes the connection to the back\nend allot. I tried altering it to keep the connections\nopen (not feasible for our release version) and this\nmade things more stable due to there being fewer\nconnections. So the query handling seems ok, the\nfreeze only happens when a new backend starts.\n\nCan anyone suggest anything that I might do to fix this?\nI realize PG on NT is a young port. 
I may be able to spend\na little time on this myself if someone can point me in\nthe right direction.\n\nThanks for your help\n\nSam O'Connor\n\n=========================================================\nInstall log:\n=========================================================\n\nInstalled: Microsoft Windows NT Workstation 4.0.1381 SP5\nInstalled: Cygwin Beta 20.1 (full.exe)\n /sw\n made /bin with sh.exe\n made /tmp\n made etc with passwd & group\n ln -s cygtclsh80 tclsh80 tclsh\nInstalled: Andy Pipers /usr/local for Cygwin B20\n added /usr/local/bin to path in cygnus.bat\nInstalled: EGCS 1.1.2 release for Cygwin B20.1\n gcc -v : gcc version egcs-2.91.66 19990314 (egcs-1.1.2\nrelease)\nInstalled: IPC for Cygwin32 1.03\n added shortcut to ipc-daemon.exe to Startup folder\nUntared: postgresql-6.5.tar.gz in /usr/local/src\n copied src/win32/*.h into Cygwin include directories\n set CFLAGS to \"\" (no -O2 or -g) in src/template/cygwin32\n ./configure\n make\n mkdir /usr/local/pgsql\n mkdir /usr/local/pgsql/data\n make install\n set PGLIB, PGDATA, PATH, USER\n initdb\n added to data/ph_hba.conf: host all 10.0.0.0 255.255.255.0\ntrust\n postmaster -i -d 3\n\nInstalled: PostgreSQL ODBC Driver 6.40.0006\n\n==================================================\nLOG1 successful backend log: postmaster -i -d 3\n==================================================\n\nFindExec: found \"/usr/local/pgsql/bin/postgres\" using argv[0]\nbinding ShmemCreate(key=52e2c1, size=1073152)\n/usr/local/pgsql/bin/postmaster: ServerLoop: handling reading\n9\n/usr/local/pgsql/bin/postmaster: ServerLoop: handling reading\n9\n/usr/local/pgsql/bin/postmaster: ServerLoop: handling writing\n9\n/usr/local/pgsql/bin/postmaster: BackendStartup: environ dump:\n-----------------------------------------\n !C:=C:\\WINNT\\Profiles\\Administrator\\Desktop\n COMPUTERNAME=WORM\n COMSPEC=C:\\WINNT\\system32\\cmd.exe\n HOMEDRIVE=C:\n HOMEPATH=\\\n HOSTNAME=worm\n HOSTTYPE=i586\n INCLUDE=C:\\Program Files\\Microsoft 
Visual\nStudio\\VC98\\atl\\include;C:\\Program Files\\Microsoft Visual\nStudio\\VC98\\mfc\\include;\nC:\\Program Files\\Microsoft Visual Studio\\VC98\\include\n LIB=C:\\Program Files\\Microsoft Visual\nStudio\\VC98\\mfc\\lib;C:\\Program Files\\Microsoft Visual Studio\\VC98\\lib\n LOGONSERVER=\\\\WORM\n MACHTYPE=i586-pc-cygwin32\n MAKE_MODE=UNIX\n MSDEVDIR=C:\\Program Files\\Microsoft Visual Studio\\Common\\MSDev98\n NUMBER_OF_PROCESSORS=1\n OS2LIBPATH=C:\\WINNT\\system32\\os2\\dll;\n OS=Windows_NT\n OSTYPE=cygwin32\n \nPATH=/usr/local/pgsql/bin:/usr/local/pgsql/bin:/sw/CYGWIN~1/H-I586~1/bin:/usr/local/bin:/WINNT/system32:/WINNT:/Program\nFile\ns/Microsoft Visual Studio/Common/Tools/WinNT:/Program Files/Microsoft\nVisual Studio/Common/MSDev98/Bin:/Program Files/Microsoft Visu\nal Studio/Common/Tools:/Program Files/Microsoft Visual Studio/VC98/bin\n PATHEXT=.COM;.EXE;.BAT;.CMD\n PGDATA=/usr/local/pgsql/data\n PGHOME=/usr/local/pgsql\n PGLIB=/usr/local/pgsql/lib\n PROCESSOR_ARCHITECTURE=x86\n PROCESSOR_IDENTIFIER=x86 Family 6 Model 0 Stepping 0,\nCyrixInstead\n PROCESSOR_LEVEL=6\n PROCESSOR_REVISION=0000\n PROMPT=$P$G\n PWD=/usr/local/pgsql/data\n SHELL=/bin/sh\n SHLVL=1\n SYSTEMDRIVE=C:\n SYSTEMROOT=C:\\WINNT\n TEMP=C:\\TEMP\n TERM=cygwin\n TMP=C:\\TEMP\n USER=administrator\n USERDOMAIN=WORM\n USERNAME=Administrator\n USERPROFILE=C:\\WINNT\\Profiles\\Administrator\n WINDIR=C:\\WINNT\n _=/usr/local/pgsql/bin/postmaster\n POSTPORT=5432\n POSTID=2147483647\n PG_USER=administrator\n IPC_KEY=5432000\n-----------------------------------------\n/usr/local/pgsql/bin/postmaster child[2528]: starting with\n(/usr/local/pgsql/bin/postgres -d3 -v131072 -p template1 )\nFindExec: found \"/usr/local/pgsql/bin/postgres\" using argv[0]\ndebug info:\n User = administrator\n RemoteHost = 10.0.0.100\n RemotePort = 5271\n DatabaseName = template1\n Verbose = 3\n Noversion = f\n timings = f\n dates = Normal\n bufsize = 64\n sortmem = 512\n query echo = 
f\nInitPostgres\n/usr/local/pgsql/bin/postmaster: BackendStartup: pid 2528 user\nadministrator db template1 socket 9\nError semaphore semaphore not equal 0\nError semaphore semaphore not equal 0\nError semaphore semaphore not equal 0\nError semaphore semaphore not equal 0\nproc_exit(0) [#0]\nshmem_exit(0) [#0]\nexit(0)\n/usr/local/pgsql/bin/postmaster: reaping dead processes...\n/usr/local/pgsql/bin/postmaster: CleanupProc: pid 2528 exited with\nstatus 0\n\n\n===============================================\nLOG2 failed backend log: postmaster -i -d 3\n-----------------------------------------------\nnote: \"Error semaphore semaphore not equal 0\"\n===============================================\nexit(0)\n/usr/local/pgsql/bin/postmaster: reaping dead processes...\n/usr/local/pgsql/bin/postmaster: CleanupProc: pid 2589 exited with\nstatus 0\n/usr/local/pgsql/bin/postmaster: ServerLoop: handling reading\n9\n/usr/local/pgsql/bin/postmaster: ServerLoop: handling reading\n9\n/usr/local/pgsql/bin/postmaster: ServerLoop: handling writing\n9\n/usr/local/pgsql/bin/postmaster: BackendStartup: environ dump:\n-----------------------------------------\n !C:=C:\\WINNT\\Profiles\\Administrator\\Desktop\n COMPUTERNAME=WORM\n COMSPEC=C:\\WINNT\\system32\\cmd.exe\n HOMEDRIVE=C:\n HOMEPATH=\\\n HOSTNAME=worm\n HOSTTYPE=i586\n INCLUDE=C:\\Program Files\\Microsoft Visual\nStudio\\VC98\\atl\\include;C:\\Program Files\\Microsoft Visual\nStudio\\VC98\\mfc\\include;\nC:\\Program Files\\Microsoft Visual Studio\\VC98\\include\n LIB=C:\\Program Files\\Microsoft Visual\nStudio\\VC98\\mfc\\lib;C:\\Program Files\\Microsoft Visual Studio\\VC98\\lib\n LOGONSERVER=\\\\WORM\n MACHTYPE=i586-pc-cygwin32\n MAKE_MODE=UNIX\n MSDEVDIR=C:\\Program Files\\Microsoft Visual Studio\\Common\\MSDev98\n NUMBER_OF_PROCESSORS=1\n OS2LIBPATH=C:\\WINNT\\system32\\os2\\dll;\n OS=Windows_NT\n OSTYPE=cygwin32\n 
\nPATH=/usr/local/pgsql/bin:/usr/local/pgsql/bin:/sw/CYGWIN~1/H-I586~1/bin:/usr/local/bin:/WINNT/system32:/WINNT:/Program\nFile\ns/Microsoft Visual Studio/Common/Tools/WinNT:/Program Files/Microsoft\nVisual Studio/Common/MSDev98/Bin:/Program Files/Microsoft Visu\nal Studio/Common/Tools:/Program Files/Microsoft Visual Studio/VC98/bin\n PATHEXT=.COM;.EXE;.BAT;.CMD\n PGDATA=/usr/local/pgsql/data\n PGHOME=/usr/local/pgsql\n PGLIB=/usr/local/pgsql/lib\n PROCESSOR_ARCHITECTURE=x86\n PROCESSOR_IDENTIFIER=x86 Family 6 Model 0 Stepping 0,\nCyrixInstead\n PROCESSOR_LEVEL=6\n PROCESSOR_REVISION=0000\n PROMPT=$P$G\n PWD=/usr/local/pgsql/data\n SHELL=/bin/sh\n SHLVL=1\n SYSTEMDRIVE=C:\n SYSTEMROOT=C:\\WINNT\n TEMP=C:\\TEMP\n TERM=cygwin\n TMP=C:\\TEMP\n USER=administrator\n USERDOMAIN=WORM\n USERNAME=Administrator\n USERPROFILE=C:\\WINNT\\Profiles\\Administrator\n WINDIR=C:\\WINNT\n _=/usr/local/pgsql/bin/postmaster\n POSTPORT=5432\n POSTID=2147483614\n PG_USER=administrator\n IPC_KEY=5432000\n-----------------------------------------\n/usr/local/pgsql/bin/postmaster child[2590]: starting with\n(/usr/local/pgsql/bin/postgres -d3 -v131072 -p mentor )\n/usr/local/pgsql/bin/postmaster: BackendStartup: pid 2590 user\nadministrator db mentor socket 9\nFindExec: found \"/usr/local/pgsql/bin/postgres\" using argv[0]\ndebug info:\n User = administrator\n RemoteHost = 127.0.0.1\n RemotePort = 1084\n DatabaseName = mentor\n Verbose = 3\n Noversion = f\n timings = f\n dates = Normal\n bufsize = 64\n sortmem = 512\n query echo = f\nInitPostgres\nError semaphore semaphore not equal 0\nError semaphore semaphore not equal 0\nError semaphore semaphore not equal 0\nError semaphore semaphore not equal 0\nStartTransactionCommand\nquery: set DateStyle to 'ISO'\nProcessUtility: set DateStyle to 'ISO'\nCommitTransactionCommand\nStartTransactionCommand\nquery: set geqo to 'OFF'\nProcessUtility: set geqo to 'OFF'\nCommitTransactionCommand\nStartTransactionCommand\nquery: set ksqo to 
'ON'\nProcessUtility: set ksqo to 'ON'\nCommitTransactionCommand\nStartTransactionCommand\nquery: select oid from pg_type where typname='lo'\nProcessQuery\nCommitTransactionCommand\nStartTransactionCommand\nquery: SELECT * FROM released_document WHERE document_id = 2\nProcessQuery\nCommitTransactionCommand\nStartTransactionCommand\nquery: SELECT popup_id, content FROM popup WHERE document_id = 2 AND\ndocument_version = 0 ORDER BY popup_id\nProcessQuery\nCommitTransactionCommand\nStartTransactionCommand\nquery: SELECT link_id, type, vector FROM link WHERE document_id = 2 AND\ndocument_version = 0\nProcessQuery\nCommitTransactionCommand\nStartTransactionCommand\nquery: SELECT section_number, name FROM section WHERE document_id = 2\nAND document_version = 0 ORDER BY section_number\nProcessQuery\nCommitTransactionCommand\nStartTransactionCommand\nquery: SELECT text, textel_number, type FROM typed_textel WHERE\ndocument_id = 2 AND document_version = 0 AND section_number = 0 ORDE\nR BY textel_number\nProcessQuery\nCommitTransactionCommand\nStartTransactionCommand\nquery: SELECT text, textel_number, type FROM typed_textel WHERE\ndocument_id = 2 AND document_version = 0 AND section_number = 1 ORDE\nR BY textel_number\nProcessQuery\nCommitTransactionCommand\nStartTransactionCommand\nquery: SELECT text, textel_number, type FROM typed_textel WHERE\ndocument_id = 2 AND document_version = 0 AND section_number = 2 ORDE\nR BY textel_number\nProcessQuery\nCommitTransactionCommand\nStartTransactionCommand\nquery: SELECT text, textel_number, type FROM typed_textel WHERE\ndocument_id = 2 AND document_version = 0 AND section_number = 3 ORDE\nR BY textel_number\nProcessQuery\nCommitTransactionCommand\npq_recvbuf: unexpected EOF on client connection\nproc_exit(0) [#0]\nshmem_exit(0) [#0]\nexit(0)\n/usr/local/pgsql/bin/postmaster: reaping dead processes...\n/usr/local/pgsql/bin/postmaster: CleanupProc: pid 2590 exited with\nstatus 0\n/usr/local/pgsql/bin/postmaster: ServerLoop: handling 
reading\n9\n/usr/local/pgsql/bin/postmaster: ServerLoop: handling reading\n9\n/usr/local/pgsql/bin/postmaster: ServerLoop: handling writing\n9\n/usr/local/pgsql/bin/postmaster: BackendStartup: environ dump:\n-----------------------------------------\n !C:=C:\\WINNT\\Profiles\\Administrator\\Desktop\n COMPUTERNAME=WORM\n COMSPEC=C:\\WINNT\\system32\\cmd.exe\n HOMEDRIVE=C:\n HOMEPATH=\\\n HOSTNAME=worm\n HOSTTYPE=i586\n INCLUDE=C:\\Program Files\\Microsoft Visual\nStudio\\VC98\\atl\\include;C:\\Program Files\\Microsoft Visual\nStudio\\VC98\\mfc\\include;\nC:\\Program Files\\Microsoft Visual Studio\\VC98\\include\n LIB=C:\\Program Files\\Microsoft Visual\nStudio\\VC98\\mfc\\lib;C:\\Program Files\\Microsoft Visual Studio\\VC98\\lib\n LOGONSERVER=\\\\WORM\n MACHTYPE=i586-pc-cygwin32\n MAKE_MODE=UNIX\n MSDEVDIR=C:\\Program Files\\Microsoft Visual Studio\\Common\\MSDev98\n NUMBER_OF_PROCESSORS=1\n OS2LIBPATH=C:\\WINNT\\system32\\os2\\dll;\n OS=Windows_NT\n OSTYPE=cygwin32\n \nPATH=/usr/local/pgsql/bin:/usr/local/pgsql/bin:/sw/CYGWIN~1/H-I586~1/bin:/usr/local/bin:/WINNT/system32:/WINNT:/Program\nFile\ns/Microsoft Visual Studio/Common/Tools/WinNT:/Program Files/Microsoft\nVisual Studio/Common/MSDev98/Bin:/Program Files/Microsoft Visu\nal Studio/Common/Tools:/Program Files/Microsoft Visual Studio/VC98/bin\n PATHEXT=.COM;.EXE;.BAT;.CMD\n PGDATA=/usr/local/pgsql/data\n PGHOME=/usr/local/pgsql\n PGLIB=/usr/local/pgsql/lib\n PROCESSOR_ARCHITECTURE=x86\n PROCESSOR_IDENTIFIER=x86 Family 6 Model 0 Stepping 0,\nCyrixInstead\n PROCESSOR_LEVEL=6\n PROCESSOR_REVISION=0000\n PROMPT=$P$G\n PWD=/usr/local/pgsql/data\n SHELL=/bin/sh\n SHLVL=1\n SYSTEMDRIVE=C:\n SYSTEMROOT=C:\\WINNT\n TEMP=C:\\TEMP\n TERM=cygwin\n TMP=C:\\TEMP\n USER=administrator\n USERDOMAIN=WORM\n USERNAME=Administrator\n USERPROFILE=C:\\WINNT\\Profiles\\Administrator\n WINDIR=C:\\WINNT\n _=/usr/local/pgsql/bin/postmaster\n POSTPORT=5432\n POSTID=2147483613\n PG_USER=administrator\n 
IPC_KEY=5432000\n-----------------------------------------\n/usr/local/pgsql/bin/postmaster child[2591]: starting with\n(/usr/local/pgsql/bin/postgres -d3 -v131072 -p mentor )\n/usr/local/pgsql/bin/postmaster: BackendStartup: pid 2591 user\nadministrator db mentor socket 9\nFindExec: found \"/usr/local/pgsql/bin/postgres\" using argv[0]\ndebug info:\n User = administrator\n RemoteHost = 127.0.0.1\n RemotePort = 1085\n DatabaseName = mentor\n Verbose = 3\n Noversion = f\n timings = f\n dates = Normal\n bufsize = 64\n sortmem = 512\n query echo = f\nInitPostgres\n\n===========================================================\nEnd of log\n===========================================================\n",
"msg_date": "Thu, 24 Jun 1999 19:38:57 +1000",
"msg_from": "\"Sam O'Connor\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres on NT freezing"
},
{
"msg_contents": "> Hi all,\n> \n> I have been using PostgreSQL 6.4.2 on Debain GNU/Linux\n> for a few months. I am using the Windows ODBC drivers\n> and a C++/MFC client program.\n> I fixed a few bugs in the ODBC drivers and now\n> everything works perfectly.\n\nWow, that is strange. I wonder what that ipc thing is doing. I guess\nis it the emulation of Unix environment. You can't attach to the\nrunning postmaster and get a backtrace, can you?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 24 Jun 1999 11:46:27 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PORTS] Postgres on NT freezing"
},
{
"msg_contents": "> > I fixed a few bugs in the ODBC drivers and now\n> > everything works perfectly.\n\nHmm, I wonder if anyone else would like everything working perfectly.\nAny patches for us (hint, hint)?\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 24 Jun 1999 16:04:11 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PORTS] Postgres on NT freezing"
}
] |
[
{
"msg_contents": "Hi,\n\n I always execute 'regression test' and 'regression bigtest'\nwhen PostgreSQL was enhanced. However,'regression bigtest' needs\nthe very long processing time in PostgreSQL-6.5. In my computer, \nit is taken of about 1 hour. \n\n The reason why the processing time is long is because 1000 \ndigits are calculated using the 'LOG' and 'POWER' function. \n\n Actual statement in \"postgresql-6.5/src/test/regress/sql/\nnumeric_big.sql\" is the following.\n\n INSERT INTO num_result SELECT id, 0, POWER('10'::numeric,\n LN(ABS(round(val,1000)))) FROM num_data WHERE val != '0.0';\n\n\n But, the processing ends for a few minutes when this \n\"LN(ABS(round(val,1000)))\" is made to be \"LN(ABS(round(val,30)))\".\n\n INSERT or SELECT must be tested using the value of 1000 digits,\nbecause to handle NUMERIC and DECIMAL data type to 1000 digits is\npossible. \n\n However, I think that there is no necessity of calculating the \nvalue of 1000 digits in the 'LOG' function. \n\n Comments?\n\n--\nRegards.\n\nSAKAIDA Masaaki <[email protected]>\nPersonal Software, Inc. Osaka Japan\n\n",
"msg_date": "Thu, 24 Jun 1999 20:21:58 +0900",
"msg_from": "SAKAIDA <[email protected]>",
"msg_from_op": true,
"msg_subject": "regression bigtest needs very long time"
},
{
"msg_contents": "> Hi,\n> \n> I always execute 'regression test' and 'regression bigtest'\n> when PostgreSQL was enhanced. However,'regression bigtest' needs\n> the very long processing time in PostgreSQL-6.5. In my computer, \n> it is taken of about 1 hour. \n> \n> The reason why the processing time is long is because 1000 \n> digits are calculated using the 'LOG' and 'POWER' function. \n> \n> Actual statement in \"postgresql-6.5/src/test/regress/sql/\n> numeric_big.sql\" is the following.\n> \n> INSERT INTO num_result SELECT id, 0, POWER('10'::numeric,\n> LN(ABS(round(val,1000)))) FROM num_data WHERE val != '0.0';\n> \n> \n> But, the processing ends for a few minutes when this \n> \"LN(ABS(round(val,1000)))\" is made to be \"LN(ABS(round(val,30)))\".\n> \n> INSERT or SELECT must be tested using the value of 1000 digits,\n> because to handle NUMERIC and DECIMAL data type to 1000 digits is\n> possible. \n> \n> However, I think that there is no necessity of calculating the \n> value of 1000 digits in the 'LOG' function. \n> \n\nnumeric/decimal is a new type for this release. I assume this extra\nprocessing will be removed once we are sure it works.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 24 Jun 1999 11:47:57 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] regression bigtest needs very long time"
},
{
"msg_contents": "Bruce Momjian <[email protected]> wrote:\n> \n> > SAKAIDA wrote: \n> > However, I think that there is no necessity of calculating the \n> > value of 1000 digits in the 'LOG' function. \n> > \n> \n> numeric/decimal is a new type for this release. I assume this extra\n> processing will be removed once we are sure it works.\n\n Thank you for your reply. At the next version, I hope that\n'regression test/bigtest' ends in the short time. \n\n The patch as an example which I considered is the following.\nIf this patch is applied, the processing which requires 1.5 hours\nin the current ends for 5 minutes.\n\n--\nRegards.\n\nSAKAIDA Masaaki <[email protected]>\nOsaka, Japan\n\n\n*** postgresql-6.5/src/test/regress/sql/numeric.sql.orig\tFri Jun 11 02:49:31 1999\n--- postgresql-6.5/src/test/regress/sql/numeric.sql\tWed Jun 16 13:46:41 1999\n***************\n*** 626,632 ****\n -- * POWER(10, LN(value)) check\n -- ******************************\n DELETE FROM num_result;\n! INSERT INTO num_result SELECT id, 0, POWER('10'::numeric, LN(ABS(round(val,300))))\n FROM num_data\n WHERE val != '0.0';\n SELECT t1.id1, t1.result, t2.expected\n--- 626,632 ----\n -- * POWER(10, LN(value)) check\n -- ******************************\n DELETE FROM num_result;\n! INSERT INTO num_result SELECT id, 0, POWER('10'::numeric, LN(ABS(round(val,30))))\n FROM num_data\n WHERE val != '0.0';\n SELECT t1.id1, t1.result, t2.expected\n\n\n*** postgresql-6.5/src/test/regress/sql/numeric_big.sql.orig\tThu Jun 17 19:22:53 1999\n--- postgresql-6.5/src/test/regress/sql/numeric_big.sql\tThu Jun 17 19:27:36 1999\n***************\n*** 602,608 ****\n -- * Natural logarithm check\n -- ******************************\n DELETE FROM num_result;\n! INSERT INTO num_result SELECT id, 0, LN(ABS(val))\n FROM num_data\n WHERE val != '0.0';\n SELECT t1.id1, t1.result, t2.expected\n--- 602,608 ----\n -- * Natural logarithm check\n -- ******************************\n DELETE FROM num_result;\n! 
INSERT INTO num_result SELECT id, 0, LN(round(ABS(val),30))\n FROM num_data\n WHERE val != '0.0';\n SELECT t1.id1, t1.result, t2.expected\n***************\n*** 614,620 ****\n -- * Logarithm base 10 check\n -- ******************************\n DELETE FROM num_result;\n! INSERT INTO num_result SELECT id, 0, LOG('10'::numeric, ABS(val))\n FROM num_data\n WHERE val != '0.0';\n SELECT t1.id1, t1.result, t2.expected\n--- 614,620 ----\n -- * Logarithm base 10 check\n -- ******************************\n DELETE FROM num_result;\n! INSERT INTO num_result SELECT id, 0, LOG('10'::numeric, round(ABS(val),30))\n FROM num_data\n WHERE val != '0.0';\n SELECT t1.id1, t1.result, t2.expected\n***************\n*** 626,632 ****\n -- * POWER(10, LN(value)) check\n -- ******************************\n DELETE FROM num_result;\n! INSERT INTO num_result SELECT id, 0, POWER('10'::numeric, LN(ABS(round(val,1000))))\n FROM num_data\n WHERE val != '0.0';\n SELECT t1.id1, t1.result, t2.expected\n--- 626,632 ----\n -- * POWER(10, LN(value)) check\n -- ******************************\n DELETE FROM num_result;\n! INSERT INTO num_result SELECT id, 0, POWER('10'::numeric, LN(ABS(round(val,30))))\n FROM num_data\n WHERE val != '0.0';\n SELECT t1.id1, t1.result, t2.expected\n\n",
"msg_date": "Sat, 26 Jun 1999 13:41:07 +0900",
"msg_from": "SAKAIDA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] regression bigtest needs very long time"
},
{
"msg_contents": "> Bruce Momjian <[email protected]> wrote:\n> > \n> > > SAKAIDA wrote: \n> > > However, I think that there is no necessity of calculating the \n> > > value of 1000 digits in the 'LOG' function. \n> > > \n> > \n> > numeric/decimal is a new type for this release. I assume this extra\n> > processing will be removed once we are sure it works.\n> \n> Thank you for your reply. At the next version, I hope that\n> 'regression test/bigtest' ends in the short time. \n> \n> The patch as an example which I considered is the following.\n> If this patch is applied, the processing which requires 1.5 hours\n> in the current ends for 5 minutes.\n\nJust don't run bigtest. It is only for people who are having trouble\nwith the new numeric type.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 26 Jun 1999 11:51:41 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] regression bigtest needs very long time"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Just don't run bigtest. It is only for people who are having trouble\n> with the new numeric type.\n\nI don't mind too much that bigtest takes forever --- as you say,\nit shouldn't be run except by people who want a thorough test.\n\nBut I *am* unhappy that the regular numeric test takes much longer than\nall the other regression tests put together. That's an unreasonable\namount of effort spent on one feature, and it gets really annoying for\nsomeone like me who's in the habit of running the regress tests after\nany update. Is there anything this test is likely to catch that\nwouldn't get caught with a much narrower field width (say 10 digits\ninstead of 30)?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 26 Jun 1999 13:27:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] regression bigtest needs very long time "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Just don't run bigtest. It is only for people who are having trouble\n> > with the new numeric type.\n> \n> I don't mind too much that bigtest takes forever --- as you say,\n> it shouldn't be run except by people who want a thorough test.\n> \n> But I *am* unhappy that the regular numeric test takes much longer than\n> all the other regression tests put together. That's an unreasonable\n> amount of effort spent on one feature, and it gets really annoying for\n> someone like me who's in the habit of running the regress tests after\n> any update. Is there anything this test is likely to catch that\n> wouldn't get caught with a much narrower field width (say 10 digits\n> instead of 30)?\n\nOh, I didn't realize this. We certainly should think about reducing the\ntime spent on it, though it is kind of lame to be testing numeric in a\nprecision that is less than the standard int4 type.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 26 Jun 1999 14:57:17 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] regression bigtest needs very long time"
},
{
"msg_contents": "SAKAIDA wrote:\n\n>\n> Bruce Momjian <[email protected]> wrote:\n> >\n> > > SAKAIDA wrote:\n> > > However, I think that there is no necessity of calculating the\n> > > value of 1000 digits in the 'LOG' function.\n> > >\n> >\n> > numeric/decimal is a new type for this release. I assume this extra\n> > processing will be removed once we are sure it works.\n>\n> Thank you for your reply. At the next version, I hope that\n> 'regression test/bigtest' ends in the short time.\n>\n> The patch as an example which I considered is the following.\n> If this patch is applied, the processing which requires 1.5 hours\n> in the current ends for 5 minutes.\n\n The test was intended to check the internal low level\n functions of the NUMERIC datatype against MANY possible\n values. That's the reason for the high precision resulting in\n this runtime. That was a wanted side effect, not a bug!\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 28 Jun 1999 11:01:11 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] regression bigtest needs very long time"
},
{
"msg_contents": "Hi,\n\n> > > Bruce Momjian wrote:\n> > > Just don't run bigtest. It is only for people who are having trouble\n> > > with the new numeric type.\n> > \n> > Tom Lane wrote:\n> > I don't mind too much that bigtest takes forever --- as you say,\n> > it shouldn't be run except by people who want a thorough test.\n\n At the end of regression normal test, the following message \nis displayed.\n\n \"To run the optional huge test(s) too type 'make bigtest'\" \n\n Many users, especialy those who install \"PostgreSQL\" for the \nfirst time, may type 'make bigtest' and may feel the PostgreSQL\nunstable. Because the bigtest outputs no messages for long time \nand seems to be no executing at the point of numeric testing.\n\n Therefore, if it is not necessary for the general user to \nexecute \"regression bigtest\", I think that the message of \n'make bigtest' should be removed or that the message should be \nchanged like \"it takes several hours ....\".\n\n\n> > But I *am* unhappy that the regular numeric test takes much longer than\n> > all the other regression tests put together. That's an unreasonable\n> > amount of effort spent on one feature, and it gets really annoying for\n> > someone like me who's in the habit of running the regress tests after\n> > any update. \n\n I think so too.\n\n\n> > Is there anything this test is likely to catch that\n> > wouldn't get caught with a much narrower field width (say 10 digits\n> > instead of 30)?\n>\n> Bruce Momjian wrote:\n> Oh, I didn't realize this. We certainly should think about reducing the\n> time spent on it, though it is kind of lame to be testing numeric in a\n> precision that is less than the standard int4 type.\n\n\n--\nRegards.\n\nSAKAIDA Masaaki <[email protected]>\nOsaka, Japan\n\n",
"msg_date": "Mon, 28 Jun 1999 18:33:30 +0900",
"msg_from": "SAKAIDA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] regression bigtest needs very long time"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n> Oh, I didn't realize this. We certainly should think about reducing the\n> time spent on it, though it is kind of lame to be testing numeric in a\n> precision that is less than the standard int4 type.\n\n We certainly should think about a general speedup of NUMERIC.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 28 Jun 1999 11:58:43 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] regression bigtest needs very long time"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> \n> > Oh, I didn't realize this. We certainly should think about reducing the\n> > time spent on it, though it is kind of lame to be testing numeric in a\n> > precision that is less than the standard int4 type.\n> \n> We certainly should think about a general speedup of NUMERIC.\n\nHow would we do that? I assumed it was already pretty optimized.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 28 Jun 1999 14:30:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] regression bigtest needs very long time"
},
{
"msg_contents": "> At the end of regression normal test, the following message \n> is displayed.\n> \n> \"To run the optional huge test(s) too type 'make bigtest'\" \n> \n> Many users, especialy those who install \"PostgreSQL\" for the \n> first time, may type 'make bigtest' and may feel the PostgreSQL\n> unstable. Because the bigtest outputs no messages for long time \n> and seems to be no executing at the point of numeric testing.\n> \n> Therefore, if it is not necessary for the general user to \n> execute \"regression bigtest\", I think that the message of \n> 'make bigtest' should be removed or that the message should be \n> changed like \"it takes several hours ....\".\n\nWarning added:\n\n\tThese big tests can take over an hour to complete\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 28 Jun 1999 14:37:44 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] regression bigtest needs very long time"
},
{
"msg_contents": ">\n> > Bruce Momjian wrote:\n> >\n> > > Oh, I didn't realize this. We certainly should think about reducing the\n> > > time spent on it, though it is kind of lame to be testing numeric in a\n> > > precision that is less than the standard int4 type.\n> >\n> > We certainly should think about a general speedup of NUMERIC.\n>\n> How would we do that? I assumed it was already pretty optimized.\n\n By reimplementing the entire internals from scratch again :-)\n\n For now the db storage format is something like packed\n decimal. Two digits fit into one byte. Sign, scale and\n precision are stored in a header. For computations, this gets\n unpacked so every digit is stored in one byte and all the\n computations are performed on the digit level and base 10.\n\n Computers are good in performing computations in other bases\n (hex, octal etc.). And we can assume that any architecture\n where PostgreSQL can be installed supports 32 bit integers.\n Thus, a good choice for an internal base whould be 10000 and\n the digits(10000) stored in small integers.\n\n 1. Converting between decimal (base 10) and base 10000 is\n relatively simple. One digit(10000) holds 4 digits(10).\n\n 2. Computations using a 32 bit integer for carry/borrow are\n safe because the biggest result of a one digit(10000)\n add/subtract/multiply cannot exceed the 32 bits.\n\n The speedup (I expect) results from the fact that the inner\n loops of add, subtract and multiply will then handle 4\n decimal digits per cycle instead of one! Doing a\n\n 1234.5678 + 2345.6789\n\n then needs 2 internal cycles instead of 8. And\n\n 100.123 + 12030.12345\n\n needs 4 cycles instead of 10 (because the decimal point has\n the same meaning in base 10000 the last value is stored\n internally as short ints 1, 2030, 1234, 5000). 
This is the\n worst case and it still saved 60% of the innermost cycles!\n\n Rounding and checking for overflow will get a little more\n difficult, but I think it's worth the efford.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 29 Jun 1999 12:48:05 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] regression bigtest needs very long time"
},
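The base-10000 arithmetic Jan describes can be sketched in C. This is only an illustration of the technique, not PostgreSQL code; the function name, the equal-length/equal-scale representation, and the most-significant-digit-first ordering are assumptions made for brevity.

```c
#include <assert.h>   /* for the self-check below */
#include <stdint.h>

#define NBASE 10000

/* Add two base-10000 numbers of equal length and scale.
 * a[0] is the most significant digit(10000), each in 0..9999, so
 * 12030.12345 is stored as 1, 2030, 1234, 5000 as in the example above.
 * A 32-bit carry is safe: 9999 + 9999 + 1 is far below 2^31. */
static int32_t
add_base10000(const int16_t *a, const int16_t *b, int16_t *result, int ndigits)
{
    int32_t carry = 0;
    int i;

    for (i = ndigits - 1; i >= 0; i--)
    {
        int32_t sum = (int32_t) a[i] + (int32_t) b[i] + carry;

        carry = sum / NBASE;               /* 0 or 1 for addition */
        result[i] = (int16_t) (sum % NBASE);
    }
    return carry;     /* nonzero means an extra leading digit is needed */
}
```

With a = {0, 100, 1230, 0} (100.123) and b = {1, 2030, 1234, 5000} (12030.12345), the loop runs four times and yields {1, 2130, 2464, 5000}, i.e. 12130.24645 — the four cycles instead of ten that Jan counts.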
{
"msg_contents": "> needs 4 cycles instead of 10 (because the decimal point has\n> the same meaning in base 10000 the last value is stored\n> internally as short ints 1, 2030, 1234, 5000). This is the\n> worst case and it still saved 60% of the innermost cycles!\n\nInteresting. How do other Db's do it internally? Anyone know?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 29 Jun 1999 09:33:31 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] regression bigtest needs very long time"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> needs 4 cycles instead of 10 (because the decimal point has\n>> the same meaning in base 10000 the last value is stored\n>> internally as short ints 1, 2030, 1234, 5000). This is the\n>> worst case and it still saved 60% of the innermost cycles!\n\n> Interesting. How do other Db's do it internally? Anyone know?\n\nProbably the same way, if they want to be portable. What Jan is\ndescribing is a *real* standard technique (it's recommended in Knuth).\nAFAIK the only other way to speed up a digit-at-a-time implementation\nis to drop down to the assembly level and use packed-decimal\ninstructions ... if your machine has any ...\n\nOne thing worth thinking about is whether the storage format shouldn't\nbe made the same as the calculation format, so as to eliminate the\nconversion costs. At four decimal digits per int2, it wouldn't cost\nus anything to do so.\n\n\t\t\tregards, tom lane\n\nPS: BTW, Jan, if you do not have a copy of Knuth's volume 2, I'd\ndefinitely recommend laying your hands on it for this project.\nHis description of multiprecision arithmetic is the best I've seen\nanywhere.\n\nIf we thought that the math functions (sqrt, exp, etc) for numerics\nwere really getting used for anything, it might also be fun to try\nto put in some better algorithms for them. I've got a copy of Cody\nand Waite, which has been the bible for such things for twenty years.\nBut my guess is that it wouldn't be worth the trouble, except to the\nextent that it speeds up the regression tests ;-)\n",
"msg_date": "Tue, 29 Jun 1999 10:31:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] regression bigtest needs very long time "
},
{
"msg_contents": "Tom Lane wrote:\n\n> One thing worth thinking about is whether the storage format shouldn't\n> be made the same as the calculation format, so as to eliminate the\n> conversion costs. At four decimal digits per int2, it wouldn't cost\n> us anything to do so.\n\n That's an extra bonus point from the described internal\n format.\n\n>\n> regards, tom lane\n>\n> PS: BTW, Jan, if you do not have a copy of Knuth's volume 2, I'd\n> definitely recommend laying your hands on it for this project.\n> His description of multiprecision arithmetic is the best I've seen\n> anywhere.\n\n I don't have so far - thanks for the hint.\n\n> If we thought that the math functions (sqrt, exp, etc) for numerics\n> were really getting used for anything, it might also be fun to try\n> to put in some better algorithms for them. I've got a copy of Cody\n> and Waite, which has been the bible for such things for twenty years.\n> But my guess is that it wouldn't be worth the trouble, except to the\n> extent that it speeds up the regression tests ;-)\n\n They are based on the standard Taylor/McLaurin definitions\n for those functions.\n\n Most times I need trigonometric functions or the like one of\n my sliderules still has enough precision because I'm unable\n to draw 0.1mm or more precise with a pencil on a paper. YES,\n I love to USE sliderules (I have a dozen now, some regular\n ones, some circular ones, some pocket sized and one circular\n pocket sized one that looks more like a stopwatch than a\n sliderule).\n\n Thus, usually the precision of float8 should be more than\n enough for those calculations. Making NUMERIC able to handle\n these functions in it's extreme precision shouldn't really be\n that time critical.\n\n Remember: The lack of mathematical knowledge never shows up\n better than in unappropriate precision of numerical\n calculations.\n C. F. 
Gauss\n (Sorry for the poor translation)\n\n What Gauss (born 1777 and the first who knew how to take the\n square root out of negative numbers) meant by that is, it is\n stupid to calculate with a precision of 10 or no digits after\n the decimal point if you're able to measure with 4 digits.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 29 Jun 1999 18:01:16 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] regression bigtest needs very long time"
},
{
"msg_contents": "Hi,\n\[email protected] (Jan Wieck) wrote:\n>\n> Tom Lane wrote:\n> > If we thought that the math functions (sqrt, exp, etc) for numerics\n> > were really getting used for anything, it might also be fun to try\n> > to put in some better algorithms for them. I've got a copy of Cody\n> > and Waite, which has been the bible for such things for twenty years.\n> > But my guess is that it wouldn't be worth the trouble, except to the\n> > extent that it speeds up the regression tests ;-)\n>\n(snip)\n> \n> Thus, usually the precision of float8 should be more than\n> enough for those calculations. Making NUMERIC able to handle\n> these functions in it's extreme precision shouldn't really be\n> that time critical.\n> \n\n There are no problem concerning the NUMERIC test of INSERT/\nSELECT and add/subtract/multiply/division. The only problem is\nthe processing time.\n\n One solution which solves this problem is to change the argument \ninto *float8*. If the following changes are done, the processing \nwill become high-speed than a previous about 10 times. \n\n File :\"src/regress/sql/numeric.sql\"\n Statement:\"INSERT INTO num_result SELECT id, 0, \n POWER('10'::numeric,LN(ABS(round(val,300))) ...\"\n\n Change: \"LN(ABS(round(val,300))))\" \n to: \"LN(float8(ABS(round(va,300))))\"\n\n \n \n# Another solution is to automatically convert the argument of the \n LOG function into double precision data type in the *inside*. \n (But, I do not know what kind of effect will be caused by this \n solution.)\n\n--\nRegards.\n\nSAKAIDA Masaaki <[email protected]>\nOsaka, Japan\n\n",
"msg_date": "Wed, 30 Jun 1999 17:55:02 +0900",
"msg_from": "SAKAIDA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] regression bigtest needs very long time"
},
{
"msg_contents": "SAKAIDA Masaaki wrote:\n\n> There are no problem concerning the NUMERIC test of INSERT/\n> SELECT and add/subtract/multiply/division. The only problem is\n> the processing time.\n>\n> One solution which solves this problem is to change the argument\n> into *float8*. If the following changes are done, the processing\n> will become high-speed than a previous about 10 times.\n>\n> File :\"src/regress/sql/numeric.sql\"\n> Statement:\"INSERT INTO num_result SELECT id, 0,\n> POWER('10'::numeric,LN(ABS(round(val,300))) ...\"\n>\n> Change: \"LN(ABS(round(val,300))))\"\n> to: \"LN(float8(ABS(round(va,300))))\"\n>\n>\n>\n> # Another solution is to automatically convert the argument of the\n> LOG function into double precision data type in the *inside*.\n> (But, I do not know what kind of effect will be caused by this\n> solution.)\n\n The complex functions (LN, LOG, EXP, etc.) where added to\n NUMERIC for the case someone really needs higher precision\n than float8. The numeric_big test simply ensures that\n someone really get's the CORRECT result when computing a\n logarithm up to hundreds of digits. All the expected results\n fed into the tables are computed by scripts using bc(1) with\n a precision 200 digits higher than that used in the test\n itself. So I'm pretty sure NUMERIC returns a VERY GOOD\n approximation if I ask for the square root of 2 with 1000\n digits.\n\n One thing in mathematics that is silently forbidden is to\n present a result with digits that aren't significant! But it\n is the user to decide where the significance of his INPUT\n ends, not the database. So it is up to the user to decide\n when to loose precision by switching to float.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Wed, 30 Jun 1999 11:33:50 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] regression bigtest needs very long time"
},
{
"msg_contents": "Hi,\n\[email protected] (Jan Wieck) wrote:\n> \n> The complex functions (LN, LOG, EXP, etc.) where added to\n> NUMERIC for the case someone really needs higher precision\n> than float8. The numeric_big test simply ensures that\n> someone really get's the CORRECT result when computing a\n> logarithm up to hundreds of digits. All the expected results\n> fed into the tables are computed by scripts using bc(1) with\n> a precision 200 digits higher than that used in the test\n> itself. So I'm pretty sure NUMERIC returns a VERY GOOD\n> approximation if I ask for the square root of 2 with 1000\n> digits.\n\n I was able to understand the specification for the NUMERIC \ndata type. But, I can not yet understand the specification of \nthe regression normal test.\n\n File :\"src/regress/sql/numeric.sql\"\n Function : LN(ABS(round(val,300))) \n ----> LN(ABS(round(val,30))) <---- My hope\n\n Please teach me, \n\n Is there a difference of the calculation algorithm between 30 \nand 300 digits ?\n\n Is there a difference of something like CPU-dependence or like\ncompiler-dependence between 30 and 300 digits ?\n\n\n# If the answer is \"NO\", I think that the 300 digits case is \n not necessary once you are sure that it works, because \n\n 1. the 30 digits case is equivalent to the 300 digits case.\n 2. the 300 digits case is slow.\n 3. It is sufficiently large value even in 30 digits. \n\n--\nRegards.\n\nSAKAIDA Masaaki <[email protected]>\nOsaka, Japan\n\n",
"msg_date": "Thu, 01 Jul 1999 17:05:36 +0900",
"msg_from": "SAKAIDA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] regression bigtest needs very long time"
}
] |
[
{
"msg_contents": "At 10:07 AM 6/24/99 +0200, Zeugswetter Andreas IZ5 wrote:\n\n>Are we really doing a sync after the pg_log write ? While the sync\n>after datablock write seems necessary to guarantee consistency,\n>the sync after log write is actually not necessary to guarantee consistency.\n>Would it be a first step, to special case the writing to pg_log, as\n>to not sync (extra switch to backend) ? This would avoid the syncs\n>for read only transactions, since they don't cause data block writes.\n\nThis sounds like a creative hack to me, if it actually works...it would\nsolve the problem I (and other users who do lots of fast tiny hits on the\ndb) see with my web serving site.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, and other goodies at\n http://donb.photo.net\n",
"msg_date": "Thu, 24 Jun 1999 06:18:56 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] The dangers of \"-F\""
}
] |
[
{
"msg_contents": "The following message is send to me every day for about one month, I don't know what's happening with\npostmaster.\nIs there any body with the same problem ?\nPlease stop it.\n\[email protected] ha scritto:\n\n> Your message to Tiberiu Craciun <[email protected]> could not be completely delivered due to the\n\n______________________________________________________________\nPostgreSQL 6.5.0 on i586-pc-linux-gnu, compiled by gcc 2.7.2.3\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nJose'\n\n\n",
"msg_date": "Thu, 24 Jun 1999 15:34:39 +0200",
"msg_from": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Mess"
},
{
"msg_contents": ">\n> The following message is send to me every day for about one month, I don't know what's happening with\n> postmaster.\n> Is there any body with the same problem ?\n> Please stop it.\n>\n> [email protected] ha scritto:\n>\n> > Your message to Tiberiu Craciun <[email protected]> could not be completely delivered due to the\n>\n\n Yes, I'm seeing them too. Unfortunately, all messages to the\n postmaster of that site (I've tried various addresses) bounce\n too.\n\n Tiberiu:\n\n Could you please tell your mail host administrator that he\n should immediately fix it. Sending INCOMPLETE error messages\n about INCOMPLETE deliveries suggests, that absolutely nothing\n can be delivered completely. What a useless mailing system.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 24 Jun 1999 16:46:10 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Mess"
},
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n>> Is there any body with the same problem ?\n>> Please stop it.\n>> \n>> [email protected] ha scritto:\n>> \n>>>> Your message to Tiberiu Craciun <[email protected]> could not be completely delivered due to the\n>> \n\n> Yes, I'm seeing them too. Unfortunately, all messages to the\n> postmaster of that site (I've tried various addresses) bounce\n> too.\n\nI have seen them too. AFAICT any posting to the pgsql-sql mailing\nlist draws one. Evidently there is a broken address on that list.\n\nI asked Marc to unsubscribe this person a while ago, but I guess\nhe's still giving 'em the benefit of the doubt.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 24 Jun 1999 19:33:25 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Mess "
},
{
"msg_contents": "\nWant to tell me who it is?\n\nhub# grep sendit.se pgsql*\nhub# \n\n\nOn Thu, 24 Jun 1999, Tom Lane wrote:\n\n> [email protected] (Jan Wieck) writes:\n> >> Is there any body with the same problem ?\n> >> Please stop it.\n> >> \n> >> [email protected] ha scritto:\n> >> \n> >>>> Your message to Tiberiu Craciun <[email protected]> could not be completely delivered due to the\n> >> \n> \n> > Yes, I'm seeing them too. Unfortunately, all messages to the\n> > postmaster of that site (I've tried various addresses) bounce\n> > too.\n> \n> I have seen them too. AFAICT any posting to the pgsql-sql mailing\n> list draws one. Evidently there is a broken address on that list.\n> \n> I asked Marc to unsubscribe this person a while ago, but I guess\n> he's still giving 'em the benefit of the doubt.\n> \n> \t\t\tregards, tom lane\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 29 Jun 1999 01:22:35 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Mess "
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> Want to tell me who it is?\n\n> hub# grep sendit.se pgsql*\n> hub# \n\nHmm, that makes it harder. Any hits on \"Tiberiu\" ,\"Craciun\",\n\"4092483565\", or \"guest\"?\n\nI'd go look for myself, but the list membership files don't seem\nto be world-readable ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 29 Jun 1999 10:34:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Mess "
},
{
"msg_contents": "\nfound a 'tiberiu@' listing...let me know if it persists...\n\nOn Tue, 29 Jun 1999, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > Want to tell me who it is?\n> \n> > hub# grep sendit.se pgsql*\n> > hub# \n> \n> Hmm, that makes it harder. Any hits on \"Tiberiu\" ,\"Craciun\",\n> \"4092483565\", or \"guest\"?\n> \n> I'd go look for myself, but the list membership files don't seem\n> to be world-readable ...\n> \n> \t\t\tregards, tom lane\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 29 Jun 1999 12:52:48 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Mess "
}
] |
[
{
"msg_contents": "I am using the Sun Workshop C 5.0, noticed that I was getting the\nfollowing errors:\n\"pgconnection.h\", line 65: Error: string is not defined.\n\"pgconnection.cc\", line 134: Error: string is not defined.\n\"pgconnection.cc\", line 139: Error: Cannot return char* from a function\nthat should return int.\n3 Error(s) detected.\n\nFuther looking noticed that the standard namespace was not being used\nfor the \"string\" typedef, so the following patch fixed that for me.\nThis is for the 6.5 tar release & the 6/24 CVS checkout.\n\nvlad: diff -w3c interfaces/libpq++/pgconnection.h.orig\ninterfaces/libpq++/pgconnection.h\n*** interfaces/libpq++/pgconnection.h.orig Thu Jun 24 10:49:54 1999\n\n--- interfaces/libpq++/pgconnection.h Thu Jun 24 10:48:31 1999\n***************\n*** 23,28 ****\n--- 23,34 ----\n #include <stdio.h>\n #include <string>\n\n+ #ifdef __sun__\n+ #ifndef __GNUC__\n+ using namespace std;\n+ #endif\n+ #endif\n+\n extern \"C\" {\n #include \"libpq-fe.h\"\n }\n\n\n--\nBrian Millett\nEnterprise Consulting Group \"Heaven can not exist,\n(314) 205-9030 If the family is not eternal\"\[email protected] F. Ballard Washburn\n\n\n\n",
"msg_date": "Thu, 24 Jun 1999 11:23:32 -0500",
"msg_from": "Brian P Millett <[email protected]>",
"msg_from_op": true,
"msg_subject": "PATCH for pgconnection.h"
},
{
"msg_contents": "\nThis is the second mention of 'namespace' I have seen. Can we allow\nthis by default?\n\nOf course, I get:\n\n pgconnection.h:26: warning: namespaces are mostly broken in this version of g++\n\nbut it still works. Comments?\n\n> I am using the Sun Workshop C 5.0, noticed that I was getting the\n> following errors:\n> \"pgconnection.h\", line 65: Error: string is not defined.\n> \"pgconnection.cc\", line 134: Error: string is not defined.\n> \"pgconnection.cc\", line 139: Error: Cannot return char* from a function\n> that should return int.\n> 3 Error(s) detected.\n> \n> Futher looking noticed that the standard namespace was not being used\n> for the \"string\" typedef, so the following patch fixed that for me.\n> This is for the 6.5 tar release & the 6/24 CVS checkout.\n> \n> vlad: diff -w3c interfaces/libpq++/pgconnection.h.orig\n> interfaces/libpq++/pgconnection.h\n> *** interfaces/libpq++/pgconnection.h.orig Thu Jun 24 10:49:54 1999\n> \n> --- interfaces/libpq++/pgconnection.h Thu Jun 24 10:48:31 1999\n> ***************\n> *** 23,28 ****\n> --- 23,34 ----\n> #include <stdio.h>\n> #include <string>\n> \n> + #ifdef __sun__\n> + #ifndef __GNUC__\n> + using namespace std;\n> + #endif\n> + #endif\n> +\n> extern \"C\" {\n> #include \"libpq-fe.h\"\n> }\n> \n> \n> --\n> Brian Millett\n> Enterprise Consulting Group \"Heaven can not exist,\n> (314) 205-9030 If the family is not eternal\"\n> [email protected] F. Ballard Washburn\n> \n> \n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 7 Jul 1999 22:02:52 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PATCH for pgconnection.h"
},
{
"msg_contents": "\nOn 08-Jul-99 Bruce Momjian wrote:\n> \n> This is the second mention of 'namespace' I have seen. Can we allow\n> this by default?\n> \n> Of course, I get:\n> \n> pgconnection.h:26: warning: namespaces are mostly broken in this version of g++\n> \n> but it still works. Comments?\n\nUm.. No? Are you also using SWC 5.0? And if it's g++, do we have a version\nbesides Sun's? Sorry, I'm not familiar with Sun stuff.\n\nVince.\n\n> \n>> I am using the Sun Workshop C 5.0, noticed that I was getting the\n>> following errors:\n>> \"pgconnection.h\", line 65: Error: string is not defined.\n>> \"pgconnection.cc\", line 134: Error: string is not defined.\n>> \"pgconnection.cc\", line 139: Error: Cannot return char* from a function\n>> that should return int.\n>> 3 Error(s) detected.\n>> \n>> Futher looking noticed that the standard namespace was not being used\n>> for the \"string\" typedef, so the following patch fixed that for me.\n>> This is for the 6.5 tar release & the 6/24 CVS checkout.\n>> \n>> vlad: diff -w3c interfaces/libpq++/pgconnection.h.orig\n>> interfaces/libpq++/pgconnection.h\n>> *** interfaces/libpq++/pgconnection.h.orig Thu Jun 24 10:49:54 1999\n>> \n>> --- interfaces/libpq++/pgconnection.h Thu Jun 24 10:48:31 1999\n>> ***************\n>> *** 23,28 ****\n>> --- 23,34 ----\n>> #include <stdio.h>\n>> #include <string>\n>> \n>> + #ifdef __sun__\n>> + #ifndef __GNUC__\n>> + using namespace std;\n>> + #endif\n>> + #endif\n>> +\n>> extern \"C\" {\n>> #include \"libpq-fe.h\"\n>> }\n>> \n>> \n>> --\n>> Brian Millett\n>> Enterprise Consulting Group \"Heaven can not exist,\n>> (314) 205-9030 If the family is not eternal\"\n>> [email protected] F. Ballard Washburn\n>> \n>> \n>> \n>> \n>> \n> \n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n> \n\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n",
"msg_date": "Wed, 07 Jul 1999 22:20:26 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PATCH for pgconnection.h"
},
{
"msg_contents": "> \n> On 08-Jul-99 Bruce Momjian wrote:\n> > \n> > This is the second mention of 'namespace' I have seen. Can we allow\n> > this by default?\n> > \n> > Of course, I get:\n> > \n> > pgconnection.h:26: warning: namespaces are mostly broken in this version of g++\n> > \n> > but it still works. Comments?\n> \n> Um.. No? Are you also using SWC 5.0? And if it's g++, do we have a version\n> besides Sun's? Sorry, I'm not familiar with Sun stuff.\n> \n> Vince.\n\nI am using g++ in gcc 2.7.1.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 7 Jul 1999 23:02:24 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PATCH for pgconnection.h"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > \n> > On 08-Jul-99 Bruce Momjian wrote:\n> > > \n> > > This is the second mention of 'namespace' I have seen. Can we allow\n> > > this by default?\n> > > \n> > > Of course, I get:\n> > > \n> > > pgconnection.h:26: warning: namespaces are mostly broken in this version of g++\n> > > \n> > > but it still works. Comments?\n> > \n> > Um.. No? Are you also using SWC 5.0? And if it's g++, do we have a version\n> > besides Sun's? Sorry, I'm not familiar with Sun stuff.\n> > \n> > Vince.\n> \n> I am using g++ in gcc 2.7.1.\n\nRemember way back when when I had a\n\n#include <something>\n\nand it caused problems for some people who really needed\n\n#include <something.h>\n\nNow I think (but am certainly not sure!) that the difference between the\ntwo was meant to be that in the first instance, the standard namespace\nis used. I can't really follow what's happening as\n\n% cvs status pgconnection.cc\nFatal error, aborting.\n: no such user\n\nWe just changed the first to the second as in g++ all that <something>\ndoes is to include <something.h>, but it may not be true for other compilers.\n\nHope someone with a reference book near them can check this!\n\nCheers,\n\nPatrick\n",
"msg_date": "Thu, 8 Jul 1999 10:31:11 +0100 (BST)",
"msg_from": "\"Patrick Welche\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PATCH for pgconnection.h"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> \n>> + #ifdef __sun__\n>> + #ifndef __GNUC__\n>> + using namespace std;\n>> + #endif\n>> + #endif\n\nThe above is really, really ugly, not to say broken, because neither\nbeing on a Sun nor using gcc have anything to do with whether your\ncompiler handles namespaces. The problem we are looking at here is that\nthe C++ standard is a moving target, and some people have compilers that\nare newer than others.\n\nI think the proper solution is to add a configure-time test to see\nwhether a namespace declaration is needed. We could use configure to\nsee whether we need \".h\" on the end of C++ include file references, too.\n(That's another thing that's going to be site-dependent for a while to\ncome.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 08 Jul 1999 10:40:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PATCH for pgconnection.h "
},
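The configure-time probe Tom suggests could look roughly like the following standalone shell sketch. The real thing would be an autoconf macro inside PostgreSQL's configure machinery; the temp-file names, the `HAVE_NAMESPACE_STD` define, and falling back to `g++` when `CXX` is unset are placeholders, not the project's actual conventions.

```shell
# Hypothetical probe: does the C++ compiler accept header-style
# <string> (no .h) together with 'using namespace std;'?  Older
# compilers want <string.h> and no namespace; newer ones the opposite.
cat > conftest.cc <<'EOF'
#include <string>
using namespace std;
int main() { string s = "ok"; return 0; }
EOF

if ${CXX:-g++} -c conftest.cc -o conftest.o 2>/dev/null; then
    echo "checking for namespace std... yes"   # would define HAVE_NAMESPACE_STD
else
    echo "checking for namespace std... no"
fi
rm -f conftest.cc conftest.o
```

A second probe of the same shape, compiling `#include <string.h>` versus `#include <string>`, would settle the ".h" question the same way.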
{
"msg_contents": "On Thu, 8 Jul 1999, Tom Lane wrote:\n\n> I think the proper solution is to add a configure-time test to see\n> whether a namespace declaration is needed. We could use configure to\n> see whether we need \".h\" on the end of C++ include file references, too.\n> (That's another thing that's going to be site-dependent for a while to\n> come.)\n\nHmmm. I'm running 2.7.2.1 here and in the case of <string> I have a\nfile called: /usr/include/g++/string <-- note there's no .h on the end.\nAm I being dense here and missing something or does this differ from what\nother folks have? There's also a number of other files without the .h\nextension in that directory. string includes std/string.h.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 8 Jul 1999 11:04:12 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PATCH for pgconnection.h "
},
{
"msg_contents": "Vince Vielhaber <[email protected]> writes:\n> On Thu, 8 Jul 1999, Tom Lane wrote:\n>> I think the proper solution is to add a configure-time test to see\n>> whether a namespace declaration is needed. We could use configure to\n>> see whether we need \".h\" on the end of C++ include file references, too.\n>> (That's another thing that's going to be site-dependent for a while to\n>> come.)\n\n> Hmmm. I'm running 2.7.2.1 here and in the case of <string> I have a\n> file called: /usr/include/g++/string <-- note there's no .h on the end.\n> Am I being dense here and missing something or does this differ from what\n> other folks have?\n\nSame as what I have, but I'm using gcc 2.7.2.2 so that's not real\nsurprising. I was under the impression that naming conventions for\nC++ library include files have changed at least once in the development\nof the C++ standards --- but I may be mistaken.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 08 Jul 1999 11:30:23 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PATCH for pgconnection.h "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> \n> >> + #ifdef __sun__\n> >> + #ifndef __GNUC__\n> >> + using namespace std;\n> >> + #endif\n> >> + #endif\n\nYes, never applied.\n\n> The above is really, really ugly, not to say broken, because neither\n> being on a Sun nor using gcc have anything to do with whether your\n> compiler handles namespaces. The problem we are looking at here is that\n> the C++ standard is a moving target, and some people have compilers that\n> are newer than others.\n> \n> I think the proper solution is to add a configure-time test to see\n> whether a namespace declaration is needed. We could use configure to\n> see whether we need \".h\" on the end of C++ include file references, too.\n> (That's another thing that's going to be site-dependent for a while to\n> come.)\n\nI smell TODO list:\n\n\t* Add configure test to check for C++ need for *.h and namespaces\n\nAdded.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 8 Jul 1999 23:50:28 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PATCH for pgconnection.h"
},
{
"msg_contents": "> On Thu, 8 Jul 1999, Tom Lane wrote:\n> \n> > I think the proper solution is to add a configure-time test to see\n> > whether a namespace declaration is needed. We could use configure to\n> > see whether we need \".h\" on the end of C++ include file references, too.\n> > (That's another thing that's going to be site-dependent for a while to\n> > come.)\n> \n> Hmmm. I'm running 2.7.2.1 here and in the case of <string> I have a\n> file called: /usr/include/g++/string <-- note there's no .h on the end.\n> Am I being dense here and missing something or does this differ from what\n> other folks have? There's also a number of other files without the .h\n> extension in that directory. string includes std/string.h.\n> \n\nWe remove .h, and someone complains, we add .h, and someone complains,\nbut fewer people. Configure is the answer.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 8 Jul 1999 23:56:36 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PATCH for pgconnection.h"
},
{
"msg_contents": "On Thu, 8 Jul 1999, Bruce Momjian wrote:\n\n> \n> > Bruce Momjian <[email protected]> writes:\n> > >> \n> > >> + #ifdef __sun__\n> > >> + #ifndef __GNUC__\n\nLooks like someone's mailer reinjected this one.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Mon, 26 Jul 1999 07:32:33 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PATCH for pgconnection.h"
},
{
"msg_contents": "> On Thu, 8 Jul 1999, Bruce Momjian wrote:\n> \n> > \n> > > Bruce Momjian <[email protected]> writes:\n> > > >> \n> > > >> + #ifdef __sun__\n> > > >> + #ifndef __GNUC__\n> \n> Looks like someone's mailer reinjected this one.\n> \n\nWe removed the __sun__ line because it was considered strange.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 26 Jul 1999 09:40:44 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PATCH for pgconnection.h"
},
{
"msg_contents": "Vince Vielhaber <[email protected]> writes:\n> Looks like someone's mailer reinjected this one.\n\nYes, along with five others that were also about two weeks old.\nI recall having noticed several other such incidents in the recent\npast, all bearing headers that indicate the retransmitted messages\noriginally went from the mail list to one <[email protected]>.\n\nStart of reinjection of this one (note date):\n\nReceived: from hardy-1.a2000.nl ([127.0.0.1]) by hardy-1.a2000.nl\n (Netscape Messaging Server 3.6) with SMTP id AAA5185;\n Mon, 26 Jul 1999 13:34:05 +0200\n\nImmediately prior Received: lines:\n\nReceived: from smtp1.a2000.nl ([192.168.17.19]) by hardy-1.a2000.nl\n (Netscape Messaging Server 3.6) with ESMTP id AAT19D0\n for <[email protected]>; Fri, 9 Jul 1999 06:48:41 +0200\nReceived: from hub.org ([209.167.229.1])\n\tby smtp1.a2000.nl with esmtp (Exim 2.02 #4)\n\tid 112SYe-0002lN-00\n\tfor [email protected]; Fri, 9 Jul 1999 06:46:44 +0200\nReceived: from hub.org (hub.org [209.167.229.1])\n\tby hub.org (8.9.3/8.9.3) with ESMTP id AAA39425;\n\tFri, 9 Jul 1999 00:40:25 -0400 (EDT)\n\t(envelope-from [email protected])\n\nThis morning I sent a polite note to [email protected], warning\nthem that they've got a problem with mail looping. It promptly bounced\nback with\n\n [email protected]:\n SMTP error from remote mailer after RCPT TO:\n <[email protected]>:\n host hardy-1.a2000.nl [192.168.17.13]:\n 550 Invalid recipient <[email protected]>\n\n(Sending to [email protected] instead probably won't help, since it\nMX's to the same machines.)\n\nI conclude that a2000.nl is run by a bunch of idiots who can't read\nRFCs, let alone operate a mail server competently. I expect that\nwe will continue to get blessed with regurgitated messages until Marc\npulls any a2000.nl addresses from the mailing lists :-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 26 Jul 1999 09:45:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Mail loop at a2000.nl (was Re: [HACKERS] PATCH for pgconnection.h)"
},
{
"msg_contents": "On Mon, 26 Jul 1999, Bruce Momjian wrote:\n\n> > On Thu, 8 Jul 1999, Bruce Momjian wrote:\n> > \n> > > \n> > > > Bruce Momjian <[email protected]> writes:\n> > > > >> \n> > > > >> + #ifdef __sun__\n> > > > >> + #ifndef __GNUC__\n> > \n> > Looks like someone's mailer reinjected this one.\n> > \n> \n> We removed the __sun__ line because it was considered strange.\n> \n> \n\nLook at the date of the message. Someone from a2000.nl is reinjecting\nmessages to the list. That's all my note was about. I've seen a few\nof 'em so far this morning.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Mon, 26 Jul 1999 09:46:21 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PATCH for pgconnection.h"
},
{
"msg_contents": "Tom Lane wrote:\n\n> Received: from hardy-1.a2000.nl ([127.0.0.1]) by hardy-1.a2000.nl\n> (Netscape Messaging Server 3.6) with SMTP id AAA5185;\n> Mon, 26 Jul 1999 13:34:05 +0200\n>\n> Immediately prior Received: lines:\n>\n> Received: from smtp1.a2000.nl ([192.168.17.19]) by hardy-1.a2000.nl\n> (Netscape Messaging Server 3.6) with ESMTP id AAT19D0\n> for <[email protected]>; Fri, 9 Jul 1999 06:48:41 +0200\n\n And look at the dates!\n\n>\n> This morning I sent a polite note to [email protected], warning\n> them that they've got a problem with mail looping. It promptly bounced\n> back with\n>\n> [email protected]:\n> SMTP error from remote mailer after RCPT TO:\n> <[email protected]>:\n> host hardy-1.a2000.nl [192.168.17.13]:\n> 550 Invalid recipient <[email protected]>\n\n I've also tried to send to [email protected].\n Result: unknown host :-)\n\n But the CC to [email protected] didn't bounce. Maybe\n he'll receive it and can forward it in HARDCOPY to his\n pEstmaster.\n\n>\n> (Sending to [email protected] instead probably won't help, since it\n> MX's to the same machines.)\n>\n> I conclude that a2000.nl is run by a bunch of idiots who can't read\n> RFCs, let alone operate a mail server competently. I expect that\n> we will continue to get blessed with regurgitated messages until Marc\n> pulls any a2000.nl addresses from the mailing lists :-(\n\n NC\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 26 Jul 1999 17:03:48 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: Mail loop at a2000.nl (was Re: [HACKERS] PATCH for\n pgconnection.h)"
},
{
"msg_contents": "\ngoing...going...gone...\n\nOn Mon, 26 Jul 1999, Tom Lane wrote:\n\n> Vince Vielhaber <[email protected]> writes:\n> > Looks like someone's mailer reinjected this one.\n> \n> Yes, along with five others that were also about two weeks old.\n> I recall having noticed several other such incidents in the recent\n> past, all bearing headers that indicate the retransmitted messages\n> originally went from the mail list to one <[email protected]>.\n> \n> Start of reinjection of this one (note date):\n> \n> Received: from hardy-1.a2000.nl ([127.0.0.1]) by hardy-1.a2000.nl\n> (Netscape Messaging Server 3.6) with SMTP id AAA5185;\n> Mon, 26 Jul 1999 13:34:05 +0200\n> \n> Immediately prior Received: lines:\n> \n> Received: from smtp1.a2000.nl ([192.168.17.19]) by hardy-1.a2000.nl\n> (Netscape Messaging Server 3.6) with ESMTP id AAT19D0\n> for <[email protected]>; Fri, 9 Jul 1999 06:48:41 +0200\n> Received: from hub.org ([209.167.229.1])\n> \tby smtp1.a2000.nl with esmtp (Exim 2.02 #4)\n> \tid 112SYe-0002lN-00\n> \tfor [email protected]; Fri, 9 Jul 1999 06:46:44 +0200\n> Received: from hub.org (hub.org [209.167.229.1])\n> \tby hub.org (8.9.3/8.9.3) with ESMTP id AAA39425;\n> \tFri, 9 Jul 1999 00:40:25 -0400 (EDT)\n> \t(envelope-from [email protected])\n> \n> This morning I sent a polite note to [email protected], warning\n> them that they've got a problem with mail looping. It promptly bounced\n> back with\n> \n> [email protected]:\n> SMTP error from remote mailer after RCPT TO:\n> <[email protected]>:\n> host hardy-1.a2000.nl [192.168.17.13]:\n> 550 Invalid recipient <[email protected]>\n> \n> (Sending to [email protected] instead probably won't help, since it\n> MX's to the same machines.)\n> \n> I conclude that a2000.nl is run by a bunch of idiots who can't read\n> RFCs, let alone operate a mail server competently. 
I expect that\n> we will continue to get blessed with regurgitated messages until Marc\n> pulls any a2000.nl addresses from the mailing lists :-(\n> \n> \t\t\tregards, tom lane\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 26 Jul 1999 13:53:39 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Mail loop at a2000.nl (was Re: [HACKERS] PATCH for\n pgconnection.h)"
},
{
"msg_contents": "\n> I conclude that a2000.nl is run by a bunch of idiots who can't read\n> RFCs, let alone operate a mail server competently. I expect that\n> we will continue to get blessed with regurgitated messages until Marc\n> pulls any a2000.nl addresses from the mailing lists :-(\n\nCould be that they're idiots, if so I hope they can find some competent people\nto help them out there. They're the TV cable operators for Amsterdam and surroundings\nand have probably about half a million houses connected to their network.\nSince about a year and a half (I guess) they're into internet over cable modems, and\nactually the bandwidth they deliver is pretty impressive (especially if you're\nused to 28k8 modems ;)\n\nMaarten\n\n-- \n\nMaarten Boekhold, [email protected]\nTIBCO Finance Technology Inc.\nThe Atrium\nStrawinskylaan 3051\n1077 ZX Amsterdam, The Netherlands\ntel: +31 20 3012158, fax: +31 20 3012358\nhttp://www.tibco.com\n",
"msg_date": "Tue, 27 Jul 1999 14:42:19 +0200",
"msg_from": "Maarten Boekhold <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Mail loop at a2000.nl (was Re: [HACKERS] PATCH for\n pgconnection.h)"
},
{
"msg_contents": "At 17:03 26-7-99 +0200, Jan Wieck wrote:\n> And look at the dates!\n>> This morning I sent a polite note to [email protected], warning\n>> them that they've got a problem with mail looping. It promptly bounced\n>> back with\n> But the CC to [email protected] didn't bounce. Maybe\n> he'll receive it and can forward it in HARDCOPY to his\n> pEstmaster.\n\nI'm really sorry that my ISP is doing wonderful :-) things to it's\nmail-servers \n\n>> I conclude that a2000.nl is run by a bunch of idiots who can't read\n>> RFCs, let alone operate a mail server competently. I expect that\n\nthey are a bunch of idiots and complaining doesn't help a bit, in the past\ncouple of weeks everyone in amsterdam who has cable internet, has had\nextreme trouble with email.\n\n>> we will continue to get blessed with regurgitated messages until Marc\n>> pulls any a2000.nl addresses from the mailing lists :-(\n\nIf the problem persists with a2000 messages looping back please let me know\nand I will unsubscribe from the list and resubscribe using another email\naddress\n\n\n",
"msg_date": "Sun, 01 Aug 1999 02:08:30 +0200",
"msg_from": "gravity <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Mail loop at a2000.nl (was Re: [HACKERS] PATCH for\n pgconnection.h)"
}
] |
[
{
"msg_contents": "After the discussion about implementing a flag that\nwould selectively disable fsynch on the pg_log file,\nI visited xact.c and tried a little test. \n\nThe code in RecordTransactionCommit looks essentially like\n(ignoring stuff related to leaks)\n\nFlushBufferPool /* flush and fsync the data blocks */\nTransactionIdCommit /* log the fact that the transaction's done */\nFlushBufferPool /* flush and fsync pg_log and whatever else\n has changed during this brief period of time */\n\nI just added a couple of lines of code that saves\ndisableFsync and sets it true before the second call\nto FlushBufferPool, restoring it to its original state\nafterwards.\n\nRunning without \"-F\", my disk is blessedly silent when\nI access my web pages that hit the database several times\nwith read-only selects used to customize the presentation\nto the user. \n\nCool!\n\nSo...does it sound like I'm doing the right thing?\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, and other goodies at\n http://donb.photo.net\n",
"msg_date": "Thu, 24 Jun 1999 11:59:17 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": true,
"msg_subject": "fsynch of pg_log write.."
},
{
"msg_contents": "Don Baccus wrote:\n> \n> FlushBufferPool /* flush and fsync the data blocks */\n> TransactionIdCommit /* log the fact that the transaction's done */\n> FlushBufferPool /* flush and fsync pg_log and whatever else\n> has changed during this brief period of time */\n> \n> I just added a couple of lines of code that saves\n> disableFsync and sets it true before the second call\n> to FlushBufferPool, restoring it to its original state\n> afterwards.\n\n...\n\n> So...does it sound like I'm doing the right thing?\n\nIt's bad in the case of concurrent writes, because of\nsecond FlushBufferPool \"flushes whatever else has changed during \nthis brief period of time\".\n\nRight way is just set some flag in WriteBuffer()/WriteNoReleaseBuffer()\nand don't do \n\nFlushBufferPool\nTransactionIdCommit\nFlushBufferPool\n\nat all when this flag is not setted.\n\nI'll do it for 6.5.1 if no one else...\n\nVadim\n",
"msg_date": "Fri, 25 Jun 1999 10:51:47 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] fsynch of pg_log write.."
}
] |
[
{
"msg_contents": "greetings postgres gurus:\n\ni've scoured the mail list logs and found many requests for assistance\nwith this problem:\n\n``i just built the new pgsql release and now psql quits with a\nsegmentation fault every time i run it''\n\nbut can not find any answer - maybe all these people worked it out on\ntheir own or abandoned postgres. does anyone have a hint? \n\ni'm guessing its something to do with versions of readline/glibc but don't\nhave a clue what to try upgrading/downgrading or patching to make\neverything work in harmony. everything builds without complaint and the\npostmaster appears to be running just fine - psql just segfaults.\n\ndoes anyone have a solution to build psql that worked for them? \n\ni'm building postgres ~6.5 from cvs on intel/pentium redhat linux boxes\nwith glibc 2.x (libc6) readline 2.2.1 and linux kernel 2.2.10+ using pgcc\n(egcs 1.1.2).\n\nprofuse thanks to anyone who may have a moment to suggest some help and\nCc: directly to my email address.\n\ncan't wait to try out the 6.5.x postgres features!\n\n--\nmailto:[email protected] sean dreilinger, mlis\n http://www.savvysearch.com http://durak.org/sean\n",
"msg_date": "Thu, 24 Jun 1999 17:44:32 -0700",
"msg_from": "sean dreilinger <[email protected]>",
"msg_from_op": true,
"msg_subject": "solution for psql segmentation fault ??"
}
] |
[
{
"msg_contents": "greetings postgres gurus:\n\ni've scoured the mail list logs and found many requests for assistance\nwith this problem:\n\n``i just built the new pgsql release and now psql quits with a\nsegmentation fault every time i run it''\n\nbut can not find any answer - maybe all these people worked it out on\ntheir own or abandoned postgres. does anyone have a hint? \n\ni'm guessing its something to do with versions of readline/glibc but don't\nhave a clue what to try upgrading/downgrading or patching to make\neverything work in harmony. everything builds without complaint and the\npostmaster appears to be running just fine - psql just segfaults.\n\ndoes anyone have a solution to build psql that worked for them? \n\ni'm building postgres ~6.5 from cvs on intel/pentium redhat linux boxes\nwith glibc 2.x (libc6) readline 2.2.1 and linux kernel 2.2.10+ using pgcc\n(egcs 1.1.2).\n\nprofuse thanks to anyone who may have a moment to suggest some help and\nCc: directly to my email address.\n\ncan't wait to try out the 6.5.x postgres features!\n\n--\nmailto:[email protected] sean dreilinger, mlis\n http://www.savvysearch.com http://durak.org/sean\n",
"msg_date": "Thu, 24 Jun 1999 17:51:14 -0700",
"msg_from": "sean dreilinger <[email protected]>",
"msg_from_op": true,
"msg_subject": "solution for psql segmentation fault ??"
},
{
"msg_contents": "sean dreilinger <[email protected]> writes:\n> i've scoured the mail list logs and found many requests for assistance\n> with this problem:\n> ``i just built the new pgsql release and now psql quits with a\n> segmentation fault every time i run it''\n\nYou have? This is the first I've heard of it ...\n\nCan you provide a gdb backtrace from the corefile?\n\n> i'm guessing its something to do with versions of readline/glibc but don't\n\nIf you suspect readline, there's a switch you can give to prevent psql\nfrom using readline --- does psql -n behave any differently?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 25 Jun 1999 09:21:23 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] solution for psql segmentation fault ?? "
},
{
"msg_contents": "> i've scoured the mail list logs and found many requests for assistance\n> with this problem:\n> ``i just built the new pgsql release and now psql quits with a\n> segmentation fault every time i run it''\n\nMethinks you are overstating the case a bit; you didn't need to, since\nwe would have tried to help anyway ;)\n\n> but can not find any answer - maybe all these people worked it out on\n> their own or abandoned postgres.\n\n??\n\n> i'm building postgres ~6.5 from cvs on intel/pentium redhat linux \n> boxes with glibc 2.x (libc6) readline 2.2.1 and linux kernel 2.2.10+ \n> using pgcc (egcs 1.1.2).\n\nA RH-5.2/glibc-2.0.x/linux-2.0.36 machine is the \"standard\" platform\nfor Postgres. I haven't bumped to the 2.2 kernel, but afaik there are\nno fundamental problems. Oleg in Russia was testing bleeding edge\nkernels and Postgres some time ago, but I'm pretty sure that things\nsettled down for him.\n\nThe almost deafening silence to your inquiry might indicate that folks\ncan't see what the problem might be, since they have been successful\nthemselves. I'm not sure what vintage cvs tree you are using; if it\npredates the official v6.5 release then all bets are off since some\nnewly reported bugs were being fixed (and re-fixed) almost up to the\nrelease date.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Sun, 27 Jun 1999 04:18:25 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] solution for psql segmentation fault ??"
},
{
"msg_contents": "Tom Lane wrote:\n> sean dreilinger <[email protected]> writes:\n> > i've scoured the mail list logs and found many requests for assistance\n> > with this problem:\n> > ``i just built the new pgsql release and now psql quits with a\n> > segmentation fault every time i run it''\n\n> You have? This is the first I've heard of it ...\n\nits in the archive, mentioned around the time of each new release, maybe\nsince 6.2?\n\n> > i'm guessing its something to do with versions of readline/glibc but don't\n> If you suspect readline, there's a switch you can give to prevent psql\n> from using readline --- does psql -n behave any differently?\n\nreadline circa redhat linux 5.1 was installed here, i built\nreadline-2.2.1-5.src.rpm from updates.redhat.com and psql from the\npostgres 6.5 cvs repository worked immediately without recompiling.\n\nthis evening i rebuilt the entire postgres distribution, everything runs\ngreat!\n\n> Can you provide a gdb backtrace from the corefile?\n\nif this would be helpful in developing postgres i'm willing to put back\nthe old readline and crash psql a few more times :-). \n\nthanks very much to the folks who took time to rsvp with $.02!\n\n--sean\n\n\n--\nmailto:[email protected] sean dreilinger, mlis\n http://www.savvysearch.com http://durak.org/sean\n",
"msg_date": "Sat, 26 Jun 1999 23:03:29 -0700",
"msg_from": "sean dreilinger <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] solution for psql segmentation fault ??"
}
] |
[
{
"msg_contents": "> Date: Thu, 24 Jun 1999 01:53:05 -0500\n> From: Jim Rowan <[email protected]> \n> Subject: trouble creating log table with rules\n> \n> I've read the docs in the programmers manual, and can create rules like this:\n> \n> CREATE RULE \"m_log_change\" AS ON UPDATE TO \"machine\"\n> do (\n> INSERT INTO machine_log (who, date, machnum, col, newval)\n> SELECT getpgusername(), 'now'::text, old.machnum,\n> \t 'host', new.host\n> WHERE (new.host != old.host) or \n> \t(old.host IS NOT NULL and new.host IS NULL) or\n> \t\t(old.host IS NULL and new.host IS NOT NULL);\n> \n> INSERT INTO machine_log (who, date, machnum, col, newval)\n> SELECT getpgusername(), 'now'::text, old.machnum,\n> \t 'serial_num_cpu', new.serial_num_cpu\n> WHERE (new.serial_num_cpu != old.serial_num_cpu) or \n> \t(old.serial_num_cpu IS NOT NULL and new.serial_num_cpu IS NULL) or\n> \t\t(old.serial_num_cpu IS NULL and new.serial_num_cpu IS NOT NULL);\n> );\n> \n> My big problem is that if I replicate this enough times to cover the fields I \n> want, I get this error:\n> \n> pqReadData() -- backend closed the channel unexpectedly.\n> This probably means the backend terminated abnormally\n> before or while processing the request.\n> We have lost the connection to the backend, so further processing is impossible. Terminating.\n> \n> Is there a way I can avoid this error? Is there a better way to code these\n> rules? \n\nHi,\n\nI've seen similar kinds of behaviour in 6.4.2 with\ntriggers/rules/procedures and so on where the backend would die randomly.\nI'm not sure if it is still in 6.5, I haven't used it yet. Have a look at\nthe error log from the postmaster and see if there is anything interesting\nin there and I might be able to help you some more here. Sometimes you\nmight get a BTP_CHAIN fault, or another one (I can't remember - I\nhaven't seen it in a while). 
The solution I found was just before adding\nyour procedures or whatever, do a VACUUM ANALYZE pg_proc, which will\nvacuum one of the internal system tables, and then it would work. I found\nthat without the vacuum, postgres would die every third or fourth time I\ntried to reload my triggers, etc.\n\nAlso, I haven't reported this yet (because I can't reproduce it) but every\nso often, I've found that you'll do the vacuum, and then it will return\n\"Blowaway_relation_buffers returned -2\" and the vacuum dies. This is\nreally bad, and so you would normally dump the data and reload, but you\ncan't do this for pg_proc. So the dbms is screwed and you have to reload\nthe whole thing. It turns out that one of the indices or the table itself\nhas this BTP_CHAIN problem. \n\nI did some experiments involving trying to trick postgres into allowing me\nto dump reload it (ie, create a new table called pg_proc_2, with the same\ndata and indices, and moving it into place but it won't let you do it to\nprotect itself.\n\nThe worst part with this kind of death is that my database is about 1.1 Gb\non disk, and so reloading is NOT something I want to have to do :)\n\nAnyone got any advice for this or know of a problem? As mentioned in\nanother email posted to the hackers list, I am getting lots of problems\nwith BTP_CHAIN problems and having to reload tables, which is not\nsomething I want to do during the day when staff are trying to use the\ndatabase and I have to shut it down. I've heard there is a patch for this\nbut I haven't got anything back on whether its ok to use it or not.\n\nbye,\nWayne\n\n------------------------------------------------------------------------------\nWayne Piekarski Tel: (08) 8221 5221\nResearch & Development Manager Fax: (08) 8221 5220\nSE Network Access Pty Ltd Mob: 0407 395 889\n222 Grote Street Email: [email protected]\nAdelaide SA 5000 WWW: http://www.senet.com.au\n\n",
"msg_date": "Fri, 25 Jun 1999 16:22:36 +0930 (CST)",
"msg_from": "Wayne Piekarski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trouble creating log table with rules"
},
{
"msg_contents": ">\n> > Date: Thu, 24 Jun 1999 01:53:05 -0500\n> > From: Jim Rowan <[email protected]>\n> > Subject: trouble creating log table with rules\n> >\n> > I've read the docs in the programmers manual, and can create rules like this:\n> >\n> > CREATE RULE \"m_log_change\" AS ON UPDATE TO \"machine\"\n> > do (\n> > INSERT INTO machine_log (who, date, machnum, col, newval)\n> > SELECT getpgusername(), 'now'::text, old.machnum,\n> > 'host', new.host\n> > WHERE (new.host != old.host) or\n> > (old.host IS NOT NULL and new.host IS NULL) or\n> > (old.host IS NULL and new.host IS NOT NULL);\n> >\n> > INSERT INTO machine_log (who, date, machnum, col, newval)\n> > SELECT getpgusername(), 'now'::text, old.machnum,\n> > 'serial_num_cpu', new.serial_num_cpu\n> > WHERE (new.serial_num_cpu != old.serial_num_cpu) or\n> > (old.serial_num_cpu IS NOT NULL and new.serial_num_cpu IS NULL) or\n> > (old.serial_num_cpu IS NULL and new.serial_num_cpu IS NOT NULL);\n> > );\n> >\n> > My big problem is that if I replicate this enough times to cover the fields I\n> > want, I get this error:\n> >\n> > pqReadData() -- backend closed the channel unexpectedly.\n> > This probably means the backend terminated abnormally\n> > before or while processing the request.\n> > We have lost the connection to the backend, so further processing is impossible. Terminating.\n> >\n\n You didn't tell us which version of PostgreSQL and (more\n important) if the error occurs during CREATE RULE or when\n updating machine.\n\n If it occurs during the CREATE RULE (what I hope for you) it\n doesn't happen in the rewriter itself. For the rule actions\n in the example above it isn't important in which order they\n are processed. 
So you could setup single action rules per\n field to get (mostly) the same results.\n\n If you can create the entire multi action rule but get the\n backend crash during UPDATE of machine, then it's a problem\n in the rewriter which I cannot imagine looking at your rules.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Fri, 25 Jun 1999 14:09:28 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: trouble creating log table with rules"
},
{
"msg_contents": "\nI said:\n\n> > CREATE RULE \"m_log_change\" AS ON UPDATE TO \"machine\"\n> > do (\n> > INSERT INTO machine_log (who, date, machnum, col, newval)\n> > SELECT getpgusername(), 'now'::text, old.machnum,\n> > 'host', new.host\n> > WHERE (new.host != old.host) or\n> > (old.host IS NOT NULL and new.host IS NULL) or\n> > (old.host IS NULL and new.host IS NOT NULL);\n> >\n> > INSERT INTO machine_log (who, date, machnum, col, newval)\n> > SELECT getpgusername(), 'now'::text, old.machnum,\n> > 'serial_num_cpu', new.serial_num_cpu\n> > WHERE (new.serial_num_cpu != old.serial_num_cpu) or\n> > (old.serial_num_cpu IS NOT NULL and new.serial_num_cpu IS NULL) or\n> > (old.serial_num_cpu IS NULL and new.serial_num_cpu IS NOT NULL);\n> > );\n> > My big problem is that if I replicate this enough times to cover the fields I\n> > want, I get this error:\n> >\n> > pqReadData() -- backend closed the channel unexpectedly.\n> > This probably means the backend terminated abnormally\n> > before or while processing the request.\n> > We have lost the connection to the backend, so further processing is impossible. Terminating.\n> >\n\nwieck> You didn't tell us which version of PostgreSQL and (more\nwieck> important) if the error occurs during CREATE RULE or when updating\nwieck> machine.\n\nDuhhh. sorry!\npostgresql 6.5; FreeBSD 3.2 stable - recent.\n\nThe error occurs during CREATE RULE.\n\nwieck> So you could setup single action rules per field to get (mostly)\nwieck> the same results. \n\nI previously had tried to do the same thing with many (more than 10) distinct \nsingle-action rules (sorry, don't have the exact syntax of what I used.. but\nit was very similar to this example.).\n\nIn that case, the CREATE RULE worked properly, but at update time it bombed\nout (again, don't have the detail anymore). The error message indicated that \nit thought there was a loop in my rules, something about \"more than 10\"... 
\nIn that case, as I remember, the backend did not crash -- it just declined to \nexecute the update.\n\nI'll try multiple multi-action rules to see if I can do what I want..\n\nIs this (the way I'm writing the rules) the best approach?\n\n\nJim Rowan\t\t\tDCSI\t DCE/DFS/Sysadmin Consulting\[email protected] (512) 374-1143\n",
"msg_date": "Fri, 25 Jun 1999 15:48:25 -0500",
"msg_from": "Jim Rowan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: trouble creating log table with rules "
}
] |
[
{
"msg_contents": "\tVadim wrote:\n> Right way is just set some flag in WriteBuffer()/WriteNoReleaseBuffer()\n> and don't do \n> \n> FlushBufferPool\n> TransactionIdCommit\n> FlushBufferPool\n> \n> at all when this flag is not setted.\n> \nWhile this is even much better for select only transactions\nit will still do the second flush for writers.\nThis flush is not needed for those, that are only interested\nin consistency, and don't care if the last transaction before\nsystem/backend crash is lost.\nIt can actually really only be the very last transaction reported\nok to any client, that is rolled back, since all other xactions\nwill be flushed by this same first FlushBufferPool \n(since BufferSync currently flushes all dirty Pages).\nSo IMHO a switch to avoid the second FlushBufferPool\nwould still be useful, even with this suggested fix.\n\nAndreas\n",
"msg_date": "Fri, 25 Jun 1999 10:18:57 +0200",
"msg_from": "Zeugswetter Andreas IZ5 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] fsynch of pg_log write.."
},
{
"msg_contents": "Zeugswetter Andreas IZ5 wrote:\n> \n> Vadim wrote:\n> > Right way is just set some flag in WriteBuffer()/WriteNoReleaseBuffer()\n> > and don't do\n> >\n> > FlushBufferPool\n> > TransactionIdCommit\n> > FlushBufferPool\n> >\n> > at all when this flag is not setted.\n> >\n> While this is even much better for select only transactions\n> it will still do the second flush for writers.\n> This flush is not needed for those, that are only interested\n> in consistency, and don't care if the last transaction before\n> system/backend crash is lost.\n> It can actually really only be the very last transaction reported\n> ok to any client, that is rolled back, since all other xactions\n> will be flushed by this same first FlushBufferPool\n> (since BufferSync currently flushes all dirty Pages).\n> So IMHO a switch to avoid the second FlushBufferPool\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> would still be useful, even with this suggested fix.\n\nI didn't object this.\n\nVadim\n",
"msg_date": "Mon, 28 Jun 1999 09:22:08 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] fsynch of pg_log write.."
}
] |
[
{
"msg_contents": "I am from Brazil. I need to obtain information about some topics\ndatabase postgres.\n\n Topics\n- Concorency control (controle de concorr�ncia)\n- Recuperation and atomicity (recupera��o e atomicidade)\n- Log (log)\n- Recuperation based in log (recupera��o baseada em log)\n- Recuperation fault (recupera��o de falhas)\n- Recuperation fault based in log (recupera��o de falhas baseadas em\nlog)\n\nplease, send to [email protected]\n-- \n-----------------------------------------------------------------------\n Rafael Roggia Friedrich \n [email protected] \n [email protected]\n http://frank.detec.unijui.tche.br/~rafael\n Departamento de Tecnologia - Inform�tica\n Uniju�-Universidade Regional do Noroeste do Estado do Rio Grande do Sul\n------------------------------------------------------------------------\n",
"msg_date": "Fri, 25 Jun 1999 12:22:30 +0100",
"msg_from": "rafael <[email protected]>",
"msg_from_op": true,
"msg_subject": "postgres"
}
] |
[
{
"msg_contents": "At 10:18 AM 6/25/99 +0200, Zeugswetter Andreas IZ5 wrote:\n>\tVadim wrote:\n>> Right way is just set some flag in WriteBuffer()/WriteNoReleaseBuffer()\n>> and don't do \n>> \n>> FlushBufferPool\n>> TransactionIdCommit\n>> FlushBufferPool\n>> \n>> at all when this flag is not setted.\n\n>While this is even much better for select only transactions\n>it will still do the second flush for writers.\n>This flush is not needed for those, that are only interested\n>in consistency, and don't care if the last transaction before\n>system/backend crash is lost.\n>It can actually really only be the very last transaction reported\n>ok to any client, that is rolled back, since all other xactions\n>will be flushed by this same first FlushBufferPool \n>(since BufferSync currently flushes all dirty Pages).\n>So IMHO a switch to avoid the second FlushBufferPool\n>would still be useful, even with this suggested fix.\n\nThat was what I was wondering when I saw Vadim's post,\nbut seeing as yesterday was the first time I'd ever\ndug into the Postgres source, I didn't really feel I\nwas on solid ground.\n\nObviously, skipping the entire flush/log id/flush cycle\nfor read only selects is the RIGHT thing to do. As is\nensuring that flushing the buffers only flushes those\nmodified by the transaction in question rather than\nflushing the world...\n\nFor now, though, I don't mind living with my simple\nhack if indeed it simply means I risk losing a transaction\nduring a crash. Or, actually, have simply increased that risk\n(the sequence flush/log id/CRASH is possible, after all).\n\nI'm a lot more comfortable with this than with the potential\ndamage done during a crash when fsync'ing both log file and\ndata is disabled, when the log can then be written by the\nsystem followed by a crash before the data tuples make it\nto disk.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, and other goodies at\n http://donb.photo.net\n",
"msg_date": "Fri, 25 Jun 1999 06:03:17 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] fsynch of pg_log write.."
}
] |
[
{
"msg_contents": "At 11:23 AM 6/25/99 +0200, Zeugswetter Andreas IZ5 wrote:\n\n>> So...does it sound like I'm doing the right thing?\n>> \n>> I don't see how this could be, since the first FlushBuffer still does \n>> the sync.\n\nOh-uh, you're right of course. The first select doesn't hit\nthe disk, the next one does during the first flugh. Silly me,\nwhere was my head?\n\nSigh.\n\nSo, Vadim's \"right way\" is also the \"only way\", it would appear.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, and other goodies at\n http://donb.photo.net\n",
"msg_date": "Fri, 25 Jun 1999 06:13:11 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] fsynch of pg_log write.."
}
] |
[
{
"msg_contents": "\n> For now, though, I don't mind living with my simple\n> hack if indeed it simply means I risk losing a transaction\n> during a crash. Or, actually, have simply increased that risk\n> (the sequence flush/log id/CRASH is possible, after all).\n> \nNo. This is why Vadim wants the second flush. If the machine \ncrashes like you describe the client will not be told \"transaction\ncommitted\". The problem is when a client is told something, \nthat is not true after a crash, which can happen if the second\nflush is left out.\n\n> I'm a lot more comfortable with this than with the potential\n> damage done during a crash when fsync'ing both log file and\n> data is disabled, when the log can then be written by the\n> system followed by a crash before the data tuples make it\n> to disk.\n> \nYes, this is a much better situation.\n\nAndreas\n",
"msg_date": "Fri, 25 Jun 1999 15:41:34 +0200",
"msg_from": "Zeugswetter Andreas IZ5 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] fsynch of pg_log write.."
},
{
"msg_contents": "> \n> > For now, though, I don't mind living with my simple\n> > hack if indeed it simply means I risk losing a transaction\n> > during a crash. Or, actually, have simply increased that risk\n> > (the sequence flush/log id/CRASH is possible, after all).\n> > \n> No. This is why Vadim wants the second flush. If the machine \n> crashes like you describe the client will not be told \"transaction\n> committed\". The problem is when a client is told something, \n> that is not true after a crash, which can happen if the second\n> flush is left out.\n\nBut commercial db's do that. They return 'done' for every query, while\nthey write they log files ever X seconds. We need to allow this. No\nreason to be more reliable than commercial db's by default. Or, at\nleast we need to give them the option because the speed advantage is\nhuge.\n\n\n> > I'm a lot more comfortable with this than with the potential\n> > damage done during a crash when fsync'ing both log file and\n> > data is disabled, when the log can then be written by the\n> > system followed by a crash before the data tuples make it\n> > to disk.\n> > \n> Yes, this is a much better situation.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 25 Jun 1999 09:55:49 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] fsynch of pg_log write.."
}
] |
[
{
"msg_contents": "\n> > committed\". The problem is when a client is told something, \n> > that is not true after a crash, which can happen if the second\n> > flush is left out.\n> \n> But commercial db's do that. They return 'done' for every query, while\n> they write they log files ever X seconds. We need to allow this. No\n> reason to be more reliable than commercial db's by default. Or, at\n> least we need to give them the option because the speed advantage is\n> huge.\n> \nI agree, but commercial db's don't do that. \nOracle does not (only on Linux).\nInformix only does it when you specially create the database\n(create database dada with buffered log;) I always use it :-)\nInformix has a log buffer, which is flushed at transaction commit\n(unbuffered logging) or when the buffer is full (buffered logging).\nNone of them do any \"every X seconds stuff\".\n\nAndreas\n",
"msg_date": "Fri, 25 Jun 1999 16:10:19 +0200",
"msg_from": "Zeugswetter Andreas IZ5 <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] fsynch of pg_log write.."
},
{
"msg_contents": "> \n> > > committed\". The problem is when a client is told something, \n> > > that is not true after a crash, which can happen if the second\n> > > flush is left out.\n> > \n> > But commercial db's do that. They return 'done' for every query, while\n> > they write they log files ever X seconds. We need to allow this. No\n> > reason to be more reliable than commercial db's by default. Or, at\n> > least we need to give them the option because the speed advantage is\n> > huge.\n> > \n> I agree, but commercial db's don't do that. \n> Oracle does not (only on Linux).\n> Informix only does it when you specially create the database\n> (create database dada with buffered log;) I always use it :-)\n> Informix has a log buffer, which is flushed at transaction commit\n> (unbuffered logging) or when the buffer is full (buffered logging).\n> None of them do any \"every X seconds stuff\".\n\nYes! All my clients use Informix buffered logging. Now, these are law\nfirms running their billing systems using Informix. The 'buffer full'\nwrite is kind of limited in that it does not give a good time limit on\nvulnerability. It has to do this because it wants to write a full tape\nblock. Newer versions worked around this with some kind of intermediate\nfix(not sure). Anyway, having a time limit in the fsync will give us\ngoo performance with a reliable/limited exposure to risk.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 25 Jun 1999 11:37:03 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] fsynch of pg_log write.."
}
] |
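The buffered vs. unbuffered logging trade-off discussed in this thread can be sketched with a toy model. This is illustrative Python only — the class, capacity, and counters are hypothetical stand-ins, not PostgreSQL's or Informix's actual buffer manager:

```python
class LogBuffer:
    """Toy model of a transaction log buffer.

    Unbuffered logging syncs on every commit; buffered logging syncs
    only when the buffer fills, issuing far fewer disk syncs at the
    cost of possibly losing the last few acknowledged commits in a
    crash.
    """

    def __init__(self, capacity, buffered):
        self.capacity = capacity
        self.buffered = buffered
        self.pending = []    # commit records not yet on disk
        self.on_disk = []    # commit records assumed durable
        self.syncs = 0       # stands in for fsync() calls

    def _flush(self):
        self.on_disk.extend(self.pending)
        self.pending.clear()
        self.syncs += 1

    def commit(self, xid):
        self.pending.append(xid)
        if not self.buffered or len(self.pending) >= self.capacity:
            self._flush()
```

With 100 commits, the unbuffered model issues 100 syncs while a 25-entry buffer issues only 4 — roughly the speed advantage the thread is arguing over, and also why only the tail end of acknowledged work is at risk after a crash.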
[
{
"msg_contents": ">Does postgres contain any kind of support for utf-8\n\nYes. If you build PostgreSQL with --with-mb, you could make a UTF-8\nencoded database.\n\n$ createdb -E UNIOCE\n\n>or does anybody have a C-source/example/description of conversion \n>of cyrillic from utf-8 to koi-8 ? \n>(rfc2044/2279 is already read by me ;-)) )\n\nGo www.unicode.org. If you were lucky you will find a conversion table\nfor koi-8.\n--\nTatsuo Ishii\n",
"msg_date": "Sun, 27 Jun 1999 00:55:55 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: utf-8 "
}
] |
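The conversion the poster asks about can be sketched without hand-writing a table: Python's standard codecs already embody the Unicode.org mapping for KOI8-R. This is a stand-alone illustration, not part of PostgreSQL's --with-mb support:

```python
def utf8_to_koi8r(data: bytes) -> bytes:
    """Re-encode UTF-8 bytes as KOI8-R (raises on characters KOI8-R lacks)."""
    return data.decode("utf-8").encode("koi8-r")


def koi8r_to_utf8(data: bytes) -> bytes:
    """Re-encode KOI8-R bytes as UTF-8."""
    return data.decode("koi8-r").encode("utf-8")
```

Because KOI8-R is a single-byte encoding, Cyrillic text shrinks from two UTF-8 bytes per letter to one, and the round trip is lossless for anything in the KOI8-R repertoire.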
[
{
"msg_contents": "Hi,\n\n\tI have a problem where an action in a PL function depends on a table\nwhich is determined at run-time. So the steps are the following:\n\n1. trigger gets passed a row\n2. table name is looked up in a reference table, depending on a field in\nrow\n3. rows are deleted from the table\n\nIn PL this cannot be done, as the execution plans are built once, so\nthat the tables are fixed. The only PL solution I've come up with is a\ngiant IF-THEN-ELSE statements, which is not terribly practical and hard\nto change. \n\nI thought of writing this in SPI, but the WHERE part of the statement is\ncausing me problems, as I cannot pass variable numbers of arguments. (I\ndid see the variable numbers of arguments for triggers, but did not know\nhow to use this for normal functions.) These deletes happen in several\nroutines and the number of rows deleted changes according to the\nroutine. So I would have to implement a separate SPI function for every\ncase. \n\nSo what I really need is some type of 'eval' in PL that builds the query\nplan at runtime, but I have no idea how hard this would be to implement.\nI was thinking along the lines of\n\nEVAL ''DELETE FROM % WHERE date > %'',tb_name,tb_date;\n\nI guess this probably opens a whole can of worms -- especially if the\nexecuted statement is a SELECT and you want to do something with the\nresult.\n\nIf anybody has any other suggestions on how to handle this situation, I\nwould be grateful.\n\nThanks,\n\nAdriaan\n",
"msg_date": "Mon, 28 Jun 1999 17:45:09 +0300",
"msg_from": "Adriaan Joubert <[email protected]>",
"msg_from_op": true,
"msg_subject": "Adding \"eval\" to pl?"
},
{
"msg_contents": ">\n> Hi,\n>\n> I have a problem where an action in a PL function depends on a table\n> which is determined at run-time. So the steps are the following:\n>\n> 1. trigger gets passed a row\n> 2. table name is looked up in a reference table, depending on a field in\n> row\n> 3. rows are deleted from the table\n>\n> In PL this cannot be done, as the execution plans are built once, so\n> that the tables are fixed. The only PL solution I've come up with is a\n> giant IF-THEN-ELSE statements, which is not terribly practical and hard\n> to change.\n\n This is entirely true for PL/pgSQL. But it isn't for PL/Tcl\n where you have control over which statements get\n prepared/saved and which not.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 28 Jun 1999 19:02:06 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Adding \"eval\" to pl?"
}
] |
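Jan's point is that the problem disappears once the language lets you compose the statement text at run time instead of baking the table name into a saved plan (later PL/pgSQL releases added an EXECUTE statement for exactly this). The same idea can be sketched outside the backend — hypothetical table names, and a plain string builder rather than any real driver API:

```python
# Tables the trigger is allowed to touch; checking against an
# allow-list stands in for the reference-table lookup and keeps an
# unexpected name out of the statement text.
ALLOWED_TABLES = {"machine_log", "billing_log"}


def build_delete(table, column="date"):
    """Compose a DELETE whose target table is chosen at run time.

    The value compared against `column` is left as a placeholder so it
    can still be bound as a parameter when the plan is prepared.
    """
    if table not in ALLOWED_TABLES:
        raise ValueError("unexpected table: %s" % table)
    return "DELETE FROM %s WHERE %s > $1" % (table, column)
```

The plan built from this text is per-call rather than saved, which is exactly the cost/flexibility trade the thread describes.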
[
{
"msg_contents": "We are two weeks after the 6.5 release, and the anticipated\nproblems/patches/porting issues never really materialized.\n\nI would suspect that fewer people are using PostgreSQL, but I know that\nis not true. Seems the two months of beta really got out the bugs.\n\nWhat do people want to do now? Does anyone want to start on 6.6? Do we\nwant to release 6.5.1? Should we relax for a few more weeks and bask in\nthe stable release?\n\nI am looking for comments.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 28 Jun 1999 22:10:56 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "6.5.1 status"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> We are two weeks after the 6.5 release, and the anticipated\n> problems/patches/porting issues never really materialized.\n> I would suspect that fewer people are using PostgreSQL, but I know that\n> is not true. Seems the two months of beta really got out the bugs.\n\nI think we did pretty well.\n\n> What do people want to do now? Does anyone want to start on 6.6? Do we\n> want to release 6.5.1? Should we relax for a few more weeks and bask in\n> the stable release?\n\nIMHO we do need to make a 6.5.1 release --- I know I have a couple of\nstupid bugs in 6.5 :-(. But it is not urgent; we could wait another\ncouple of weeks and see if anything else pops up.\n\nProbably the nearer decision is when to split the tree for 6.6.\nIs anyone ready to start on new stuff for 6.6? The argument that\nwe wanted to avoid double-patching is looking less compelling with\nso few patches, so I'm ready to see a tree split as soon as anyone\nhas anything to commit into 6.6 only...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 28 Jun 1999 23:46:11 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.5.1 status "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Bruce Momjian <[email protected]> writes:\n> > We are two weeks after the 6.5 release, and the anticipated\n> > problems/patches/porting issues never really materialized.\n> > I would suspect that fewer people are using PostgreSQL, but I know that\n> > is not true. Seems the two months of beta really got out the bugs.\n> \n> I think we did pretty well.\n\nFirst time for last two years -:)).\n\n> > What do people want to do now? Does anyone want to start on 6.6? Do we\n> > want to release 6.5.1? Should we relax for a few more weeks and bask in\n> > the stable release?\n> \n> IMHO we do need to make a 6.5.1 release --- I know I have a couple of\n> stupid bugs in 6.5 :-(. But it is not urgent; we could wait another\n> couple of weeks and see if anything else pops up.\n> \n> Probably the nearer decision is when to split the tree for 6.6.\n> Is anyone ready to start on new stuff for 6.6? The argument that\n> we wanted to avoid double-patching is looking less compelling with\n> so few patches, so I'm ready to see a tree split as soon as anyone\n> has anything to commit into 6.6 only...\n\nI have nothing yet. But I just made changes to avoid\ndisk writes for read-only transactions (now speed of\nselects is the same as with -F) - should I put it\nin 6.5.1 or wait for 6.6?\n\nVadim\n",
"msg_date": "Tue, 29 Jun 1999 12:04:16 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.5.1 status"
},
{
"msg_contents": "> > I think we did pretty well.\n> \n> First time for last two years -:)).\n\nYes, Tom, this is not typical for post-release. Your helping\ncertainly contributed to this quietness.\n\n> \n> > > What do people want to do now? Does anyone want to start on 6.6? Do we\n> > > want to release 6.5.1? Should we relax for a few more weeks and bask in\n> > > the stable release?\n> > \n> > IMHO we do need to make a 6.5.1 release --- I know I have a couple of\n> > stupid bugs in 6.5 :-(. But it is not urgent; we could wait another\n> > couple of weeks and see if anything else pops up.\n> > \n> > Probably the nearer decision is when to split the tree for 6.6.\n> > Is anyone ready to start on new stuff for 6.6? The argument that\n> > we wanted to avoid double-patching is looking less compelling with\n> > so few patches, so I'm ready to see a tree split as soon as anyone\n> > has anything to commit into 6.6 only...\n> \n> I have nothing yet. But I just made changes to avoid\n> disk writes for read-only transactions (now speed of\n> selects is the same as with -F) - should I put it\n> in 6.5.1 or wait for 6.6?\n\nIf you are happy with it, would be nice for 6.5.1. Should speed things\nup very much.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 29 Jun 1999 00:04:43 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] 6.5.1 status"
},
{
"msg_contents": "On Tue, 29 Jun 1999, Vadim Mikheev wrote:\n\n> Tom Lane wrote:\n> > \n> > Bruce Momjian <[email protected]> writes:\n> > > We are two weeks after the 6.5 release, and the anticipated\n> > > problems/patches/porting issues never really materialized.\n> > > I would suspect that fewer people are using PostgreSQL, but I know that\n> > > is not true. Seems the two months of beta really got out the bugs.\n> > \n> > I think we did pretty well.\n> \n> First time for last two years -:)).\n\nWe must be starting to know what we are doing, eh? :)\n\n> > > What do people want to do now? Does anyone want to start on 6.6? Do we\n> > > want to release 6.5.1? Should we relax for a few more weeks and bask in\n> > > the stable release?\n> > \n> > IMHO we do need to make a 6.5.1 release --- I know I have a couple of\n> > stupid bugs in 6.5 :-(. But it is not urgent; we could wait another\n> > couple of weeks and see if anything else pops up.\n> > \n> > Probably the nearer decision is when to split the tree for 6.6.\n> > Is anyone ready to start on new stuff for 6.6? The argument that\n> > we wanted to avoid double-patching is looking less compelling with\n> > so few patches, so I'm ready to see a tree split as soon as anyone\n> > has anything to commit into 6.6 only...\n> \n> I have nothing yet. But I just made changes to avoid\n> disk writes for read-only transactions (now speed of\n> selects is the same as with -F) - should I put it\n> in 6.5.1 or wait for 6.6?\n\nOpinion: if you feel safe, throw it in for v6.5.1. \n\nLet's scheduale a v6.5.1 for July 15th, and a 6.6 split for, let's say,\nMonday? That way those that want to can start in on 6.6, but we give a\ncouple of more weeks to \"relax\" for those that want to...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 29 Jun 1999 01:17:54 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.5.1 status"
},
{
"msg_contents": "The Hermit Hacker wrote:\n> \n> > I have nothing yet. But I just made changes to avoid\n> > disk writes for read-only transactions (now speed of\n> > selects is the same as with -F) - should I put it\n> > in 6.5.1 or wait for 6.6?\n> \n> Opinion: if you feel safe, throw it in for v6.5.1.\n\nWell, I do.\n\n> \n> Let's scheduale a v6.5.1 for July 15th, and a 6.6 split for, let's say,\n> Monday? That way those that want to can start in on 6.6, but we give a\n> couple of more weeks to \"relax\" for those that want to...\n\nOk for me.\n\nVadim\n",
"msg_date": "Tue, 29 Jun 1999 12:27:51 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.5.1 status"
},
{
"msg_contents": "> > First time for last two years -:)).\n> \n> We must be starting to know what we are doing, eh? :)\n\nInteresting outlook. :-)\n\n> Opinion: if you feel safe, throw it in for v6.5.1. \n> \n> Let's scheduale a v6.5.1 for July 15th, and a 6.6 split for, let's say,\n> Monday? That way those that want to can start in on 6.6, but we give a\n> couple of more weeks to \"relax\" for those that want to...\n\nSounds good.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 29 Jun 1999 00:30:28 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] 6.5.1 status"
},
{
"msg_contents": "> > > First time for last two years -:)).\n> > We must be starting to know what we are doing, eh? :)\n> Interesting outlook. :-)\n\nHmm. It is also the longest beta, longest period between releases, and\nlongest slip in release date. Not that those are bad; we apparently\ngot a solid release out of it.\n\nI still can't believe that Vadim pulled off his huge changes! btw, I\nupgraded a small production server at work which does plain-vanilla\nSQL stuff and found that the following was *really* all it took to\nupgrade from v6.4.2:\n\n pg_dumpall -z > file.pg_dumpall\n shutdown server\n change soft link to new tree\n build and install s/w\n initdb\n start new server\n copy pg_hba.conf\n psql < file.pg_dumpall\n\nAbout 10 minutes total elapsed time. Pretty impressive imho.\n\nI will have some patches for v6.5.1 (and v6.6, but they may be\nsuperceded during the cycle by Tom Lane's suggestions) to allow the\nPostgres packaged apps to be built as shared libraries. The patches\nare almost trivial, though the build sequence to actually get\ndynamically-linked apps is not.\n\nAlso, there is a new version of pgaccess which we could incorporate,\nand the ODBC driver could be refreshed. I could also fix a few typos\nin the docs...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 29 Jun 1999 05:13:32 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.5.1 status"
},
{
"msg_contents": "\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Bruce Momjian\n> Sent: Tuesday, June 29, 1999 11:11 AM\n> To: PostgreSQL-development\n> Subject: [HACKERS] 6.5.1 status\n> \n> \n> We are two weeks after the 6.5 release, and the anticipated\n> problems/patches/porting issues never really materialized.\n> \n> I would suspect that fewer people are using PostgreSQL, but I know that\n> is not true. Seems the two months of beta really got out the bugs.\n> \n> What do people want to do now? Does anyone want to start on 6.6? Do we\n> want to release 6.5.1? Should we relax for a few more weeks and bask in\n> the stable release?\n> \n> I am looking for comments.\n> \n\nI have 2 questions for 6.5.1.\n\n1. Currently we couldn't create an index on numeric type.\n Is it difficult to add numeric_ops ? \n\n2. I love Oracle TRUNCATE statement.\n Marcus Mascari [[email protected]] has already implemented\n this feature,though I haven't seen his implementation yet.\n If his implementation is right,could this new feature be added to \n 6.5.1 ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n \n",
"msg_date": "Wed, 30 Jun 1999 09:36:37 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] 6.5.1 status"
},
{
"msg_contents": "\nNeither of these two can be added to version 6.5.1...the minor releases\nare meant to be 'non-dump/initdb, bug fix only' releases, and, at least\nfor number 1, adding a numeric_ops would require a dump/reload ...\n\nAs for the TRUNCATE, take a look at the oracle_compat.c file in util/adt\nand supply a patch that will add the appropriate functionality, and I\ndon't see why it can't b eadded fo r6.6 ...\n\n\nOn Wed, 30 Jun 1999, Hiroshi Inoue wrote:\n\n> \n> \n> > -----Original Message-----\n> > From: [email protected]\n> > [mailto:[email protected]]On Behalf Of Bruce Momjian\n> > Sent: Tuesday, June 29, 1999 11:11 AM\n> > To: PostgreSQL-development\n> > Subject: [HACKERS] 6.5.1 status\n> > \n> > \n> > We are two weeks after the 6.5 release, and the anticipated\n> > problems/patches/porting issues never really materialized.\n> > \n> > I would suspect that fewer people are using PostgreSQL, but I know that\n> > is not true. Seems the two months of beta really got out the bugs.\n> > \n> > What do people want to do now? Does anyone want to start on 6.6? Do we\n> > want to release 6.5.1? Should we relax for a few more weeks and bask in\n> > the stable release?\n> > \n> > I am looking for comments.\n> > \n> \n> I have 2 questions for 6.5.1.\n> \n> 1. Currently we couldn't create an index on numeric type.\n> Is it difficult to add numeric_ops ? \n> \n> 2. I love Oracle TRUNCATE statement.\n> Marcus Mascari [[email protected]] has already implemented\n> this feature,though I haven't seen his implementation yet.\n> If his implementation is right,could this new feature be added to \n> 6.5.1 ?\n> \n> Regards.\n> \n> Hiroshi Inoue\n> [email protected]\n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Wed, 30 Jun 1999 00:16:55 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] 6.5.1 status"
},
{
"msg_contents": "> Neither of these two can be added to version 6.5.1...the minor releases\n> are meant to be 'non-dump/initdb, bug fix only' releases, and, at least\n> for number 1, adding a numeric_ops would require a dump/reload ...\n\nThis could be done as a contrib package for the v6.5.x series. Are you\ninterested in doing this?\n\n> As for the TRUNCATE, take a look at the oracle_compat.c file in util/adt\n> and supply a patch that will add the appropriate functionality, and I\n> don't see why it can't b eadded fo r6.6 ...\n\nIf TRUNCATE is some \"stringy function thing\", then that is the place\nto look. Isn't it some table deletion short-circuit capability\nthough?? If so, it is not as trivial...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 30 Jun 1999 13:12:15 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.5.1 status"
},
{
"msg_contents": ">\n> > Neither of these two can be added to version 6.5.1...the minor releases\n> > are meant to be 'non-dump/initdb, bug fix only' releases, and, at least\n> > for number 1, adding a numeric_ops would require a dump/reload ...\n>\n\nI see.\n\n> This could be done as a contrib package for the v6.5.x series. Are you\n> interested in doing this?\n>\n\nHmmm,I don't know the right way to do so.\n\nAFAIC it is necessary to insert new OID entries into pg_opclass.h and\npg_amproc.h . Is it right ? And is it all ?\nIf so,I would try.\nBTW how do I get new System OID ?\n\n> > As for the TRUNCATE, take a look at the oracle_compat.c file in util/adt\n> > and supply a patch that will add the appropriate functionality, and I\n> > don't see why it can't b eadded fo r6.6 ...\n>\n> If TRUNCATE is some \"stringy function thing\", then that is the place\n> to look. Isn't it some table deletion short-circuit capability\n> though?? If so, it is not as trivial...\n>\n\nTRUNCATE statement deletes all rows of the target table quickly and\ncould not be rollbacked.\nMarcus Mascari [[email protected]] posted a patch today.\nAt first glance his story seems right,though it needs more checking.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n\n",
"msg_date": "Thu, 1 Jul 1999 10:33:49 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] 6.5.1 status"
},
{
"msg_contents": "> > This could be done as a contrib package for the v6.5.x series. Are you\n> > interested in doing this?\n> Hmmm,I don't know the right way to do so.\n> AFAIC it is necessary to insert new OID entries into pg_opclass.h and\n> pg_amproc.h . Is it right ? And is it all ?\n> If so,I would try.\n> BTW how do I get new System OID ?\n\nNo, you can and should do this using the extensibility features of\nPostgres (to make it available in v6.5.x). The docs have an example\ndating back to the Chen/Jolly days on how. Look in the Programmer's\nGuide in the chapter called \"Interfacing Extensions To Indices\", which\nis also Chapter 36 in the integrated docs.\n\nThe other option is to develop patches to the .h files and have them\navailable when the next full release is out, in ~4-5 months.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 01 Jul 1999 03:35:43 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.5.1 status"
}
] |
[
{
"msg_contents": "grant/revoke does not work in NetBSD/m68k. This is due to the wrong\nassumption that sizeof(AclItem) is equal to 8 in all platforms. I am\ngoing to fix this by replacing all occurrence of sizeof(AclItem) to\nACLITEM_SIZE (newly defined as 8 in catalog/pg_type.h). See included\npatches. If there's no objection, I will commit them. Comments?\n---\nTatsuo Ishii\n\n--------------------------- cut here ----------------------------------\n*** pgsql/src/backend/utils/adt/acl.c~\tWed May 26 01:11:49 1999\n--- pgsql/src/backend/utils/adt/acl.c\tTue Jun 29 09:18:18 1999\n***************\n*** 235,241 ****\n \tif (!s)\n \t\telog(ERROR, \"aclitemin: null string\");\n \n! \taip = (AclItem *) palloc(sizeof(AclItem));\n \tif (!aip)\n \t\telog(ERROR, \"aclitemin: palloc failed\");\n \ts = aclparse(s, aip, &modechg);\n--- 235,241 ----\n \tif (!s)\n \t\telog(ERROR, \"aclitemin: null string\");\n \n! \taip = (AclItem *) palloc(ACLITEM_SIZE);\n \tif (!aip)\n \t\telog(ERROR, \"aclitemin: palloc failed\");\n \ts = aclparse(s, aip, &modechg);\n***************\n*** 445,460 ****\n \t\t{\t\t\t\t\t\t/* end */\n \t\t\tmemmove((char *) new_aip,\n \t\t\t\t\t(char *) old_aip,\n! \t\t\t\t\tnum * sizeof(AclItem));\n \t\t}\n \t\telse\n \t\t{\t\t\t\t\t\t/* middle */\n \t\t\tmemmove((char *) new_aip,\n \t\t\t\t\t(char *) old_aip,\n! \t\t\t\t\tdst * sizeof(AclItem));\n \t\t\tmemmove((char *) (new_aip + dst + 1),\n \t\t\t\t\t(char *) (old_aip + dst),\n! \t\t\t\t\t(num - dst) * sizeof(AclItem));\n \t\t}\n \t\tnew_aip[dst].ai_id = mod_aip->ai_id;\n \t\tnew_aip[dst].ai_idtype = mod_aip->ai_idtype;\n--- 445,460 ----\n \t\t{\t\t\t\t\t\t/* end */\n \t\t\tmemmove((char *) new_aip,\n \t\t\t\t\t(char *) old_aip,\n! \t\t\t\t\tnum * ACLITEM_SIZE);\n \t\t}\n \t\telse\n \t\t{\t\t\t\t\t\t/* middle */\n \t\t\tmemmove((char *) new_aip,\n \t\t\t\t\t(char *) old_aip,\n! \t\t\t\t\tdst * ACLITEM_SIZE);\n \t\t\tmemmove((char *) (new_aip + dst + 1),\n \t\t\t\t\t(char *) (old_aip + dst),\n! 
\t\t\t\t\t(num - dst) * ACLITEM_SIZE);\n \t\t}\n \t\tnew_aip[dst].ai_id = mod_aip->ai_id;\n \t\tnew_aip[dst].ai_idtype = mod_aip->ai_idtype;\n***************\n*** 493,499 ****\n \t\t\t}\n \t\t\tARR_DIMS(new_acl)[0] = num - 1;\n \t\t\t/* Adjust also the array size because it is used for memmove */\n! \t\t\tARR_SIZE(new_acl) -= sizeof(AclItem);\n \t\t\tbreak;\n \t\t}\n \t}\n--- 493,499 ----\n \t\t\t}\n \t\t\tARR_DIMS(new_acl)[0] = num - 1;\n \t\t\t/* Adjust also the array size because it is used for memmove */\n! \t\t\tARR_SIZE(new_acl) -= ACLITEM_SIZE;\n \t\t\tbreak;\n \t\t}\n \t}\n***************\n*** 556,571 ****\n \t\t{\t\t\t\t\t\t/* end */\n \t\t\tmemmove((char *) new_aip,\n \t\t\t\t\t(char *) old_aip,\n! \t\t\t\t\tnew_num * sizeof(AclItem));\n \t\t}\n \t\telse\n \t\t{\t\t\t\t\t\t/* middle */\n \t\t\tmemmove((char *) new_aip,\n \t\t\t\t\t(char *) old_aip,\n! \t\t\t\t\tdst * sizeof(AclItem));\n \t\t\tmemmove((char *) (new_aip + dst),\n \t\t\t\t\t(char *) (old_aip + dst + 1),\n! \t\t\t\t\t(new_num - dst) * sizeof(AclItem));\n \t\t}\n \t}\n \treturn new_acl;\n--- 556,571 ----\n \t\t{\t\t\t\t\t\t/* end */\n \t\t\tmemmove((char *) new_aip,\n \t\t\t\t\t(char *) old_aip,\n! \t\t\t\t\tnew_num * ACLITEM_SIZE);\n \t\t}\n \t\telse\n \t\t{\t\t\t\t\t\t/* middle */\n \t\t\tmemmove((char *) new_aip,\n \t\t\t\t\t(char *) old_aip,\n! \t\t\t\t\tdst * ACLITEM_SIZE);\n \t\t\tmemmove((char *) (new_aip + dst),\n \t\t\t\t\t(char *) (old_aip + dst + 1),\n! \t\t\t\t\t(new_num - dst) * ACLITEM_SIZE);\n \t\t}\n \t}\n \treturn new_acl;\n***************\n*** 682,688 ****\n \tChangeACLStmt *n = makeNode(ChangeACLStmt);\n \tchar\t\tstr[MAX_PARSE_BUFFER];\n \n! \tn->aclitem = (AclItem *) palloc(sizeof(AclItem));\n \n \t/* the grantee string is \"G <group_name>\", \"U <user_name>\", or \"ALL\" */\n \tif (grantee[0] == 'G')\t\t/* group permissions */\n--- 682,688 ----\n \tChangeACLStmt *n = makeNode(ChangeACLStmt);\n \tchar\t\tstr[MAX_PARSE_BUFFER];\n \n! 
\tn->aclitem = (AclItem *) palloc(ACLITEM_SIZE);\n \n \t/* the grantee string is \"G <group_name>\", \"U <user_name>\", or \"ALL\" */\n \tif (grantee[0] == 'G')\t\t/* group permissions */\n*** pgsql/src/include/catalog/pg_type.h~\tWed May 26 01:13:48 1999\n--- pgsql/src/include/catalog/pg_type.h\tTue Jun 29 09:13:46 1999\n***************\n*** 341,348 ****\n DATA(insert OID = 1025 ( _tinterval PGUID -1 -1 f b t \\054 0 704 array_in array_out array_in array_out i _null_ ));\n DATA(insert OID = 1026 ( _filename PGUID -1 -1 f b t \\054 0 605 array_in array_out array_in array_out i _null_ ));\n DATA(insert OID = 1027 ( _polygon\t PGUID -1 -1 f b t \\054 0 604 array_in array_out array_in array_out d _null_ ));\n- /* Note: the size of an aclitem needs to match sizeof(AclItem) in acl.h */\n DATA(insert OID = 1033 ( aclitem\t PGUID 8 -1 f b t \\054 0 0 aclitemin aclitemout aclitemin aclitemout i _null_ ));\n DESCR(\"access control list\");\n DATA(insert OID = 1034 ( _aclitem\t PGUID -1 -1 f b t \\054 0 1033 array_in array_out array_in array_out i _null_ ));\n DATA(insert OID = 1040 ( _macaddr\t PGUID -1 -1 f b t \\054 0 829 array_in array_out array_in array_out i _null_ ));\n--- 341,348 ----\n DATA(insert OID = 1025 ( _tinterval PGUID -1 -1 f b t \\054 0 704 array_in array_out array_in array_out i _null_ ));\n DATA(insert OID = 1026 ( _filename PGUID -1 -1 f b t \\054 0 605 array_in array_out array_in array_out i _null_ ));\n DATA(insert OID = 1027 ( _polygon\t PGUID -1 -1 f b t \\054 0 604 array_in array_out array_in array_out d _null_ ));\n DATA(insert OID = 1033 ( aclitem\t PGUID 8 -1 f b t \\054 0 0 aclitemin aclitemout aclitemin aclitemout i _null_ ));\n+ #define ACLITEM_SIZE 8\n DESCR(\"access control list\");\n DATA(insert OID = 1034 ( _aclitem\t PGUID -1 -1 f b t \\054 0 1033 array_in array_out array_in array_out i _null_ ));\n DATA(insert OID = 1040 ( _macaddr\t PGUID -1 -1 f b t \\054 0 829 array_in array_out array_in array_out i _null_ ));\n*** 
pgsql/src/include/utils/acl.h~\tSun Feb 14 08:22:14 1999\n--- pgsql/src/include/utils/acl.h\tTue Jun 29 09:17:40 1999\n***************\n*** 24,29 ****\n--- 24,30 ----\n \n #include <nodes/parsenodes.h>\n #include <utils/array.h>\n+ #include <catalog/pg_type.h>\n \n /*\n * AclId\t\tsystem identifier for the user, group, etc.\n***************\n*** 79,84 ****\n--- 80,92 ----\n /* Note: if the size of AclItem changes,\n change the aclitem typlen in pg_type.h */\n \n+ /* There used to be a wrong assumption that sizeof(AclItem) was\n+ always same in all platforms.\n+ Of course this is not true for certain platform (for example\n+ NetBSD/m68k). For now we use ACLITEM_SIZE defined in catalog/pg_type.h\n+ instead of sizeof(AclItem) -- 1999/6/29 Tatsuo\n+ */\n+ \n /*\n * The value of the first dimension-array element.\tSince these arrays\n * always have a lower-bound of 0, this is the same as the number of\n***************\n*** 94,100 ****\n #define ACL_NUM(ACL)\t\t\tARR_DIM0(ACL)\n #define ACL_DAT(ACL)\t\t\t((AclItem *) ARR_DATA_PTR(ACL))\n #define ACL_N_SIZE(N) \\\n! \t\t((unsigned) (ARR_OVERHEAD(1) + ((N) * sizeof(AclItem))))\n #define ACL_SIZE(ACL)\t\t\tARR_SIZE(ACL)\n \n /*\n--- 102,108 ----\n #define ACL_NUM(ACL)\t\t\tARR_DIM0(ACL)\n #define ACL_DAT(ACL)\t\t\t((AclItem *) ARR_DATA_PTR(ACL))\n #define ACL_N_SIZE(N) \\\n! \t\t((unsigned) (ARR_OVERHEAD(1) + ((N) * ACLITEM_SIZE)))\n #define ACL_SIZE(ACL)\t\t\tARR_SIZE(ACL)\n \n /*\n",
"msg_date": "Tue, 29 Jun 1999 12:01:57 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "acl problem in NetBSD/m68k"
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> grant/revoke does not work in NetBSD/m68k. This is due to the wrong\n> assumption that sizeof(AclItem) is equal to 8 in all platforms. I am\n> going to fix this by replacing all occurrence of sizeof(AclItem) to\n> ACLITEM_SIZE (newly defined as 8 in catalog/pg_type.h). See included\n> patches. If there's no objection, I will commit them. Comments?\n\nI do not like this patch at *all*. Why is sizeof(AclItem) not the\ncorrect thing to use? Replacing it with a hardwired \"8\" seems like\na step backwards --- not to mention a direct contradiction of what\nyou claim the patch is doing.\n\nPerhaps the real problem is that the AclItem struct definition needs\nmodification? Or maybe we need a way to put a machine-dependent size\ninto the pg_type entry for type aclitem? The latter seems like a\ngood thing to be able to do on general principles.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 28 Jun 1999 23:41:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] acl problem in NetBSD/m68k "
},
{
"msg_contents": "> Tatsuo Ishii <[email protected]> writes:\n> > grant/revoke does not work in NetBSD/m68k. This is due to the wrong\n> > assumption that sizeof(AclItem) is equal to 8 in all platforms. I am\n> > going to fix this by replacing all occurrence of sizeof(AclItem) to\n> > ACLITEM_SIZE (newly defined as 8 in catalog/pg_type.h). See included\n> > patches. If there's no objection, I will commit them. Comments?\n> \n> I do not like this patch at *all*. Why is sizeof(AclItem) not the\n> correct thing to use? Replacing it with a hardwired \"8\" seems like\n> a step backwards --- not to mention a direct contradiction of what\n> you claim the patch is doing.\n> \n> Perhaps the real problem is that the AclItem struct definition needs\n> modification? Or maybe we need a way to put a machine-dependent size\n> into the pg_type entry for type aclitem? The latter seems like a\n> good thing to be able to do on general principles.\n\nThis makes a lot of sense.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 28 Jun 1999 23:59:30 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] acl problem in NetBSD/m68k"
},
{
"msg_contents": ">> grant/revoke does not work in NetBSD/m68k. This is due to the wrong\n>> assumption that sizeof(AclItem) is equal to 8 in all platforms. I am\n>> going to fix this by replacing all occurrence of sizeof(AclItem) to\n>> ACLITEM_SIZE (newly defined as 8 in catalog/pg_type.h). See included\n>> patches. If there's no objection, I will commit them. Comments?\n>\n>I do not like this patch at *all*. Why is sizeof(AclItem) not the\n>correct thing to use?\n\nIn NetBSD/m68k sizeof(AclItem) = 6, not 8.\n\n>Replacing it with a hardwired \"8\" seems like\n>a step backwards --- not to mention a direct contradiction of what\n>you claim the patch is doing.\n\nIt's already hard wired in pg_type.h, isn't it.\n\n>Perhaps the real problem is that the AclItem struct definition needs\n>modification? Or maybe we need a way to put a machine-dependent size\n>into the pg_type entry for type aclitem? The latter seems like a\n>good thing to be able to do on general principles.\n\nGlad to hear you have better idea. Anyway, NetBSD/m68k users need some \nway to fix the problem now, since the problem seems very serious.\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 29 Jun 1999 13:17:06 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] acl problem in NetBSD/m68k "
},
{
"msg_contents": "One item on this. Let's try to get a proper fix that does not require\nan initdb for 6.5.1 for m68 users.\n\n> >> grant/revoke does not work in NetBSD/m68k. This is due to the wrong\n> >> assumption that sizeof(AclItem) is equal to 8 in all platforms. I am\n> >> going to fix this by replacing all occurrence of sizeof(AclItem) to\n> >> ACLITEM_SIZE (newly defined as 8 in catalog/pg_type.h). See included\n> >> patches. If there's no objection, I will commit them. Comments?\n> >\n> >I do not like this patch at *all*. Why is sizeof(AclItem) not the\n> >correct thing to use?\n> \n> In NetBSD/m68k sizeof(AclItem) = 6, not 8.\n> \n> >Replacing it with a hardwired \"8\" seems like\n> >a step backwards --- not to mention a direct contradiction of what\n> >you claim the patch is doing.\n> \n> It's already hard wired in pg_type.h, isn't it.\n> \n> >Perhaps the real problem is that the AclItem struct definition needs\n> >modification? Or maybe we need a way to put a machine-dependent size\n> >into the pg_type entry for type aclitem? The latter seems like a\n> >good thing to be able to do on general principles.\n> \n> Glad to hear you have better idea. Anyway, NetBSD/m68k users need some \n> way to fix the problem now, since the problem seems very serious.\n> --\n> Tatsuo Ishii\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 29 Jun 1999 00:29:35 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] acl problem in NetBSD/m68k"
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n>> I do not like this patch at *all*. Why is sizeof(AclItem) not the\n>> correct thing to use?\n\n> In NetBSD/m68k sizeof(AclItem) = 6, not 8.\n\nOh, I see: the struct contains int32, uint8, uint8, and so it will\nbe padded to a multiple of int32's alignment requirement --- which\nis 4 most places but only 2 on m68k.\n\n>> Perhaps the real problem is that the AclItem struct definition needs\n>> modification? Or maybe we need a way to put a machine-dependent size\n>> into the pg_type entry for type aclitem? The latter seems like a\n>> good thing to be able to do on general principles.\n>\n> Glad to hear you have better idea. Anyway, NetBSD/m68k users need some \n> way to fix the problem now, since the problem seems very serious.\n\nThere are two ways we could attack this: (1) put a \"pad\" field into\nstruct AclItem (prolly two uint8s) to try to ensure that compilers\nwould think it is 8 bytes long, or (2) make the size field for aclitem\nin pg_type.h read \"sizeof(AclItem)\". I think the latter is a better\nlong-term solution, because it eliminates having to try to guess\nwhat a compiler will do with a struct declaration. But there are\nseveral possible counterarguments:\n\n* It might require patching the scripts that read pg_type.h --- I\nam not sure if they'd work unmodified.\n\n* We'd either need to #include acl.h into pg_type.h or push the\ndeclarations for AclItem into some more-widely-used header.\n\n* In theory this would represent an initdb change and couldn't\nbe applied before 6.6. In practice, Postgres isn't working right\nnow on any platform where sizeof(AclItem) != 8, so initdb would\n*not* be needed for any working installation.\n\nI don't think any of these counterarguments is a big deal, but\nmaybe someone else will have a different opinion.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 29 Jun 1999 00:46:03 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] acl problem in NetBSD/m68k "
},
{
"msg_contents": ">One item on this. Let's try to get a proper fix that does not require\n>an initdb for 6.5.1 for m68 users.\n\nOk, no initdb means we cannot change the length of data type aclitem\n(currently 8). I will propose another patch soon (probably change the\nAclItem structure).\n\nBTW, I believe Linux/m68k has the same problem. Can someone confirm\nthis?\n\ngrant insert on table to somebody;\n\\z\n\nshows some strange output on NetBSD/m68k.\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 29 Jun 1999 13:50:48 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] acl problem in NetBSD/m68k "
},
{
"msg_contents": "> There are two ways we could attack this: (1) put a \"pad\" field into\n> struct AclItem (prolly two uint8s) to try to ensure that compilers\n> would think it is 8 bytes long, or (2) make the size field for aclitem\n> in pg_type.h read \"sizeof(AclItem)\". I think the latter is a better\n> long-term solution, because it eliminates having to try to guess\n> what a compiler will do with a struct declaration. But there are\n> several possible counterarguments:\n> \n> * It might require patching the scripts that read pg_type.h --- I\n> am not sure if they'd work unmodified.\n> \n> * We'd either need to #include acl.h into pg_type.h or push the\n> declarations for AclItem into some more-widely-used header.\n> \n> * In theory this would represent an initdb change and couldn't\n> be applied before 6.6. In practice, Postgres isn't working right\n> now on any platform where sizeof(AclItem) != 8, so initdb would\n> *not* be needed for any working installation.\n> \n> I don't think any of these counterarguments is a big deal, but\n> maybe someone else will have a different opinion.\n\nMy guess is that we are looking at different solutions for 6.5.1 and\n6.6. A good argument for a source tree split.\n\nCurrently, initdb runs through pg_type.h using sed/awk, so it can't\nsee any of the sizeof() defines. One hokey solution would be to have\nthe compile process run a small C program that dumps out the acl size\ninto a file, and have initdb pick up that. That is a terrible solution,\nthough. I guess we don't have any other 'struct' data types that need\nto know the size of the struct on a give OS. Maybe padding with an\nAssert() to make sure it stays at the fixed size we specify is a good\nsolution.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 29 Jun 1999 01:07:33 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] acl problem in NetBSD/m68k"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> There are two ways we could attack this: (1) put a \"pad\" field into\n>> struct AclItem (prolly two uint8s) to try to ensure that compilers\n>> would think it is 8 bytes long, or (2) make the size field for aclitem\n>> in pg_type.h read \"sizeof(AclItem)\". I think the latter is a better\n>> long-term solution, because it eliminates having to try to guess\n>> what a compiler will do with a struct declaration.\n\n> Currently, initdb runs through pg_type.h using sed/awk, so it can't\n> see any of the sizeof() defines.\n\nHmm, that does put a bit of a crimp in the idea :-(\n\n> I guess we don't have any other 'struct' data types that need\n> to know the size of the struct on a give OS.\n\nRight now I think all the other ones are either single-type structs (eg\npoint is two float8s, so no padding) or varlena. But this is something\nthat will come up again, I foresee...\n\n> Maybe padding with an Assert() to make sure it stays at the fixed size\n> we specify is a good solution.\n\nI agree, that's probably an OK patch for now. When we have more than\none such type it'll probably be time to bite the bullet and implement\na clean solution.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 29 Jun 1999 10:05:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] acl problem in NetBSD/m68k "
},
{
"msg_contents": "Then <[email protected]> spoke up and said:\n> > There are two ways we could attack this: (1) put a \"pad\" field into\n> > struct AclItem (prolly two uint8s) to try to ensure that compilers\n> > would think it is 8 bytes long, or (2) make the size field for aclitem\n> > in pg_type.h read \"sizeof(AclItem)\". I think the latter is a better\n> > long-term solution, because it eliminates having to try to guess\n> > what a compiler will do with a struct declaration. But there are\n> > several possible counterarguments:\n> \n> Currently, initdb runs through pg_type.h using sed/awk, so it can't\n> see any of the sizeof() defines. One hokey solution would be to have\n> the compile process run a small C program that dumps out the acl size\n> into a file, and have initdb pick up that. That is a terrible solution,\n> though. I guess we don't have any other 'struct' data types that need\n> to know the size of the struct on a give OS. Maybe padding with an\n> Assert() to make sure it stays at the fixed size we specify is a good\n> solution.\n\nPerhaps it would be easier to pipe the output of cpp on pg_type.h thru\nthe awk/sed script? This would have the added advantage of making\nother system-dependent changes to pg_type.h easier.\n\n-- \n=====================================================================\n| JAVA must have been developed in the wilds of West Virginia. |\n| After all, why else would it support only single inheritance?? |\n=====================================================================\n| Finger [email protected] for my public key. |\n=====================================================================",
"msg_date": "29 Jun 1999 10:52:34 -0400",
"msg_from": "Brian E Gallew <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] acl problem in NetBSD/m68k"
},
{
"msg_contents": "\nDid we ever fix this?\n\n\n\n> >> grant/revoke does not work in NetBSD/m68k. This is due to the wrong\n> >> assumption that sizeof(AclItem) is equal to 8 in all platforms. I am\n> >> going to fix this by replacing all occurrence of sizeof(AclItem) to\n> >> ACLITEM_SIZE (newly defined as 8 in catalog/pg_type.h). See included\n> >> patches. If there's no objection, I will commit them. Comments?\n> >\n> >I do not like this patch at *all*. Why is sizeof(AclItem) not the\n> >correct thing to use?\n> \n> In NetBSD/m68k sizeof(AclItem) = 6, not 8.\n> \n> >Replacing it with a hardwired \"8\" seems like\n> >a step backwards --- not to mention a direct contradiction of what\n> >you claim the patch is doing.\n> \n> It's already hard wired in pg_type.h, isn't it.\n> \n> >Perhaps the real problem is that the AclItem struct definition needs\n> >modification? Or maybe we need a way to put a machine-dependent size\n> >into the pg_type entry for type aclitem? The latter seems like a\n> >good thing to be able to do on general principles.\n> \n> Glad to hear you have better idea. Anyway, NetBSD/m68k users need some \n> way to fix the problem now, since the problem seems very serious.\n> --\n> Tatsuo Ishii\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 7 Jul 1999 22:06:41 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] acl problem in NetBSD/m68k"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Did we ever fix this?\n\nWe agreed what to do: add padding field(s) to struct AclItem and\nadd an Assert() somewhere that would check that sizeof(AclItem) is 8,\nwhile leaving the bulk of the code using sizeof(AclItem) rather than\na #define constant.\n\nBut it doesn't look like it got done yet.\n\n\t\t\tregards, tom lane\n\n>>>>> grant/revoke does not work in NetBSD/m68k. This is due to the wrong\n>>>>> assumption that sizeof(AclItem) is equal to 8 in all platforms. I am\n>>>>> going to fix this by replacing all occurrence of sizeof(AclItem) to\n>>>>> ACLITEM_SIZE (newly defined as 8 in catalog/pg_type.h). See included\n>>>>> patches. If there's no objection, I will commit them. Comments?\n>>>> \n>>>> I do not like this patch at *all*. Why is sizeof(AclItem) not the\n>>>> correct thing to use?\n>> \n>> In NetBSD/m68k sizeof(AclItem) = 6, not 8.\n>> \n>>>> Replacing it with a hardwired \"8\" seems like\n>>>> a step backwards --- not to mention a direct contradiction of what\n>>>> you claim the patch is doing.\n>> \n>> It's already hard wired in pg_type.h, isn't it.\n>> \n>>>> Perhaps the real problem is that the AclItem struct definition needs\n>>>> modification? Or maybe we need a way to put a machine-dependent size\n>>>> into the pg_type entry for type aclitem? The latter seems like a\n>>>> good thing to be able to do on general principles.\n>> \n>> Glad to hear you have better idea. Anyway, NetBSD/m68k users need some \n>> way to fix the problem now, since the problem seems very serious.\n>> --\n>> Tatsuo Ishii\n",
"msg_date": "Thu, 08 Jul 1999 09:56:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] acl problem in NetBSD/m68k "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Did we ever fix this?\n> \n> We agreed what to do: add padding field(s) to struct AclItem and\n> add an Assert() somewhere that would check that sizeof(AclItem) is 8,\n> while leaving the bulk of the code using sizeof(AclItem) rather than\n> a #define constant.\n> \n> But it doesn't look like it got done yet.\n\nOK, I have added the needed padding, and added an Assert, with comments.\n\nTatsuo, can you check the problem platform please?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 8 Jul 1999 23:32:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] acl problem in NetBSD/m68k"
},
{
"msg_contents": ">OK, I have added the needed padding, and added an Assert, with comments.\n>\n>Tatsuo, can you check the problem platform please?\n\nThanks. I should have done it myself, but didn't have time for it.\nPlease let me know when you finish the job. I will check on a\nNetBSD/m68 machine.\n--\nTatsuo Ishii\n",
"msg_date": "Fri, 09 Jul 1999 13:40:07 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] acl problem in NetBSD/m68k "
}
] |
[
{
"msg_contents": "I'm leaving tomorrow morning and won't be back before July 16th. So if there\nis a problem with ecpg you either have to fix it yourself or wait for my\nreturn. :-)\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Tue, 29 Jun 1999 12:30:10 +0200",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Leaving for vacation"
}
] |
[
{
"msg_contents": "Folks,\n\nNot sure if this is the right place to request this, but here are some\nthings I, as a satisfied user of PostgreSQL, would like to see done (and\nI'd be glad to help where I can). All of these are just suggestions\ngeared to the care and feeding of the PostgreSQL user community.\n\n1. Update the comparison chart at\nhttp://www.postgresql.org/comp-comparison.html. This is important for\nthose of us who must justify our choice of PostgreSQL to clients,\nsupervisors or funding agencies. Suggestion: add Informix and MySQL and\ndrop BeagleSQL and MiniSQL.\n\n2. Post a schedule for future releases. This is important for those of\nus who want to know when -- if ever -- we can start to consider\nPostgreSQL as a solution for projects that require features that are not\nyet part of PostgreSQL (e.g. replication), and when we should think\nabout upgrading our PostgreSQL installations. It is also crucial to let\nprospective users know that Postgres is under active development. I\nknow there is a todo list somewhere but I think the schedule needs to be\nmore prominent on the web site.\n\n3. Fix the PostgreSQL user gallery (linked from\nhttp://www.postgresql.org/helpus.html).\n\n4. Provide a better feature request method. Mailing lists are a great\nstart. 
But I'd like to know how many people are requesting which\nfeatures, whether there is a work-around, if there is a documentation or\na terminology issue that causes people to continue to request features\nthat are already in PostgreSQL, and what people have decided to do\n(upgrade to later version, go with another database, redesign their\nsystem, etc.).\n\nI think two tables would capture this information: one containing\nfeature, which release (if any) of PostgreSQL supports or will support\nthe feature, work-around, documentation issue, and terminology issue;\nand the other containing reference to feature, name and address of\nperson requesting feature, why feature is needed, and how person\nresolved the feature request. I assume the PostgreSQL web site can be\nbacked by a PostgreSQL database. Just to clarify, these tables would\ncapture feedback from users (via a web form or e-mail messages) in a\nmore structured and detailed format than a mailing list or the current\ntodo list, and provide a way for PostgreSQL hackers to \"close out\"\nfeature requests.\n\n5. Install a bug tracking system. I guess the todo list is working\npretty well because the quality of the latest release is very good, but\nI haven't been able to figure out where else to search for things that look\nlike bugs to me, except against the mailing lists. Often the discussion\nof a bug on the (many) mailing lists morphs into something else without\nappearing on the todo list and I'm left unsure if the bug has been fixed\nor not. As a user relying on PostgreSQL, I'd feel better if the method\nused to track bugs was more centralized, transparent and structured.\n\nMaybe some of this stuff can be addressed by the new commercial support\nfor PostgreSQL.\n\nAll in all, PostgreSQL is making great strides and works well. Keep up\nthe good work!\n\nFred Horch\n",
"msg_date": "Tue, 29 Jun 1999 14:16:12 -0400",
"msg_from": "Fred Wilson Horch <[email protected]>",
"msg_from_op": true,
"msg_subject": "User requests now that 6.5 is out"
},
{
"msg_contents": "On Tue, 29 Jun 1999, Fred Wilson Horch wrote:\n\n> Folks,\n> \n> Not sure if this is the right place to request this, but here are some\n> things I, as a satisfied user of PostgreSQL, would like to see done (and\n> I'd be glad to help where I can). All of these are just suggestions\n> geared to the care and feeding of the PostgreSQL user community.\n> \n> 1. Update the comparison chart at\n> http://www.postgresql.org/comp-comparison.html. This is important for\n> those of us who must justify our choice of PostgreSQL to clients,\n> supervisors or funding agencies. Suggestion: add Informix and MySQL and\n> drop BeagleSQL and MiniSQL.\n\nMySQL is *not* an RDBMS...our comparison chart compares RDBMSs...\n\n> 2. Post a schedule for future releases. This is important for those of\n> us who want to know when -- if ever -- we can start to consider\n> PostgreSQL as a solution for projects that require features that are not\n> yet part of PostgreSQL (e.g. replication), and when we should think\n> about upgrading our PostgreSQL installations. It is also crucial to let\n> prospective users know that Postgres is under active development. I\n> know there is a todo list somewhere but I think the schedule needs to be\n> more prominent on the web site.\n\n'schedules' are *generally* accepted as being 3 months of development\nplus 1 of testing, so a 4 month release schedule. More realistically,\nit's slightly longer, with this one being the most \"out of sync\" yet, but\na lot of good came out of that, IMHO...\n\n > > 3. Fix the PostgreSQL user gallery (linked from\n> http://www.postgresql.org/helpus.html).\n\nWorking on it...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 29 Jun 1999 15:28:07 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] User requests now that 6.5 is out"
},
{
"msg_contents": "> Folks,\n> \n> Not sure if this is the right place to request this, but here are some\n> things I, as a satisfied user of PostgreSQL, would like to see done (and\n> I'd be glad to help where I can). All of these are just suggestions\n> geared to the care and feeding of the PostgreSQL user community.\n\nThese are all good ideas. The problem is getting someone to devote the\ntime to it. We normally focus on announcing features as they are\ncompleted, not tracking features and request counts. They would be of\nvalue, but we have to weigh the value against actual development time.\n\nIt would certainly be nice to have all the things you mention, but\nconsidering our time is limited, I think we are properly allocating the\ntime we have.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 29 Jun 1999 14:31:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] User requests now that 6.5 is out"
},
{
"msg_contents": "Fred Wilson Horch <[email protected]> writes:\n> 2. Post a schedule for future releases. This is important for those of\n> us who want to know when -- if ever -- we can start to consider\n> PostgreSQL as a solution for projects that require features that are not\n> yet part of PostgreSQL (e.g. replication), and when we should think\n> about upgrading our PostgreSQL installations.\n\nThis requires a degree of community agreement about what to do next\nthat I doubt you will see coming to pass around here. Check the hackers\narchives from a couple weeks ago to observe my dismal failure at\ncreating a consensus on goals for 6.6 --- never mind further-out\nreleases.\n\nReality is that features get added when someone is sufficiently\nmotivated to do them (and doesn't have anything else he considers\nhigher priority). Many things are on people's to-do lists, but\nI think trying to make a schedule saying \"this feature will be in\nrelease such-and-such\" would be an exercise in wishful thinking.\nAt this point I doubt we could even say what will be in 6.6 with\nany great confidence.\n\n> 4. Provide a better feature request method.\n\nThis might be a worthwhile idea. Again, though, I think most of the\ndevelopers are driven by what they personally need and/or find\ninteresting to work on more than by the volume of requests for a\ngiven feature. What would be valuable would be the ready availability\nof the secondary documentation aspects you mention:\n\n> But I'd like to know how many people are requesting which\n> features, whether there is a work-around, if there is a documentation or\n> a terminology issue that causes people to continue to request features\n> that are already in PostgreSQL, and what people have decided to do\n\nsince that would (I hope) cut down repetition on the mailing lists.\n\n> 5. Install a bug tracking system.\n\nWe desperately need a better system than we have, IMHO; the visibility\nof bug status is just horrible. But finding the manpower to set up\na better system is a problem :-(\n\nSince some folks have mentioned possible sources of bug-tracking\nsystems, I'll suggest Mozilla's Bugzilla and related software as\nanother thing worth looking at, if anyone is feeling motivated to\ngo look...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 29 Jun 1999 18:10:49 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] User requests now that 6.5 is out "
},
{
"msg_contents": "On Tue, 29 Jun 1999, Tom Lane wrote:\n\n> > 5. Install a bug tracking system.\n> \n> We desperately need a better system than we have, IMHO; the visibility\n> of bug status is just horrible. But finding the manpower to set up\n> a better system is a problem :-(\n> \n> Since some folks have mentioned possible sources of bug-tracking\n> systems, I'll suggest Mozilla's Bugzilla and related software as\n> another thing worth looking at, if anyone is feeling motivated to\n> go look...\n\nSaw that one, but it uses a MySQL backend, and, for some very odd reason,\nI'm not willing to install that on my servr :) Anyone want to look at\nwhat it would take to make use of PostgreSQL?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 29 Jun 1999 21:00:46 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] User requests now that 6.5 is out "
},
{
"msg_contents": "The Hermit Hacker wrote:\n\n> MySQL is *not* an RDBMS...our comparision chart compares \n> RDBMSs...\n\nI don't know much about MySQL. Why isn't it a RDBMS?\n\nIn any case, if MySQL is lacking some features to qualify as an RDBMS,\nthen all the more reason to include it in the chart and say why!\nOtherwise people will use it without knowing.\n",
"msg_date": "Wed, 30 Jun 1999 10:03:11 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] User requests now that 6.5 is out"
},
{
"msg_contents": "On Tue, Jun 29, 1999 at 09:00:46PM -0300, The Hermit Hacker wrote:\n> On Tue, 29 Jun 1999, Tom Lane wrote:\n> \n> > > 5. Install a bug tracking system.\n> > \n> > We desperately need a better system than we have, IMHO; the visibility\n> > of bug status is just horrible. But finding the manpower to set up\n> > a better system is a problem :-(\n> > \n> > Since some folks have mentioned possible sources of bug-tracking\n> > systems, I'll suggest Mozilla's Bugzilla and related software as\n> > another thing worth looking at, if anyone is feeling motivated to\n> > go look...\n> \n> Saw that one, but it uses a MySQL backend, and, for some very odd reason,\n> I'm not willing to install that on my servr :) Anyone want to look at\n> what it would take to make use of PostgreSQL?\n\n\tI implemented (well, ported) the bug tracking system we use at\nBe. It is Apache/PHP/Postgres and seems to be working just fine with about\n22,000 records. I would be willing to modify it and set it up, but am\ncurrently lacking somewhat in bandwidth. I may be lacking in hardware\ndepending on the amount of traffic.\n\n",
"msg_date": "Tue, 29 Jun 1999 18:39:24 -0700",
"msg_from": "Adam Haberlach <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] User requests now that 6.5 is out"
},
{
"msg_contents": "The Hermit Hacker wrote:\n> \n> On Tue, 29 Jun 1999, Tom Lane wrote:\n> \n> > > 5. Install a bug tracking system.\n> >\n\nWe've been using keystone (which I got from a reference on the old\npostgresql web-site ;-)) for a while and it is really quite neat. It\nruns on postgres and php. Only problem is that the web pages are very\nnice, but can get kind-of slow to load if you are only on the end of a\nvery slow line. Also, it isn't entirely free (only free for small\ngroups). It may of course be possible to come to some arrangement, as\nthey are using postgres and it is free advertising. url is\nhttp://www.stonekeep.com\n\nAdriaan\n",
"msg_date": "Wed, 30 Jun 1999 08:36:28 +0300",
"msg_from": "Adriaan Joubert <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] User requests now that 6.5 is out"
},
{
"msg_contents": "> I implemented (well, ported) the bug tracking system we use at\n> Be. It is Apache/PHP/Postgres and seems to be working just fine with about\n> 22,000 records. I would be willing to modify it and set it up, but am\n> currently lacking somewhat in bandwidth. I may be lacking in hardware\n> depending on the amount of traffic.\n\nPresumably the long-term hosting would be most conveniently done at\nhub.org (which hosts the Postgres project). scrappy has great\nbandwidth and the accessibility has (almost) always been very good,\neven if it *is* housed in some trapper's cabin in the Great White\nNorth...\n\nI'm sure that access (an account, etc) can be arranged once we settle\non the system to try first. Does the BeOS system have an external\ninterface we can look at, or is it only used in-house? I should point\nout that you're the first person to actually offer to do the work with\na concrete proposal, which is what we'll need to get anything going ;)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 30 Jun 1999 13:44:19 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] User requests now that 6.5 is out"
}
] |
[
{
"msg_contents": "Would a PostgreSQL / PHP solution be practical for the Feature/Bug tracking?\n(I'm thinking specifically of the mirrors here.)\n\t-DEJ\n\n> -----Original Message-----\n> From:\tBruce Momjian [SMTP:[email protected]]\n> Sent:\tTuesday, June 29, 1999 1:31 PM\n> To:\tFred Wilson Horch\n> Cc:\[email protected]\n> Subject:\tRe: [HACKERS] User requests now that 6.5 is out\n> \n> > Folks,\n> > \n> > Not sure if this is the right place to request this, but here are some\n> > things I, as a satisfied user of PostgreSQL, would like to see done (and\n> > I'd be glad to help where I can). All of these are just suggestions\n> > geared to the care and feeding of the PostgreSQL user community.\n> \n> These are all good ideas. The problem is getting someone to devote the\n> time to it. We normally focus on announcing features as they are\n> completed, not tracking features and request counts. They would be of\n> value, but we have to weigh the value against actual development time.\n> \n> It would certainly be nice to have all the things you mention, but\n> considering our time is limited, I think we are properly allocating the\n> time we have.\n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 29 Jun 1999 13:53:14 -0500",
"msg_from": "\"Jackson, DeJuan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] User requests now that 6.5 is out"
},
{
"msg_contents": "\nOn 29-Jun-99 Jackson, DeJuan wrote:\n> Would a PostgreSQL / PHP solution be practical for the Feature/Bug tracking?\n> (I'm thinking specifically of the mirrors here.)\n> -DEJ\n\nHow 'bout JitterBug? http://samba.anu.edu.au/jitterbug/\nor GNATS: http://www.cs.utah.edu/csinfo/texinfo/gnats/gnats.html\n\n\nVince.\n\n> \n>> -----Original Message-----\n>> From: Bruce Momjian [SMTP:[email protected]]\n>> Sent: Tuesday, June 29, 1999 1:31 PM\n>> To: Fred Wilson Horch\n>> Cc: [email protected]\n>> Subject: Re: [HACKERS] User requests now that 6.5 is out\n>> \n>> > Folks,\n>> > \n>> > Not sure if this is the right place to request this, but here are some\n>> > things I, as a satisfied user of PostgreSQL, would like to see done (and\n>> > I'd be glad to help where I can). All of these are just suggestions\n>> > geared to the care and feeding of the PostgreSQL user community.\n>> \n>> These are all good ideas. The problem is getting someone to devote the\n>> time to it. We normally focus on announcing features as they are\n>> completed, not tracking features and request counts. They would be of\n>> value, but we have to weigh the value against actual development time.\n>> \n>> It would certainly be nice to have all the things you mention, but\n>> considering our time is limited, I think we are properly allocating the\n>> time we have.\n>> \n>> -- \n>> Bruce Momjian | http://www.op.net/~candle\n>> [email protected] | (610) 853-3000\n>> + If your life is a hard drive, | 830 Blythe Avenue\n>> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n",
"msg_date": "Tue, 29 Jun 1999 15:19:24 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] User requests now that 6.5 is out"
},
{
"msg_contents": "On Tue, 29 Jun 1999, Vince Vielhaber wrote:\n\n> \n> On 29-Jun-99 Jackson, DeJuan wrote:\n> > Would a PostgreSQL / PHP solution be practical for the Feature/Bug tracking?\n> > (I'm thinking specifically of the mirrors here.)\n> > -DEJ\n> \n> How 'bout JitterBug? http://samba.anu.edu.au/jitterbug/\n> or GNATS: http://www.cs.utah.edu/csinfo/texinfo/gnats/gnats.html\n\nI've tried GNATs, and didn't really like it...its worked effectively at\nFreeBSD, but...\n\nOuch, JitterBug looks painful :( \n\nI'm willing to install either, but I think that GNATs, from what I'm used\nto of it, is the better one, since it allows for email based bug\nreports...\n > \n> \n> Vince.\n> \n> > \n> >> -----Original Message-----\n> >> From: Bruce Momjian [SMTP:[email protected]]\n> >> Sent: Tuesday, June 29, 1999 1:31 PM\n> >> To: Fred Wilson Horch\n> >> Cc: [email protected]\n> >> Subject: Re: [HACKERS] User requests now that 6.5 is out\n> >> \n> >> > Folks,\n> >> > \n> >> > Not sure if this is the right place to request this, but here are some\n> >> > things I, as a satisfied user of PostgreSQL, would like to see done (and\n> >> > I'd be glad to help where I can). All of these are just suggestions\n> >> > geared to the care and feeding of the PostgreSQL user community.\n> >> \n> >> These are all good ideas. The problem is getting someone to devote the\n> >> time to it. We normally focus on announcing features as they are\n> >> completed, not tracking features and request counts. They would be of\n> >> value, but we have to weigh the value against actual development time.\n> >> \n> >> It would certainly be nice to have all the things you mention, but\n> >> considering our time is limited, I think we are properly allocating the\n> >> time we have.\n> >> \n> >> -- \n> >> Bruce Momjian | http://www.op.net/~candle\n> >> [email protected] | (610) 853-3000\n> >> + If your life is a hard drive, | 830 Blythe Avenue\n> >> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> > \n> \n> -- \n> ==========================================================================\n> Vince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n> # include <std/disclaimers.h> TEAM-OS2\n> Online Campground Directory http://www.camping-usa.com\n> Online Giftshop Superstore http://www.cloudninegifts.com\n> ==========================================================================\n> \n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 29 Jun 1999 17:40:48 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] User requests now that 6.5 is out"
},
{
"msg_contents": "The Hermit Hacker wrote:\n> \n> \n> I've tried GNATs, and didn't really like it...its worked effectively at\n> FreeBSD, but...\n> \n> Ouch, JitterBug looks painful :(\n> \n> I'm willing to install either, but I think that GNATs, from what I'm used\n> to of it, is the better one, since it allows for email based bug\n> reports...\n\nSo does JitterBug.\n\n\n-- \n\nMark Hollomon\[email protected]\nESN 451-9008 (302)454-9008\n",
"msg_date": "Tue, 29 Jun 1999 17:11:02 -0400",
"msg_from": "\"Mark Hollomon\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] User requests now that 6.5 is out"
},
{
"msg_contents": "Mark Hollomon wrote:\n> \n> The Hermit Hacker wrote:\n> >\n> >\n> > I've tried GNATs, and didn't really like it...its worked effectively at\n> > FreeBSD, but...\n> >\n> > Ouch, JitterBug looks painful :(\n> >\n> > I'm willing to install either, but I think that GNATs, from what I'm used\n> > to of it, is the better one, since it allows for email based bug\n> > reports...\n> \n> So does JitterBug.\n> \n\nHow about W3PDB? http://www.bawue.de/~mergl/export/w3pdb-0.20.tar.gz\n\nIt was written by Edmund Mergl and uses PostgreSQL + Apache to provide\na GNATS-like database enabled problem tracking system. I'm using \nWWWGNATS which is very dated but was the best Open Source option 5 \nyears or so ago. I looked at moving from GNATS to W3PDB but haven't\nhad the time. Looked promising though.\n\nThere's also bugzilla. http://bugzilla.mozilla.org/\n\nMikE\n",
"msg_date": "Tue, 29 Jun 1999 18:18:54 -0700",
"msg_from": "Mike Embry <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] User requests now that 6.5 is out"
}
] |
[
{
"msg_contents": "> > On 29-Jun-99 Jackson, DeJuan wrote:\n> > > Would a PostgreSQL / PHP solution be practical for the Feature/Bug\n> tracking?\n> > > (I'm thinking specifically of the mirrors here.)\n> > > -DEJ\n> > \n> > How 'bout JitterBug? http://samba.anu.edu.au/jitterbug/\n> > or GNATS: http://www.cs.utah.edu/csinfo/texinfo/gnats/gnats.html\n> \n> I've tried GNATs, and didn't really like it...its worked effectively at\n> FreeBSD, but...\n> \n> Ouch, JitterBug looks painful :( \n> \n> I'm willing to install either, but I think that GNATs, from what I'm used\n> to of it, is the better one, since it allows for email based bug\n> reports...\n> \n\tWill it allow for feature requests as well as bug reports/updates?\n\tAnd how will this affect the mirrors? Do they have to install the\nsame software? Will they get static bug report pages that they update once\na day?\n\tI have no clue how the mirroring works.\n> > >> > Folks,\n> > >> > \n> > >> > Not sure if this is the right place to request this, but here are\n> some\n> > >> > things I, as a satisfied user of PostgreSQL, would like to see done\n> (and\n> > >> > I'd be glad to help where I can). All of these are just\n> suggestions\n> > >> > geared to the care and feeding of the PostgreSQL user community.\n> > >> \n> > >> These are all good ideas. The problem is getting someone to devote\n> the\n> > >> time to it. We normally focus on announcing features as they are\n> > >> completed, not tracking features and request counts. They would be\n> of\n> > >> value, but we have to weigh the value against actual development\n> time.\n> > >> \n> > >> It would certainly be nice to have all the things you mention, but\n> > >> considering our time is limited, I think we are properly allocating\n> the\n> > >> time we have.\n> \n",
"msg_date": "Tue, 29 Jun 1999 15:51:24 -0500",
"msg_from": "\"Jackson, DeJuan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] User requests now that 6.5 is out"
},
{
"msg_contents": "On Tue, 29 Jun 1999, Jackson, DeJuan wrote:\n\n> > > On 29-Jun-99 Jackson, DeJuan wrote:\n> > > > Would a PostgreSQL / PHP solution be practical for the Feature/Bug\n> > tracking?\n> > > > (I'm thinking specifically of the mirrors here.)\n> > > > -DEJ\n> > > \n> > > How 'bout JitterBug? http://samba.anu.edu.au/jitterbug/\n> > > or GNATS: http://www.cs.utah.edu/csinfo/texinfo/gnats/gnats.html\n> > \n> > I've tried GNATs, and didn't really like it...its worked effectively at\n> > FreeBSD, but...\n> > \n> > Ouch, JitterBug looks painful :( \n> > \n> > I'm willing to install either, but I think that GNATs, from what I'm used\n> > to of it, is the better one, since it allows for email based bug\n> > reports...\n> > \n> \tWill it allow for feature requests as well as bug reports/updates?\n> \tAnd how will this affect the mirrors? Do they have to install the\n> same software? Will they get static bug report pages that they update once\n> a day?\n> \tI have no clue how the mirroring works.\n\nThe mirrors are pretty much only those that are 'static'...anything using\na database backend (or similar...ie. ht/Dig) are purely on the main\nsite...\n\n > > > >> > Folks,\n> > > >> > \n> > > >> > Not sure if this is the right place to request this, but here are\n> > some\n> > > >> > things I, as a satisfied user of PostgreSQL, would like to see done\n> > (and\n> > > >> > I'd be glad to help where I can). All of these are just\n> > suggestions\n> > > >> > geared to the care and feeding of the PostgreSQL user community.\n> > > >> \n> > > >> These are all good ideas. The problem is getting someone to devote\n> > the\n> > > >> time to it. We normally focus on announcing features as they are\n> > > >> completed, not tracking features and request counts. They would be\n> > of\n> > > >> value, but we have to weigh the value against actual development\n> > time.\n> > > >> \n> > > >> It would certainly be nice to have all the things you mention, but\n> > > >> considering our time is limited, I think we are properly allocating\n> > the\n> > > >> time we have.\n> > \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 29 Jun 1999 18:15:28 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] User requests now that 6.5 is out"
},
{
"msg_contents": "Hi,\n\nHow about Bugs, Features and Doc Requests Collector from\nZope progect?\n\thttp://www.zope.org/Collector/\nMikhail\n",
"msg_date": "Tue, 29 Jun 1999 17:38:44 -0400",
"msg_from": "Mikhail Terekhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] User requests now that 6.5 is out"
},
{
"msg_contents": "Hi!\n\nOn Tue, 29 Jun 1999, Mikhail Terekhov wrote:\n> How about Bugs, Features and Doc Requests Collector from\n> Zope progect?\n> \thttp://www.zope.org/Collector/\n\n Collector is based on ZTable, which is commercial. DigiCool announced\nthey will open ZTable Core sometime in the future, probably Collector will\nbe open too.\n\n> Mikhail\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n",
"msg_date": "Wed, 30 Jun 1999 12:01:41 +0400 (MSD)",
"msg_from": "Oleg Broytmann <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] User requests now that 6.5 is out"
}
] |
[
{
"msg_contents": "\nJust curious, but how do ppl handle running two different versions on the\nsame machine? I want to start up a v6.5 server where v6.4.2 server\nalready exists...\n\nI'm wondering if it might make sense to up the libpq on a release, so that\nthis can work?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 29 Jun 1999 17:55:18 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "changing major/minor on libpq for releases ..."
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> Just curious, but how do ppl handle running two different versions on the\n> same machine? I want to start up a v6.5 server where v6.4.2 server\n> already exists...\n\nRun them with different port addresses. I do it all the time...\n\n> I'm wondering if it might make sense to up the libpq on a release, so that\n> this can work?\n\nNo, I don't think that's a good idea. There's no reason to break binary\ncompatibility of libpq when setting PGPORT will do the job; the same\npsql binary (or any other application) can talk to either release.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 29 Jun 1999 17:21:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] changing major/minor on libpq for releases ... "
}
] |
[
{
"msg_contents": "Hi Mike.\n\nGet that job in Ottawa?\n\nAnyway, version 6.5 apparently supports hot backups by using MVCC\nto give a view of a consistent version of the database during a\npg_dump (http://postgresql.nextpath.com/doxlist.html search for 'backup')\n\nWhat you explain below is basically version-controlled additions anyway.\n\nAnyone correct me if I'm wrong...\n\nDuane\n\n> Hi.\n> I've been mulling around a lot with this idea. I've looked around a bit\n> for info on being able to do hot backups on a running database, but there\n> isn't a lot of info available. The problem with just pg_dumping the data\n> is that it doesn't work well with large databases that are expected to be\n> processing transactions during the backup time period.\n> \n> Dropping postgres down to a select-only lock level on all databases at\n> once was my thought. In order to keep the system running hot, you'd have\n> to set a flag to say that database is being backed up. My idea is to allow\n> a special directory where the deltas are written. IE: Someone inserts a\n> record, it would need to write that page to a file in the temp dir for\n> both the table, and its indexes. Then, when a select is run, it would have\n> to first check the delta table files, then the real indexes for the page\n> it's looking for.\n> \n> This way, you could guarantee that the files being backed up would not be\n> altered in any way during the backup, and the deltas would be the only\n> overhead. Using the hole in file feature, I think that page changes could\n> be added to the file without making to too large, but I've not looked\n> closely on how indexes are physically stored to see this. I suppose the NT\n> port would require double the size of the database to do this, since I\n> don't think winblows supports holes in a file.\n> \n> With the database in select-only mode, someone could either do a pg_dump\n> style backup, or backup the actual tables. I am guessing that it's more of\n> a restore time / backup size tradeoff with each backup method.\n> \n> One reason I am looking at this (a possible 6.6 feature?) is that we are\n> using postgresql for a classifieds database which will replace a\n> SQL-Server. The database will easily be in the 10's of gigabytes range\n> with a few million items. I will of course need to backup this beast\n> without preventing the clients from adding things.\n> \n> If someone can point me in the right direction, I can attempt to make it\n> work and submit a pile 'o patches againt 6.5.\n> \n> Comments? \n> \n> -Michael\n> \n> \n\n",
"msg_date": "Tue, 29 Jun 1999 17:58:26 -0300 (ADT)",
"msg_from": "Duane Currie <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Hot Backup Ability"
},
{
"msg_contents": "Hi.\nI've been mulling around a lot with this idea. I've looked around a bit\nfor info on being able to do hot backups on a running database, but there\nisn't a lot of info available. The problem with just pg_dumping the data\nis that it doesn't work well with large databases that are expected to be\nprocessing transactions during the backup time period.\n\nDropping postgres down to a select-only lock level on all databases at\nonce was my thought. In order to keep the system running hot, you'd have\nto set a flag to say that database is being backed up. My idea is to allow\na special directory where the deltas are written. IE: Someone inserts a\nrecord, it would need to write that page to a file in the temp dir for\nboth the table, and its indexes. Then, when a select is run, it would have\nto first check the delta table files, then the real indexes for the page\nit's looking for.\n\nThis way, you could guarantee that the files being backed up would not be\naltered in any way during the backup, and the deltas would be the only\noverhead. Using the hole in file feature, I think that page changes could\nbe added to the file without making to too large, but I've not looked\nclosely on how indexes are physically stored to see this. I suppose the NT\nport would require double the size of the database to do this, since I\ndon't think winblows supports holes in a file.\n\nWith the database in select-only mode, someone could either do a pg_dump\nstyle backup, or backup the actual tables. I am guessing that it's more of\na restore time / backup size tradeoff with each backup method.\n\nOne reason I am looking at this (a possible 6.6 feature?) is that we are\nusing postgresql for a classifieds database which will replace a\nSQL-Server. The database will easily be in the 10's of gigabytes range\nwith a few million items. I will of course need to backup this beast\nwithout preventing the clients from adding things.\n\nIf someone can point me in the right direction, I can attempt to make it\nwork and submit a pile 'o patches againt 6.5.\n\nComments? \n\n-Michael\n\n",
"msg_date": "Tue, 29 Jun 1999 19:53:00 -0300 (ADT)",
"msg_from": "Michael Richards <[email protected]>",
"msg_from_op": false,
"msg_subject": "Hot Backup Ability"
},
{
"msg_contents": "\nHot backups were added in 6.5.\n\n\n> Hi.\n> I've been mulling around a lot with this idea. I've looked around a bit\n> for info on being able to do hot backups on a running database, but there\n> isn't a lot of info available. The problem with just pg_dumping the data\n> is that it doesn't work well with large databases that are expected to be\n> processing transactions during the backup time period.\n> \n> Dropping postgres down to a select-only lock level on all databases at\n> once was my thought. In order to keep the system running hot, you'd have\n> to set a flag to say that database is being backed up. My idea is to allow\n> a special directory where the deltas are written. IE: Someone inserts a\n> record, it would need to write that page to a file in the temp dir for\n> both the table, and its indexes. Then, when a select is run, it would have\n> to first check the delta table files, then the real indexes for the page\n> it's looking for.\n> \n> This way, you could guarantee that the files being backed up would not be\n> altered in any way during the backup, and the deltas would be the only\n> overhead. Using the hole in file feature, I think that page changes could\n> be added to the file without making to too large, but I've not looked\n> closely on how indexes are physically stored to see this. I suppose the NT\n> port would require double the size of the database to do this, since I\n> don't think winblows supports holes in a file.\n> \n> With the database in select-only mode, someone could either do a pg_dump\n> style backup, or backup the actual tables. I am guessing that it's more of\n> a restore time / backup size tradeoff with each backup method.\n> \n> One reason I am looking at this (a possible 6.6 feature?) is that we are\n> using postgresql for a classifieds database which will replace a\n> SQL-Server. The database will easily be in the 10's of gigabytes range\n> with a few million items. I will of course need to backup this beast\n> without preventing the clients from adding things.\n> \n> If someone can point me in the right direction, I can attempt to make it\n> work and submit a pile 'o patches againt 6.5.\n> \n> Comments? \n> \n> -Michael\n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 29 Jun 1999 19:36:26 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Hot Backup Ability"
},
{
"msg_contents": "On Tue, 29 Jun 1999, Duane Currie wrote:\n\n> Anyway, version 6.5 apparently supports hot backups by using MVCC\n> to give a view of a consistent version of the database during a\n> pg_dump (http://postgresql.nextpath.com/doxlist.html search for\n'backup')\nHrm. Nothing pops out.\n \nJust out of curiosity, I did a DUMP on the database while running a script\nthat ran a pile of updates. When I restored the database files, it was so\ncorrupted that I couldn't even run a select. vacuum just core dumped...\n \nIf I can just run pg_dump to back it up, how does this guarantee any sort\nof referential integrity? Also during such a dump, it seems that things \nblock while waiting for a lock. This also happens during a pg_vacuum. I \nthought that mvcc was supposed to stop this...\n \n-Michael\n \n\n",
"msg_date": "Tue, 29 Jun 1999 21:46:21 -0300 (ADT)",
"msg_from": "Michael Richards <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Hot Backup Ability"
},
{
"msg_contents": "> On Tue, 29 Jun 1999, Duane Currie wrote:\n> \n> > Anyway, version 6.5 apparently supports hot backups by using MVCC\n> > to give a view of a consistent version of the database during a\n> > pg_dump (http://postgresql.nextpath.com/doxlist.html search for\n> 'backup')\n> Hrm. Nothing pops out.\n> \n> Just out of curiosity, I did a DUMP on the database while running a script\n> that ran a pile of updates. When I restored the database files, it was so\n> corrupted that I couldn't even run a select. vacuum just core dumped...\n\nWhen you say DUMP, you mean pg_dump, right? Are you using 6.5?\n\n> If I can just run pg_dump to back it up, how does this guarantee any sort\n> of referential integrity? Also during such a dump, it seems that things \n> block while waiting for a lock. This also happens during a pg_vacuum. I \n> thought that mvcc was supposed to stop this...\n\nOK, sounds like you are using pg_dump, and 6.5. pg_vacuum still blocks,\nbut pg_dump shouldn't. This sounds unusual. You should have gotten\neverything at the time of the _start_ of the dump.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 29 Jun 1999 21:09:48 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Hot Backup Ability"
},
{
"msg_contents": "On Tue, 29 Jun 1999, Duane Currie wrote:\n\n> Anyway, version 6.5 apparently supports hot backups by using MVCC\n> to give a view of a consistent version of the database during a\n> pg_dump (http://postgresql.nextpath.com/doxlist.html search for 'backup')\n\nThe string \"backup\" does not appear on that page, or on the \"Documentation\"\npage linked off of there. The section on backups in the integrated manual\ndoesn't say anything about MVCC.\n\nDid any MVCC docs ever get written?\n\n--\nTodd Graham Lewis Postmaster, MindSpring Enterprises\[email protected] (800) 719-4664, x22804\n\n \"There is no spoon.\"\n\n",
"msg_date": "Tue, 29 Jun 1999 22:10:41 -0400 (EDT)",
"msg_from": "Todd Graham Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Hot Backup Ability"
},
{
"msg_contents": "Todd Graham Lewis wrote:\n> \n> On Tue, 29 Jun 1999, Duane Currie wrote:\n> \n> > Anyway, version 6.5 apparently supports hot backups by using MVCC\n> > to give a view of a consistent version of the database during a\n> > pg_dump (http://postgresql.nextpath.com/doxlist.html search for 'backup')\n> \n> The string \"backup\" does not appear on that page, or on the \"Documentation\"\n> page linked off of there. The section on backups in the integrated manual\n> doesn't say anything about MVCC.\n> \n> Did any MVCC docs ever get written?\n\nhttp://postgresql.nextpath.com/docs/user/index.html:\n\n10. Multi-Version Concurrency Control\n\nVadim\n",
"msg_date": "Wed, 30 Jun 1999 10:28:46 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Hot Backup Ability"
},
{
"msg_contents": "On Wed, 30 Jun 1999, Vadim Mikheev wrote:\n\n> > Did any MVCC docs ever get written?\n> \n> http://postgresql.nextpath.com/docs/user/index.html:\n> \n> 10. Multi-Version Concurrency Control\n\nVery groovy!\n\n--\nTodd Graham Lewis Postmaster, MindSpring Enterprises\[email protected] (800) 719-4664, x22804\n\n \"There is no spoon.\"\n\n",
"msg_date": "Tue, 29 Jun 1999 22:31:18 -0400 (EDT)",
"msg_from": "Todd Graham Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Hot Backup Ability"
},
{
"msg_contents": "On Tue, 29 Jun 1999, Bruce Momjian wrote:\n\n> > Just out of curiosity, I did a DUMP on the database while running a script\n> > that ran a pile of updates. When I restored the database files, it was so\n> > corrupted that I couldn't even run a select. vacuum just core dumped...\n> \n> When you say DUMP, you mean pg_dump, right? Are you using 6.5?\n\nErm. Well, no. I was running ufsdump. Once I read the section on mvcc and\nre-did the test with the pg_dump, I realised that it does work as\ndocumented...\n\nI should think this is a good feature to broadcast to everyone. I don't\nthink other free systems support it.\n\nThe thing I got confused with that blocked transactions was the pg_vacuum.\nSeeing as how it physically re-arranges data inside the tables and\nindexes, is there any hope for not blocking the table for a long time as\nit re-arranges a 15 gig table?\n\nWill re-usable page support (whenever it is expected) eliminate the need\nfor vacuum?\n\nWould it be easy to come up with a scheme for the vacuum function defrag a\nset number of pages and such, release its locks if there is another\nprocess blocked and waiting, then resume after that process is finished?\n\n-Michael\n\n",
"msg_date": "Wed, 30 Jun 1999 01:55:24 -0300 (ADT)",
"msg_from": "Michael Richards <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Hot Backup Ability"
},
{
"msg_contents": "> On Tue, 29 Jun 1999, Bruce Momjian wrote:\n> \n> > > Just out of curiosity, I did a DUMP on the database while running a script\n> > > that ran a pile of updates. When I restored the database files, it was so\n> > > corrupted that I couldn't even run a select. vacuum just core dumped...\n> > \n> > When you say DUMP, you mean pg_dump, right? Are you using 6.5?\n> \n> Erm. Well, no. I was running ufsdump. Once I read the section on mvcc and\n> re-did the test with the pg_dump, I realised that it does work as\n> documented...\n\n\nWoh! Not a good idea. We can't get a proper snapshot if the ufs blocks\nare moving around while we are doing the backup. We need pg_dump.\n\nGlad it worked when you did it with pg_dump.\n\n> I should think this is a good feature to broadcast to everyone. I don't\n> think other free systems support it.\n\nProbably not. We have it as one of our main items in the release notes,\nand on the web page describing the release. We need people like you to\ntell others about it.\n\n> \n> The thing I got confused with that blocked transactions was the pg_vacuum.\n> Seeing as how it physically re-arranges data inside the tables and\n> indexes, is there any hope for not blocking the table for a long time as\n> it re-arranges a 15 gig table?\n\nNot really. In fact, it even shrinks the table to give back free space.\nThe 6.5 pg_vacuum is much faster than earlier versions, but on a 15gig\ntable, it is going to take some time.\n\nSome day, it would be nice to allow re-use of expired rows without\nvacuum. It is on our TODO list.\n\n> Will re-usable page support (whenever it is expected) eliminate the need\n> for vacuum?\n\nIt will allow you to vacuum less frequently, and perhaps never if you\ndon't want space back from expired rows.\n\n> Would it be easy to come up with a scheme for the vacuum function defrag a\n> set number of pages and such, release its locks if there is another\n> process blocked and waiting, then resume after that process is finished?\n\nThat is a very nice idea. We could just release and reacquire the lock,\nknowing that if there is someone waiting, they would get the lock. \nMaybe someone can comment on this?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 30 Jun 1999 01:15:37 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Hot Backup Ability"
},
{
"msg_contents": "At 09:09 PM 6/29/99 -0400, Bruce Momjian wrote:\n \n>> Just out of curiosity, I did a DUMP on the database while running a script\n>> that ran a pile of updates. When I restored the database files, it was so\n>> corrupted that I couldn't even run a select. vacuum just core dumped...\n\n>When you say DUMP, you mean pg_dump, right? Are you using 6.5?\n\nIn his first note, he was proposing a scheme that would allow either\nfilesystem dumps or pg_dumps, which I think a couple of respondents\nmissed.\n\nSo I suspect he means a filesystem dump in this case. Which of course\nwon't work in postgres, or in Oracle.\n>\n>\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, and other goodies at\n http://donb.photo.net\n",
"msg_date": "Tue, 29 Jun 1999 23:09:02 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Hot Backup Ability"
},
{
"msg_contents": "Oops... sorry guys... forgot about the frames.\n\nThe doc I was referring to was:\nhttp://www.postgresql.org/docs/admin/release.htm\n\nDuane\n\n> On Tue, 29 Jun 1999, Duane Currie wrote:\n> \n> > Anyway, version 6.5 apparently supports hot backups by using MVCC\n> > to give a view of a consistent version of the database during a\n> > pg_dump (http://postgresql.nextpath.com/doxlist.html search for 'backup')\n> \n> The string \"backup\" does not appear on that page, or on the \"Documentation\"\n> page linked off of there. The section on backups in the integrated manual\n> doesn't say anything about MVCC.\n> \n> Did any MVCC docs ever get written?\n> \n> --\n> Todd Graham Lewis Postmaster, MindSpring Enterprises\n> [email protected] (800) 719-4664, x22804\n> \n> \"There is no spoon.\"\n> \n\n",
"msg_date": "Wed, 30 Jun 1999 09:20:27 +0000 (AST)",
"msg_from": "Duane Currie <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Hot Backup Ability"
},
{
"msg_contents": "On Wed, 30 Jun 1999, Bruce Momjian wrote:\n\n> > Would it be easy to come up with a scheme for the vacuum function defrag a\n> > set number of pages and such, release its locks if there is another\n> > process blocked and waiting, then resume after that process is finished?\n> \n> That is a very nice idea. We could just release and reacquire the lock,\n> knowing that if there is someone waiting, they would get the lock. \n> Maybe someone can comment on this?\n\nMy first thought is \"doesn't this still require the 'page-reusing'\nfunctionality to exist\"? Which virtually eliminates the problem...\n\nIf not, then why can't something be done where this is transparent\naltogether? Have some sort of mechanism that keeps track of \"dead\nspace\"...a trigger that says after X tuples have been deleted, do an\nautomatic vacuum of the database?\n\nThe automatic vacuum would be done in a way similar to Michael's\nsuggestion above...scan through for the first 'dead space', lock the table\nfor a short period of time and \"move records up\". How many tuples could\nyou move in a very short period of time, such that it is virtually\ntransparent to end-users?\n\nAs a table gets larger and larger, a few 'dead tuples' aren't going to\nmake much of a difference in performance, so make the threshold some\npercentage of the size of the table, so as it grows, the number of 'dead\ntuples' has to be larger...\n\nAnd leave out the truncate at the end...\n\nThe 'manual vacuum' would still need to be run periodically, for the\ntruncate and for stats...\n\nJust a thought...:)\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Wed, 30 Jun 1999 09:40:42 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Vaccum (Was: Re: [HACKERS] Hot Backup Ability)"
},
{
"msg_contents": "On Wed, 30 Jun 1999, The Hermit Hacker wrote:\n\n> On Wed, 30 Jun 1999, Bruce Momjian wrote:\n> \n> > > Would it be easy to come up with a scheme for the vacuum function defrag a\n> > > set number of pages and such, release its locks if there is another\n> > > process blocked and waiting, then resume after that process is finished?\n> > \n> > That is a very nice idea. We could just release and reacquire the lock,\n> > knowing that if there is someone waiting, they would get the lock. \n> > Maybe someone can comment on this?\n> \n> My first thought is \"doesn't this still require the 'page-reusing'\n> functionality to exist\"? Which virtually eliminates the problem...\n> \n> If not, then why can't something be done where this is transparent\n> altogether? Have some sort of mechanism that keeps track of \"dead\n> space\"...a trigger that says after X tuples have been deleted, do an\n> automatic vacuum of the database?\n> \n> The automatic vacuum would be done in a way similar to Michael's\n> suggestion above...scan through for the first 'dead space', lock the table\n> for a short period of time and \"move records up\". How many tuples could\n> you move in a very short period of time, such that it is virtually\n> transparent to end-users?\n> \n> As a table gets larger and larger, a few 'dead tuples' aren't going to\n> make much of a difference in performance, so make the threshold some\n> percentage of the size of the table, so as it grows, the number of 'dead\n> tuples' has to be larger...\n> \n> And leave out the truncate at the end...\n> \n> The 'manual vacuum' would still need to be run periodically, for the\n> truncate and for stats...\n> \n> Just a thought...:)\n\nWhy not one step further. Constant background vacuuming. Do away with\nthe need to vacuum altogether. Have something in the backend always \nscanning for dead tuples/dead space and when it finds some it can lock-\nmove-unlock as it goes. This way it's not working with a set number or\nlooking for a certain threshold, just a constant maintenance process.\n\nNo?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Wed, 30 Jun 1999 10:27:35 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vaccum (Was: Re: [HACKERS] Hot Backup Ability)"
}
] |
[
{
"msg_contents": "Normally I just set up a specially designated user and set all of his\nenvironment variable accordingly.\nI don't use the Perl interface or any interface that requires compiling as\nroot.\nI can see those being a real problem without libpq versioning.\n\t-DEJ\n\n> -----Original Message-----\n> From:\tThe Hermit Hacker [SMTP:[email protected]]\n> Sent:\tTuesday, June 29, 1999 3:55 PM\n> To:\[email protected]\n> Subject:\t[HACKERS] changing major/minor on libpq for releases ...\n> \n> \n> Just curious, but how do ppl handle running two different versions on the\n> same machine? I want to start up a v6.5 server where v6.4.2 server\n> already exists...\n> \n> I'm wondering if it might make sense to up the libpq on a release, so that\n> this can work?\n> \n> Marc G. Fournier ICQ#7615664 IRC Nick:\n> Scrappy\n> Systems Administrator @ hub.org \n> primary: [email protected] secondary:\n> scrappy@{freebsd|postgresql}.org \n> \n",
"msg_date": "Tue, 29 Jun 1999 16:32:26 -0500",
"msg_from": "\"Jackson, DeJuan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] changing major/minor on libpq for releases ..."
},
{
"msg_contents": "On Tue, 29 Jun 1999, Jackson, DeJuan wrote:\n\n> Normally I just set up a specially designated user and set all of his\n> environment variable accordingly.\n> I don't use the Perl interface or any interface that requires compiling as\n> root.\n> I can see those being a real problem without libpq versioning.\n\nThat's the problem that I have...all my stuff is in perl, but I only want\nto move things over one by one...\n\nOh well, bite the bullet and move them all, I guess :)\n\n> \t-DEJ\n> \n> > -----Original Message-----\n> > From:\tThe Hermit Hacker [SMTP:[email protected]]\n> > Sent:\tTuesday, June 29, 1999 3:55 PM\n> > To:\[email protected]\n> > Subject:\t[HACKERS] changing major/minor on libpq for releases ...\n> > \n> > \n> > Just curious, but how do ppl handle running two different versions on the\n> > same machine? I want to start up a v6.5 server where v6.4.2 server\n> > already exists...\n> > \n> > I'm wondering if it might make sense to up the libpq on a release, so that\n> > this can work?\n> > \n> > Marc G. Fournier ICQ#7615664 IRC Nick:\n> > Scrappy\n> > Systems Administrator @ hub.org \n> > primary: [email protected] secondary:\n> > scrappy@{freebsd|postgresql}.org \n> > \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 29 Jun 1999 20:59:04 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] changing major/minor on libpq for releases ..."
}
] |
[
{
"msg_contents": "\nI don't know what SQL standards say about this, but in:\n\ntacacs=> SELECT * FROM users,counters WHERE users.username=counters.username AND\nusername='foobar';\nERROR: Column 'username' is ambiguous\n\n...username is NOT ambiguous...\n\nBye!\n\n-- \n Daniele\n\n-------------------------------------------------------------------------------\n Daniele Orlandi - Utility Line Italia - http://www.orlandi.com\n Via Mezzera 29/A - 20030 - Seveso (MI) - Italy\n-------------------------------------------------------------------------------\n",
"msg_date": "Wed, 30 Jun 1999 00:41:29 +0200",
"msg_from": "Daniele Orlandi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Little Suggestion for 6.6"
},
{
"msg_contents": "> \n> I don't know what SQL standards say about this, but in:\n> \n> tacacs=> SELECT * FROM users,counters WHERE\n> users.username=counters.username AND username='foobar'; ERROR:\n> Column 'username' is ambiguous\n> \n> ...username is NOT ambiguous...\n\nI don't believe you.\n\n--\n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 29 Jun 1999 19:35:18 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Little Suggestion for 6.6"
},
{
"msg_contents": "On Wed, 30 Jun 1999, Daniele Orlandi wrote:\n\n> \n> I don't know what SQL standards say about this, but in:\n> \n> tacacs=> SELECT * FROM users,counters WHERE users.username=counters.username AND\n> username='foobar';\n> ERROR: Column 'username' is ambiguous\n> \n> ...username is NOT ambiguous...\n\nOf course it is...which username do you want to match on, users.username\nor counters.username?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 29 Jun 1999 20:56:57 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Little Suggestion for 6.6"
},
{
"msg_contents": "At 08:56 PM 6/29/99 -0300, The Hermit Hacker wrote:\n>On Wed, 30 Jun 1999, Daniele Orlandi wrote:\n>\n>> \n>> I don't know what SQL standards say about this, but in:\n>> \n>> tacacs=> SELECT * FROM users,counters WHERE\nusers.username=counters.username AND\n>> username='foobar';\n>> ERROR: Column 'username' is ambiguous\n>> \n>> ...username is NOT ambiguous...\n\n>Of course it is...which username do you want to match on, users.username\n>or counters.username?\n\nHe's saying that the expression can be resolved because \ntheir values are equal, so it doesn't matter which username\nyou match on. \n\nWhich means he thinks that expression semantics rather than\nscoping/parsing/type rules ought to determine what is\nand what is not an ambiguous expression. Which is...well...\njust wrong :)\n\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, and other goodies at\n http://donb.photo.net\n",
"msg_date": "Tue, 29 Jun 1999 17:24:36 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Little Suggestion for 6.6"
},
{
"msg_contents": "\n\nThe Hermit Hacker wrote:\n> \n> > tacacs=> SELECT * FROM users,counters WHERE users.username=counters.username \n> > AND username='foobar';\n> > ERROR: Column 'username' is ambiguous\n> >\n> > ...username is NOT ambiguous...\n> \n> Of course it is...which username do you want to match on, users.username\n> or counters.username?\n\nIMHO it's the same, since they have to be equal...\n\nPardon me if I'm not seeing something obvious :^)\n\nBye.\n\n-- \n Daniele\n\n-------------------------------------------------------------------------------\n Daniele Orlandi - Utility Line Italia - http://www.orlandi.com\n Via Mezzera 29/A - 20030 - Seveso (MI) - Italy\n-------------------------------------------------------------------------------\n",
"msg_date": "Wed, 30 Jun 1999 02:24:38 +0200",
"msg_from": "Daniele Orlandi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Little Suggestion for 6.6"
},
{
"msg_contents": "On Wed, 30 Jun 1999, Daniele Orlandi wrote:\n\n> \n> \n> The Hermit Hacker wrote:\n> > \n> > > tacacs=> SELECT * FROM users,counters WHERE users.username=counters.username \n> > > AND username='foobar';\n> > > ERROR: Column 'username' is ambiguous\n> > >\n> > > ...username is NOT ambiguous...\n> > \n> > Of course it is...which username do you want to match on, users.username\n> > or counters.username?\n> \n> IMHO it's the same, since they have to be equal...\n> \n> Pardon me if I'm not seeing something obvious :^)\n\nIf I understand things (which could be pushing it), the join that is being\nattempted (simplistically) gets broken down as something like:\n\na) find all usernames in users that exist in counters\nb) find all usernames in ?? that equals foobar\nc) find all a AND b\n\nVery simplistic, mind you...\n\nIn the above sample, b can't be resolved, since you don't tell it which\ntable to search for the username...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 29 Jun 1999 23:06:04 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Little Suggestion for 6.6"
}
] |
[
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n>> > We certainly should think about a general speedup of NUMERIC.\n>>\n>> How would we do that? I assumed it was already pretty optimized.\n>\n> The speedup (I expect) results from the fact that the inner\n> loops of add, subtract and multiply will then handle 4\n> decimal digits per cycle instead of one! Doing a\n>\n> 1234.5678 + 2345.6789\n>\n> then needs 2 internal cycles instead of 8. And\n>\n> 100.123 + 12030.12345\n>\n> needs 4 cycles instead of 10 (because the decimal point has\n> the same meaning in base 10000 the last value is stored\n> internally as short ints 1, 2030, 1234, 5000). This is the\n> worst case and it still saved 60% of the innermost cycles!\n\nThe question, though, becomes what percentage of operations on a \nNUMERIC field are arithmetic, and what percentage are storage/retrieval.\n\nFor databases that simply store/retrieve data, your \"optimization\" will have\nthe effect of significantly increasing format conversion overhead. With a\n512-byte table, four packed-decimal digits can be converted in two\nprimitive operations, but base-10000 will require three divisions, \nthree subtractions, four additions, plus miscellaneous data shuffling.\n\n\t-Michael Robinson\n\n",
"msg_date": "Wed, 30 Jun 1999 10:23:22 +0800 (CST)",
"msg_from": "Michael Robinson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] regression bigtest needs very long time"
},
{
"msg_contents": "Michael Robinson <[email protected]> writes:\n> The question, though, becomes what percentage of operations on a \n> NUMERIC field are arithmetic, and what percentage are storage/retrieval.\n\nGood point.\n\n> For databases that simply store/retrieve data, your \"optimization\" will have\n> the effect of significantly increasing format conversion overhead. With a\n> 512-byte table, four packed-decimal digits can be converted in two\n> primitive operations, but base-10000 will require three divisions, \n> three subtractions, four additions, plus miscellaneous data shuffling.\n\nThat is something to worry about, but I think the present implementation\nunpacks the storage format into calculation format before converting\nto text. Getting rid of the unpack step by making storage and calc\nformats the same would probably buy enough speed to pay for the extra\nconversion arithmetic.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 30 Jun 1999 10:33:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] regression bigtest needs very long time "
},
{
"msg_contents": "> > The question, though, becomes what percentage of operations on a\n> > NUMERIC field are arithmetic, and what percentage are storage/retrieval.\n> Good point.\n\nWe assume that most data stays inside the database on every query.\nThat is, one should optimize for comparison/calculation speed, not\nformatting speed. If you are comparing a bunch of rows to return one,\nyou will be much happier if the comparison happens quickly, as opposed\nto doing that slowly but formatting the single output value quickly.\nAn RDBMS can't really try to optimize for the opposite case, since\nthat isn't how it is usually used...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 30 Jun 1999 15:22:58 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] regression bigtest needs very long time"
},
{
"msg_contents": "Tom Lane wrote:\n\n>\n> Michael Robinson <[email protected]> writes:\n> > The question, though, becomes what percentage of operations on a\n> > NUMERIC field are arithmetic, and what percentage are storage/retrieval.\n>\n> Good point.\n>\n> > For databases that simply store/retrieve data, your \"optimization\" will have\n> > the effect of significantly increasing format conversion overhead. With a\n> > 512-byte table, four packed-decimal digits can be converted in two\n> > primitive operations, but base-10000 will require three divisions,\n> > three subtractions, four additions, plus miscellaneous data shuffling.\n>\n> That is something to worry about, but I think the present implementation\n> unpacks the storage format into calculation format before converting\n> to text. Getting rid of the unpack step by making storage and calc\n> formats the same would probably buy enough speed to pay for the extra\n> conversion arithmetic.\n\n What I'm actually wondering about is why the hell using\n NUMERIC data type for fields where the database shouldn't\n calculate on. Why not using TEXT in that case?\n\n OTOH, I don't think that the format conversion base 10000->10\n overhead will be that significant compared against what in\n summary must happen until one tuple is ready to get sent to\n the frontend. Then, ALL our output functions allocate memory\n for the string representation and at least copy the result to\n there. How many arithmetic operations are performed\n internally to create the output of an int4 or float8 via\n sprintf()?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 1 Jul 1999 02:26:57 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] regression bigtest needs very long time"
},
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n> What I'm actually wondering about is why the hell using\n> NUMERIC data type for fields where the database shouldn't\n> calculate on. Why not using TEXT in that case?\n\nHe didn't say his application would be *all* I/O; he was just concerned\nabout whether the change would be a net loss if he did more I/O than\ncalculation. Seems like a reasonable concern to me.\n\n> OTOH, I don't think that the format conversion base 10000->10\n> overhead will be that significant compared against what in\n> summary must happen until one tuple is ready to get sent to\n> the frontend.\n\nI agree, but it's still good if you can avoid slowing it down.\n\nMeanwhile, I'd still like to see the runtime of the 'numeric'\nregression test brought down to something comparable to one\nof the other regression tests. How about cutting the precision\nit uses from (300,100) down to something sane, like say (30,10)?\nI do not believe for a moment that there are any portability bugs\nthat will be uncovered by the 300-digit case but not by a 30-digit\ncase.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 30 Jun 1999 20:45:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] regression bigtest needs very long time "
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> > The question, though, becomes what percentage of operations on a\n>> > NUMERIC field are arithmetic, and what percentage are storage/retrieval.\n>> Good point.\n>We assume that most data stays inside the database on every query.\n>That is, one should optimize for comparison/calculation speed, not\n>formatting speed. If you are comparing a bunch of rows to return one,\n>you will be much happier if the comparison happens quickly, as opposed\n>to doing that slowly but formatting the single output value quickly.\n>An RDBMS can't really try to optimize for the opposite case, since\n>that isn't how it is usually used...\n\nThe optimizations under discussion will not significantly affect comparison\nspeed one way or the other, so comparison speed is a moot issue.\n\nThe question, really, is how often do you do this:\n\n select bignum from table where key = condition\n\nversus this:\n\n select bignum1/bignum2 from table where key = condition\n\nor this:\n\n select * from table where bignum1/bignum2 = condition\n\n\n\t-Michael Robinson\n\n",
"msg_date": "Thu, 1 Jul 1999 10:52:52 +0800 (CST)",
"msg_from": "Michael Robinson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] regression bigtest needs very long time"
},
{
"msg_contents": "> I do not believe for a moment that there are any portability bugs\n> that will be uncovered by the 300-digit case but not by a 30-digit\n> case.\n\nYeah, just gratuitous showmanship ;)\n\nAnd think about those poor 486 machines. Maybe Jan is trying to burn\nthem out so they get replaced with something reasonable...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 01 Jul 1999 03:26:04 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] regression bigtest needs very long time"
},
{
"msg_contents": "Michael Robinson <[email protected]> writes:\n> Thomas Lockhart <[email protected]> writes:\n>> We assume that most data stays inside the database on every query.\n>> That is, one should optimize for comparison/calculation speed, not\n>> formatting speed. If you are comparing a bunch of rows to return one,\n>> you will be much happier if the comparison happens quickly, as opposed\n>> to doing that slowly but formatting the single output value quickly.\n\n> The optimizations under discussion will not significantly affect comparison\n> speed one way or the other, so comparison speed is a moot issue.\n\nOn what do you base that assertion? I'd expect comparisons to be sped\nup significantly: no need to unpack the storage format, and the inner\nloop handles four digits per iteration instead of one.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 01 Jul 1999 09:32:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] regression bigtest needs very long time "
}
] |
[
{
"msg_contents": "I was looking for a function to return the number of tuples an update\nreturned, but couldn't find anything for libpq++. Any ideas?\n\nI also noticed a bug in the docs worth mentioning...\nhttp://postgresql.nextpath.com/doxlist.html\n\n PgDatabase data;\n data.exec(\"create table foo (a int4, b char16, d float8)\");\n data.exec(\"copy foo from stdin\");\n data.putline(\"3\\etHello World\\et4.5\\en\");\n data.putline(\"4\\etGoodbye World\\et7.11\\en\");\n &...\n data.putline(\".\\en\");\n data.endcopy();\n\nThere is no PgDatabase::exec\n\n\nthanks\n-Michael\n\n",
"msg_date": "Wed, 30 Jun 1999 01:57:55 -0300 (ADT)",
"msg_from": "Michael Richards <[email protected]>",
"msg_from_op": true,
"msg_subject": "Getting number of tuples affected"
},
{
"msg_contents": "On Wed, 30 Jun 1999, Michael Richards wrote:\n\n> I was looking for a function to return the number of tuples an update\n> returned, but couldn't find anything for libpq++. Any ideas?\n\nAdded to my list of stuff to do. As to what else you can do, submit\npatches? :)\n\n> I also noticed a bug in the docs worth mentioning...\n> http://postgresql.nextpath.com/doxlist.html\n> \n> PgDatabase data;\n> data.exec(\"create table foo (a int4, b char16, d float8)\");\n> data.exec(\"copy foo from stdin\");\n> data.putline(\"3\\etHello World\\et4.5\\en\");\n> data.putline(\"4\\etGoodbye World\\et7.11\\en\");\n> &...\n> data.putline(\".\\en\");\n> data.endcopy();\n> \n> There is no PgDatabase::exec\n\nYep. Guess that should be data.Exec(...\n\nThanks!\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Wed, 30 Jun 1999 06:27:30 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Getting number of tuples affected"
},
{
"msg_contents": "On Wed, 30 Jun 1999, Vince Vielhaber wrote:\n\n> > I was looking for a function to return the number of tuples an update\n> > returned, but couldn't find anything for libpq++. Any ideas?\n> \n> Added to my list of stuff to do. As to what else you can do, submit\n> patches? :)\n\n> > There is no PgDatabase::exec\n> Yep. Guess that should be data.Exec(...\nNot sure there is an Exec either :)\n\nI've been making a pile of changes to the PgDatabase class. (Actually I\nderived it). One such change is a function to quote a string you're going\nto use in SQL. I'm not sure if this belongs in the PgDatabase class, but\nif you think so, I'll add it before I submit the tuple count patches.\n\nThis allows me to do something like:\nString updatesql(form(\"UPDATE users SET lastrequest='now' WHERE \nloginid=%s\",dbh->quote(_username).chars()));\ncout << dbh->ExecTuplesOk(updatesql);\n\nThat is actually the call I needed the update count to ensure it worked...\n\nHere is the routine:\n\n// routine to quote any \\ or ' chars in the passed string\n// this isn't too efficient, but how much data are we really quoting?\nString TDatabase::quote(const char *dirty) {\n // start with a single quote\n String clean(\"'\");\n\n const char *strptr=dirty;\n // escape the string if it contains any ' or \\ chars\n while (*strptr) {\n if ((*strptr=='\\'') || (*strptr=='\\\\')) \n clean+='\\\\'; \n \n clean+=*(strptr++); \n }\n // end with a quote\n clean+=\"'\";\n\n return clean;\n}\n\n\n-Michael\n\n",
"msg_date": "Wed, 30 Jun 1999 11:28:43 -0300 (ADT)",
"msg_from": "Michael Richards <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Getting number of tuples affected"
},
{
"msg_contents": "On Wed, 30 Jun 1999, Michael Richards wrote:\n\n> On Wed, 30 Jun 1999, Vince Vielhaber wrote:\n> \n> > > I was looking for a function to return the number of tuples an update\n> > > returned, but couldn't find anything for libpq++. Any ideas?\n> > \n> > Added to my list of stuff to do. As to what else you can do, submit\n> > patches? :)\n> \n> > > There is no PgDatabase::exec\n> > Yep. Guess that should be data.Exec(...\n> Not sure there is an Exec either :)\n\nIt's inherited.\n\n> \n> I've been making a pile of changes to the PgDatabase class. (Actually I\n> derived it). One such change is a function to quote a string you're going\n> to use in SQL. I'm not sure if this belongs in the PgDatabase class, but\n> if you think so, I'll add it before I submit the tuple count patches.\n\nI don't think it belongs in PgDatabase. I'm not even sure it belongs in\nlibpq++, although I can see its usefulness.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n\n",
"msg_date": "Wed, 30 Jun 1999 10:41:30 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Getting number of tuples affected"
},
{
"msg_contents": "Hi.\nHere are some diffs that implement a function called TuplesAffected. It\nreturns the number of tuples the last command affected, or 0 if the last\ncommand was a SELECT. I added it to the PgConnection because it contains\nthe Exec method as well as the PQresult structure. Maybe farther down in\nthe inheritance there should be a function that executes a query and\nreturns the number of tuples affected or returned (according to whether it\nwas a select or not) or a -1 on error.\n\nPatches are attached...\n\n-Michael",
"msg_date": "Wed, 30 Jun 1999 14:27:46 -0300 (ADT)",
"msg_from": "Michael Richards <[email protected]>",
"msg_from_op": true,
"msg_subject": "Patches to get number of tuples affected"
},
{
"msg_contents": "Thus spake Michael Richards\n> Here are some diffs that implement a function called TuplesAffected. It\n> returns the number of tuples the last command affected, or 0 if the last\n> command was a SELECT. I added it to the PgConnection because it contains\n\nWhy not overload PGTuples() instead (assuming it doesn't already do this)?\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Wed, 30 Jun 1999 14:15:09 -0400 (EDT)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Patches to get number of tuples affected"
},
{
"msg_contents": "On Wed, 30 Jun 1999, D'Arcy J.M. Cain wrote:\n\n> Thus spake Michael Richards\n> > Here are some diffs that implement a function called TuplesAffected. It\n> > returns the number of tuples the last command affected, or 0 if the last\n> > command was a SELECT. I added it to the PgConnection because it contains\n> \n> Why not overload PGTuples() instead (assuming it doesn't already do this)?\n\nTuples returned tells you how many you can get using the getvalue series.\nIf you tried that with an update, it core dumps. I think the two are\nreally related, but fundamentally different.\n\n-Michael\n\n",
"msg_date": "Wed, 30 Jun 1999 15:36:00 -0300 (ADT)",
"msg_from": "Michael Richards <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] Patches to get number of tuples affected"
},
{
"msg_contents": "Thus spake Michael Richards\n> > Why not overload PGTuples() instead (assuming it doesn't already do this)?\n\nAs mentioned in another posting, I meant PQntuples().\n\n> Tuples returned tells you how many you can get using the getvalue series.\n> If you tried that with an update, it core dumps. I think the two are\n> really related, but fundamentally different.\n\nI'm just thinking that it's easy for PQntuples() to tell what it has\nto return and branch accordingly. It just makes it easier to remember\nthe function. No asking which get tuple count function works for update\nand which for select.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Wed, 30 Jun 1999 15:20:50 -0400 (EDT)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Patches to get number of tuples affected"
}
] |
[
{
"msg_contents": "Hmmm, leaving out the truncate would remove some of the hassle of what\nto do with segmented tables. As long as a new tuple goes into the\nbegining of the last dead area, this should work.\n\nSaying this, we would need some form of truncate, but perhaps this could\nbe done if vacuum is run manually, and not while running automatically?\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n\n-----Original Message-----\nFrom: The Hermit Hacker [mailto:[email protected]]\nSent: 30 June 1999 13:41\nTo: Bruce Momjian\nCc: Michael Richards; [email protected]\nSubject: Vaccum (Was: Re: [HACKERS] Hot Backup Ability)\n\n\nOn Wed, 30 Jun 1999, Bruce Momjian wrote:\n\n> > Would it be easy to come up with a scheme for the vacuum function\ndefrag a\n> > set number of pages and such, release its locks if there is another\n> > process blocked and waiting, then resume after that process is\nfinished?\n> \n> That is a very nice idea. We could just release and reaquire the\nlock,\n> knowing that if there is someone waiting, they would get the lock. \n> Maybe someone can comment on this?\n\nMy first thought is \"doesn't this still require the 'page-reusing'\nfunctionality to exist\"? Which virtually eliminates the problem...\n\nIf not, then why can't something be done where this is transparent\naltogther? Have some sort of mechanism that keeps track of \"dead\nspace\"...a trigger that says after X tuples have been deleted, do an\nautomatic vacuum of the database?\n\nThe automatic vacuum would be done in a way similar to Michael's\nsuggestion above...scan through for the first 'dead space', lock the\ntable\nfor a short period of time and \"move records up\". 
How many tuples could\nyou move in a very short period of time, such that it is virtually\ntransparent to end-users?\n\nAs a table gets larger and larger, a few 'dead tuples' aren't going to\nmake much of a different in performance, so make the threshold some\npercentage of the size of the table, so at it grows, the number of 'dead\ntuples' has to be larger...\n\nAnd leave out the truncate at the end...\n\nThe 'manual vacuum' would still need to be run periodically, for the\ntruncate and for stats...\n\nJust a thought...:)\n\nMarc G. Fournier ICQ#7615664 IRC Nick:\nScrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary:\nscrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Wed, 30 Jun 1999 15:02:11 +0100",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Vaccum (Was: Re: [HACKERS] Hot Backup Ability)"
},
{
"msg_contents": "> Hmmm, leaving out the truncate would remove some of the hassle of what\n> to do with segmented tables. As long as a new tuple goes into the\n> begining of the last dead area, this should work.\n\nI think the new code has this multi-segment truncate working. The fix\nwill be in 6.5.1.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 30 Jun 1999 23:44:00 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vaccum (Was: Re: [HACKERS] Hot Backup Ability)"
}
] |