[
{
"msg_contents": "Will the primary, foreign and check constraints be fully implemented in\nthe 6.4 release. As of 6.3.2 only primary keys in the table create are\nimplemented. The alter table statement lacks these.\n\nIs there someone that is implementing this and when.\n\nCheers\nDavid\n",
"msg_date": "Fri, 10 Jul 1998 08:18:59 +0200",
"msg_from": "David Maclean <[email protected]>",
"msg_from_op": true,
"msg_subject": "Constraints in PostgreSQL"
},
{
"msg_contents": "David Maclean wrote:\n> \n> Will the primary, foreign and check constraints be fully implemented in\n> the 6.4 release. As of 6.3.2 only primary keys in the table create are\n> implemented. The alter table statement lacks these.\n> \n> Is there someone that is implementing this and when.\n\nI have some hope that low-level locking (LLL) could be implemented\nin 6.4 and so FOREIGN should be postponed...\nI'll post more words about LLL & 6.4 in the next week -:)\n\nAs for PRIMARY & CHECK in ALTER TABLE - this would be nice\nto have them but no one does it, AFAIK.\n\nVadim\n",
"msg_date": "Fri, 10 Jul 1998 15:32:24 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Constraints in PostgreSQL"
}
]
[
{
"msg_contents": "Hi folks,\nI have written a simple db/dbm-emulation library for pgsql/libpq.\nIt would be good if people could try it out and send me some\nfeedback.\n\nAvailable for download (small archive, only 2.5 kB) at:\nhttp://www.is.kiruna.se/~goran/ldap/arkiv/\n\n\tEnjoy,\n\t\tGöran.\n",
"msg_date": "Sun, 12 Jul 1998 13:34:03 +0000",
"msg_from": "Goran Thyni <[email protected]>",
"msg_from_op": true,
"msg_subject": "db/dbm-emulation"
}
]
[
{
"msg_contents": "I was wrong.\n\natttypmod was passed as int16 to the clients. attypmod is now passed as\nint32. I have modified libpq to fix this. I think only odbc needs to\nbe changed for this. I know odbc is not maintained here, but is\nuploaded from somewhere else. The maintainer needs to change this. The\nother interfaces look OK.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sun, 12 Jul 1998 22:51:55 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "atttypmod now 32 bits, interface change"
},
{
"msg_contents": "Not that we have been sitting on our hands, but we have been waiting for the\nFE/BE protocol to stabilize before updating the ODBC driver to the 6.4\nspecs. Have we reached this point?\n\nBruce Momjian wrote:\n\n> I was wrong.\n>\n> atttypmod was passed as int16 to the clients. attypmod is now passed as\n> int32. I have modified libpq to fix this. I think only odbc needs to\n> be changed for this. I know odbc is not maintained here, but is\n> uploaded from somewhere else. The maintainer needs to change this. The\n> other interfaces look OK.\n>\n> --\n> Bruce Momjian | 830 Blythe Avenue\n> [email protected] | Drexel Hill, Pennsylvania 19026\n> + If your life is a hard drive, | (610) 353-9879(w)\n> + Christ can be your backup. | (610) 853-3000(h)\n\n\n\n",
"msg_date": "Mon, 13 Jul 1998 08:42:55 -0400",
"msg_from": "David Hartwig <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] atttypmod now 32 bits, interface change"
},
{
"msg_contents": "David Hartwig <[email protected]> writes:\n> Not that we have been sitting on our hands, but we have been waiting for the\n> FE/BE protocol to stabilize before updating the ODBC driver to the 6.4\n> specs. Have we reached this point?\n\nThe cancel changeover and this atttypmod width business were the only\nopen issues I know about. I'm prepared to declare the protocol frozen\nfor 6.4 ... are there any objections?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 Jul 1998 09:41:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] atttypmod now 32 bits, interface change "
},
{
"msg_contents": "> Not that we have been sitting on our hands, but we have been waiting for the\n> FE/BE protocol to stabilize before updating the ODBC driver to the 6.4\n> specs. Have we reached this point?\n\nGood point. I totally agree.\n\nI think we have finally stabalized the protocol, with the CANCEL\ncompleted last week by Tom Lane. As far as I know, the libpq and sgml\ndocs are updated, so you can use them to see the changes. If you need\ndetails, I have kept some of Tom Lane's postings.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Mon, 13 Jul 1998 09:58:24 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] atttypmod now 32 bits, interface change"
},
{
"msg_contents": "> Not that we have been sitting on our hands, but we have been waiting for the\n> FE/BE protocol to stabilize before updating the ODBC driver to the 6.4\n> specs. Have we reached this point?\n\nOf course, beta does not start until Sep 1, so it is possible to wait\nsome more to see of other things change before updating things, but\ncurrently, there are no open items I know about.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Mon, 13 Jul 1998 09:59:47 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] atttypmod now 32 bits, interface change"
},
{
"msg_contents": "> David Hartwig <[email protected]> writes:\n> > Not that we have been sitting on our hands, but we have been waiting for the\n> > FE/BE protocol to stabilize before updating the ODBC driver to the 6.4\n> > specs. Have we reached this point?\n> \n> The cancel changeover and this atttypmod width business were the only\n> open issues I know about. I'm prepared to declare the protocol frozen\n> for 6.4 ... are there any objections?\n\nI agree. We are done.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Mon, 13 Jul 1998 10:46:34 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] atttypmod now 32 bits, interface change"
},
{
"msg_contents": "> The cancel changeover and this atttypmod width business were the only\n> open issues I know about. I'm prepared to declare the protocol frozen\n> for 6.4 ... are there any objections?\n\nSounds good. Should we ask Tatsuo to do some mixed-endian tests, or is\nthat area completely unchanged from v6.3?\n\n - Tom\n",
"msg_date": "Mon, 13 Jul 1998 14:54:26 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] atttypmod now 32 bits, interface change"
},
{
"msg_contents": "> > The cancel changeover and this atttypmod width business were the only\n> > open issues I know about. I'm prepared to declare the protocol frozen\n> > for 6.4 ... are there any objections?\n> \n> Sounds good. Should we ask Tatsuo to do some mixed-endian tests, or is\n> that area completely unchanged from v6.3?\n> \n> - Tom\n> \n\nUnchanged, I think.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Mon, 13 Jul 1998 11:15:26 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] atttypmod now 32 bits, interface change"
},
{
"msg_contents": "\"Thomas G. Lockhart\" <[email protected]> writes:\n> Should we ask Tatsuo to do some mixed-endian tests, or is\n> that area completely unchanged from v6.3?\n\nI don't think I broke anything in that regard ... but more testing is\nalways a good thing. If Tatsuo-san can spare the time, it would be\nappreciated.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 Jul 1998 11:32:24 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] atttypmod now 32 bits, interface change "
},
{
"msg_contents": "At 11:32 AM 98.7.13 -0400, Tom Lane wrote:\n>\"Thomas G. Lockhart\" <[email protected]> writes:\n>> Should we ask Tatsuo to do some mixed-endian tests, or is\n>> that area completely unchanged from v6.3?\n>\n>I don't think I broke anything in that regard ... but more testing is\n>always a good thing. If Tatsuo-san can spare the time, it would be\n>appreciated.\n\nOk, I think I can start the testing next week.\nThis week I'm too busy because I have to finish writing an article\non PostgreSQL for a Japanese magazine!\nBy the way what are the visible changes of 6.4?\nI know now we can cancel a query. Could you tell me any other\nthing so that I could refer to them in the article?\n--\nTatsuo Ishii\[email protected]\n\n",
"msg_date": "Tue, 14 Jul 1998 22:01:11 +0900",
"msg_from": "[email protected] (Tatsuo Ishii)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] atttypmod now 32 bits, interface change"
},
{
"msg_contents": "At 11:32 AM 98.7.13 -0400, Tom Lane wrote:\n>\"Thomas G. Lockhart\" <[email protected]> writes:\n>> Should we ask Tatsuo to do some mixed-endian tests, or is\n>> that area completely unchanged from v6.3?\n>\n>I don't think I broke anything in that regard ... but more testing is\n>always a good thing. If Tatsuo-san can spare the time, it would be\n>appreciated.\n\n>Ok, I think I can start the testing next week.\n>This week I'm too busy because I have to finish writing an article\n>on PostgreSQL for a Japanese magazine!\n>By the way what are the visible changes of 6.4?\n>I know now we can cancel a query. Could you tell me any other\n>thing so that I could refer to them in the article?\n\n---------------------------------------------------------------------------\n\nNothing big that I can think of. Lots of cleanup/improvements to\nexisting areas. Vadim has some big items (as usual), but I don't think\nwe want to mention them publically yet.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 14 Jul 1998 09:42:47 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] atttypmod now 32 bits, interface change"
},
{
"msg_contents": "> >This week I'm too busy because I have to finish writing an article\n> >on PostgreSQL for a Japanese magazine!\n> >By the way what are the visible changes of 6.4?\n> >I know now we can cancel a query. Could you tell me any other\n> >thing so that I could refer to them in the article?\n> Nothing big that I can think of. Lots of cleanup/improvements to\n> existing areas.\n\nNow Bruce! The automatic type coersion features are a pretty big change,\nespecially for the casual user; the columns in queries get matched up\nand converted without any explicit work from the user. I can give Tatsuo\nsome examples if he would like. I'll bet there are a few other changes\nwhich would give readers a good idea about the ongoing support and\nimprovements to Postgres...\n\nSpeaking of docs, we'll have SQL and utility commands in an\nhtml/hardcopy reference manual. Hmm, may not be as exciting for Japanese\nreaders, but... :)\n\nI've been updating the old release notes in the sgml sources, and have\nthat completed. Perhaps we can start the v6.4 release notes now? With\nthe sgml sources we can have more summary verbiage to help users get\nintroduced to new features, and then roll it out into a text file if\nnecessary.\n\n - Tom\n",
"msg_date": "Tue, 14 Jul 1998 15:09:29 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] atttypmod now 32 bits, interface change"
},
{
"msg_contents": "> > >This week I'm too busy because I have to finish writing an article\n> > >on PostgreSQL for a Japanese magazine!\n> > >By the way what are the visible changes of 6.4?\n> > >I know now we can cancel a query. Could you tell me any other\n> > >thing so that I could refer to them in the article?\n> > Nothing big that I can think of. Lots of cleanup/improvements to\n> > existing areas.\n> \n> Now Bruce! The automatic type coersion features are a pretty big change,\n> especially for the casual user; the columns in queries get matched up\n> and converted without any explicit work from the user. I can give Tatsuo\n> some examples if he would like. I'll bet there are a few other changes\n> which would give readers a good idea about the ongoing support and\n> improvements to Postgres...\n> \n> Speaking of docs, we'll have SQL and utility commands in an\n> html/hardcopy reference manual. Hmm, may not be as exciting for Japanese\n> readers, but... :)\n> \n> I've been updating the old release notes in the sgml sources, and have\n> that completed. Perhaps we can start the v6.4 release notes now? With\n> the sgml sources we can have more summary verbiage to help users get\n> introduced to new features, and then roll it out into a text file if\n> necessary.\n\nI was afraid I was going to insult someone by saying what I did. \n\nI MEANT that there are no features being added that a non-postgresql\nuser would be interested in. subselects was one feature that\nnon-postgresql users would understand. Most of our stuff now is\ncleanup/extension of 6.3 features, many of which would be uninteresting\nto potential users.\n\nI suggest we focus on telling them about 6.3, which is ready NOW, and\nhas many nice features.\n\nIn fact, since we started two years ago, every release has gotten much\nbetter than the previous, so we are now at a point where we are adding\n'nifty' features like 'cancel' and atttypmod and stuff like that.\n\nThe days where every release fixed server crashes, or added a feature\nthat users were 'screaming for' may be a thing of the past. We are\nnearing a maturity stage, where we can focus on performance,\ndocumenation, features, and cleanup. The days when we have a 'major'\nfeature may be fewer, because we have added 'most' of the major features\npeople have been asking for.\n\nOur user base is growing, and the number of sophisticated developers is\ngrowing too, so we are getting major patches to improve lots of existing\nfunctionality.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 14 Jul 1998 11:37:17 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] atttypmod now 32 bits, interface change]"
},
{
"msg_contents": "At 18:37 +0300 on 14/7/98, Bruce Momjian wrote:\n\n\n> The days where every release fixed server crashes, or added a feature\n> that users were 'screaming for' may be a thing of the past. We are\n> nearing a maturity stage, where we can focus on performance,\n> documenation, features, and cleanup. The days when we have a 'major'\n> feature may be fewer, because we have added 'most' of the major features\n> people have been asking for.\n\nExcept row-level locking, referential integrity and PL/SQL...\n\nJust an example of major features yet to be implemented (speaking from the\npoint of view of a user who doesn't know what the plans are for 6.4, of\ncourse).\n\nHerouth\n\n(PS. This thread doesn't really have anything to do with the interfaces\nlist, does it? I redirected the crosspost to \"general\".)\n\n--\nHerouth Maoz, Internet developer.\nOpen University of Israel - Telem project\nhttp://telem.openu.ac.il/~herutma\n\n\n",
"msg_date": "Wed, 15 Jul 1998 13:04:26 +0300",
"msg_from": "Herouth Maoz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [INTERFACES][HACKERS] atttypmod now 32 bits, interface change"
},
{
"msg_contents": "> At 18:37 +0300 on 14/7/98, Bruce Momjian wrote:\n> \n> \n> > The days where every release fixed server crashes, or added a feature\n> > that users were 'screaming for' may be a thing of the past. We are\n> > nearing a maturity stage, where we can focus on performance,\n> > documenation, features, and cleanup. The days when we have a 'major'\n> > feature may be fewer, because we have added 'most' of the major features\n> > people have been asking for.\n> \n> Except row-level locking, referential integrity and PL/SQL...\n\nI said the days would be fewer, not gone.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Wed, 15 Jul 1998 10:42:00 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [INTERFACES][HACKERS] atttypmod now 32 bits, interface change"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> \n> I was afraid I was going to insult someone by saying what I did.\n> \n> I MEANT that there are no features being added that a non-postgresql\n> user would be interested in. subselects was one feature that\n> non-postgresql users would understand. Most of our stuff now is\n> cleanup/extension of 6.3 features, many of which would be uninteresting\n> to potential users.\n\nNot requiring the column to sort on in target list ia also quite\nimportant.\n\nAs are the (still elementary) constraints, still elementary becuse \nthere is no way to use functions or \"is null\" in check constraint, \nand constraints can be used only when defining tables, not in \n\"alter table\" construct.\n \n> The days where every release fixed server crashes, or added a feature\n> that users were 'screaming for' may be a thing of the past.\n\nIs anyone working on fixing the exploding optimisations for many OR-s,\nat least the canonic case used by access?\n\nMy impression is that this has fallen somewhere between \ninsightdist and Vadim.\n\n> We are nearing a maturity stage, where we can focus on performance,\n> documenation, features, and cleanup. The days when we have a 'major'\n> feature may be fewer, because we have added 'most' of the major features\n> people have been asking for.\n\nExpect them asking more soon ;) \n\nI'm sure that soon being just basic ANSI SQL compliant is not enough; \npeople will want (in no particular order ;):\n * ANSI CLI,\n * updatable cursors,\n * foreign key constraints, \n * distributed databases,\n * row level locking,\n * better inheritance,\n * domains, \n * isolation levels,\n * PL/SQL,\n * better optimisation for special cases, \n * temporary tables (both global and session level),\n * more SQL3 constructs,\n * unlisten command, maybe an argument to listen command,\n * better support for installing your own access methods,\n * separating the methods typinput/typoutput (native binary)\n and typreceive/typsend (wire binary), they are in pg_type\n * implementing a new fe/be protocol that is easier to implement \n (does not mix zero terminated, and count-prefixed chunks),\n preferrably modelled after X-Window protocol.\n * getting rid of the 8k limitations, both in fe/be protocol and\n in disk storage.\n\nI know that some of these things are being worked on, but I've lost \ntrack which are expected for 6.4, which for 6.5 and which I should \nnot expect before 8.0 ;)\n\n> Our user base is growing, and the number of sophisticated developers is\n> growing too, so we are getting major patches to improve lots of existing\n> functionality.\n\nYep. Great future is awaiting PostgreSQL.\n\nI'm really looking forward to a time when I can find some time to \ncontribute some actual code.\n\nHannu\n",
"msg_date": "Wed, 15 Jul 1998 22:52:25 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [INTERFACES] Re: [HACKERS] changes in 6.4"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> > \n> > \n> > I was afraid I was going to insult someone by saying what I did.\n> > \n> > I MEANT that there are no features being added that a non-postgresql\n> > user would be interested in. subselects was one feature that\n> > non-postgresql users would understand. Most of our stuff now is\n> > cleanup/extension of 6.3 features, many of which would be uninteresting\n> > to potential users.\n> \n> Not requiring the column to sort on in target list ia also quite\n> important.\n> \n> As are the (still elementary) constraints, still elementary becuse \n> there is no way to use functions or \"is null\" in check constraint, \n> and constraints can be used only when defining tables, not in \n> \"alter table\" construct.\n> \n> > The days where every release fixed server crashes, or added a feature\n> > that users were 'screaming for' may be a thing of the past.\n> \n> Is anyone working on fixing the exploding optimisations for many OR-s,\n> at least the canonic case used by access?\n> \n> My impression is that this has fallen somewhere between \n> insightdist and Vadim.\n> \n> > We are nearing a maturity stage, where we can focus on performance,\n> > documenation, features, and cleanup. The days when we have a 'major'\n> > feature may be fewer, because we have added 'most' of the major features\n> > people have been asking for.\n> \n> Expect them asking more soon ;) \n> \n> I'm sure that soon being just basic ANSI SQL compliant is not enough; \n> people will want (in no particular order ;):\n> * ANSI CLI,\n> * updatable cursors,\n> * foreign key constraints, \n> * distributed databases,\n> * row level locking,\n> * better inheritance,\n> * domains, \n> * isolation levels,\n> * PL/SQL,\n> * better optimisation for special cases, \n> * temporary tables (both global and session level),\n> * more SQL3 constructs,\n> * unlisten command, maybe an argument to listen command,\n> * better support for installing your own access methods,\n> * separating the methods typinput/typoutput (native binary)\n> and typreceive/typsend (wire binary), they are in pg_type\n> * implementing a new fe/be protocol that is easier to implement \n> (does not mix zero terminated, and count-prefixed chunks),\n> preferrably modelled after X-Window protocol.\n> * getting rid of the 8k limitations, both in fe/be protocol and\n> in disk storage.\n> \n> I know that some of these things are being worked on, but I've lost \n> track which are expected for 6.4, which for 6.5 and which I should \n> not expect before 8.0 ;)\n> \n> > Our user base is growing, and the number of sophisticated developers is\n> > growing too, so we are getting major patches to improve lots of existing\n> > functionality.\n> \n> Yep. Great future is awaiting PostgreSQL.\n> \n> I'm really looking forward to a time when I can find some time to \n> contribute some actual code.\n> \n> Hannu\n> \n\nHard to argue with this. There are more MAJOR things that I had\nforgotten.\n\nStill, I will say that the things we are working on now are more\n'extras', than the stuff we were working on a year ago, which were\n'usablility' issues.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Wed, 15 Jul 1998 16:23:47 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [INTERFACES] Re: [HACKERS] changes in 6.4"
},
{
"msg_contents": "Hannu Krosing wrote:\n> \n> Not requiring the column to sort on in target list ia also quite\n> important.\n\nI'm not sure but isn't this already in 6.4-current ?\n\n> \n> As are the (still elementary) constraints, still elementary becuse\n> there is no way to use functions or \"is null\" in check constraint,\n\nispas=> create table t (x int, check (x is null or x = 5));\nCREATE\nispas=> insert into t values (1);\nERROR: ExecAppend: rejected due to CHECK constraint $1\nispas=> insert into t values (null);\nINSERT 168212 1\nispas=> insert into t values (5);\nINSERT 168213 1\n\nAnd I'm sure that functions are supported too. This is 6.3.2\n\n> and constraints can be used only when defining tables, not in\n> \"alter table\" construct.\n\nI hadn't time to do this when implementing and have no plans\nto do this. In \"near\" future :)\n\n> \n> > The days where every release fixed server crashes, or added a feature\n> > that users were 'screaming for' may be a thing of the past.\n> \n> Is anyone working on fixing the exploding optimisations for many OR-s,\n> at least the canonic case used by access?\n> \n> My impression is that this has fallen somewhere between\n> insightdist and Vadim.\n\nI'm not working...\n\nVadim\n",
"msg_date": "Thu, 16 Jul 1998 04:25:44 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [INTERFACES] Re: [HACKERS] changes in 6.4"
},
{
"msg_contents": "> > > The days where every release fixed server crashes, or added a feature\n> > > that users were 'screaming for' may be a thing of the past.\n> > \n> > Is anyone working on fixing the exploding optimisations for many OR-s,\n> > at least the canonic case used by access?\n> > \n> > My impression is that this has fallen somewhere between\n> > insightdist and Vadim.\n> \n> I'm not working...\n> \n\nNot sure anyone has an idea how to fix this.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Wed, 15 Jul 1998 16:39:39 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [INTERFACES] Re: [HACKERS] changes in 6.4"
},
{
"msg_contents": "\n\nHannu Krosing wrote:\n\n> Bruce Momjian wrote:\n> >\n> >\n> > I was afraid I was going to insult someone by saying what I did.\n> >\n> > I MEANT that there are no features being added that a non-postgresql\n> > user would be interested in. subselects was one feature that\n> > non-postgresql users would understand. Most of our stuff now is\n> > cleanup/extension of 6.3 features, many of which would be uninteresting\n> > to potential users.\n>\n> Not requiring the column to sort on in target list ia also quite\n> important.\n>\n\nAlong these lines - I heard someone grumbling a while back about not being\nable to use a function in the ORDER/GROUP BY clauses. (i.e. SELECT bar FROM\nfoo ORDER BY LCASE(alpha);) I believe it is on the TODO list. Bruce, I\nwill claim this item unless someone else already has.\n\n",
"msg_date": "Wed, 15 Jul 1998 16:40:56 -0400",
"msg_from": "David Hartwig <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [INTERFACES] Re: [HACKERS] changes in 6.4"
},
{
"msg_contents": "> Along these lines - I heard someone grumbling a while back about not being\n> able to use a function in the ORDER/GROUP BY clauses. (i.e. SELECT bar FROM\n> foo ORDER BY LCASE(alpha);) I believe it is on the TODO list. Bruce, I\n> will claim this item unless someone else already has.\n> \n> \n\nDone.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Wed, 15 Jul 1998 17:02:40 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [INTERFACES] Re: [HACKERS] changes in 6.4"
},
{
"msg_contents": "\nHannu Krosing wrote:\n\n> > The days where every release fixed server crashes, or added a feature\n> > that users were 'screaming for' may be a thing of the past.\n>\n> Is anyone working on fixing the exploding optimisations for many OR-s,\n> at least the canonic case used by access?\n>\n> My impression is that this has fallen somewhere between\n> insightdist and Vadim.\n\nThis is really big for the ODBCers. (And I suspect for JDBCers too.) Many\ndesktop libraries and end-user tools depend on this \"record set\" strategy to\noperate effectively.\n\nI have put together a workable hack that runs just before cnfify(). The\noption is activated through the SET command. Once activated, it identifies\nqueries with this particular multi-OR pattern generated by these RECORD SET\nstrategies. Qualified query trees are rewritten as multiple UNIONs. (One\nfor each OR grouping).\n\nThe results are profound. Queries that used to scan tables because of the\nORs, now make use of any indexes. Thus, the size of the table has virtually\nno effect on performance. Furthermore, queries that used to crash the\nbackend, now run in under a second.\n\nCurrently the down sides are:\n 1. If there is no usable index, performance is significantly worse. The\npatch does not check to make sure that there is a usable index. I could use\nsome pointers on this.\n\n 2. Small tables are actually a bit slower than without the patch.\n\n 3. Not very elegant. I am looking for a more generalized solution.\nI have lots of ideas, but I would need to know the backend much better before\nattempting any of them. My favorite idea is before cnfify(), to factor the\nOR terms and pull out the constants into a virtual (temporary) table spaces.\nThen rewrite the query as a join. The optimizer will (should) treat the new\nquery accordingly. This assumes that an efficient factoring algorithm exists\nand that temporary tables can exist in the heap.\n\nIllustration:\nSELECT ... FROM tab WHERE\n(var1 = const1 AND var2 = const2) OR\n(var1 = const3 AND var2 = const4) OR\n(var1 = const5 AND var2 = const6)\n\nSELECT ... FROM tab, tmp WHERE\n(var1 = var_x AND var2 = var_y)\n\ntmp\nvar_x | var_y\n--------------\nconst1|const2\nconst3|const4\nconst5|const6\n\nComments?\n\n",
"msg_date": "Wed, 15 Jul 1998 18:16:02 -0400",
"msg_from": "David Hartwig <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [INTERFACES] Re: [HACKERS] changes in 6.4"
},
{
"msg_contents": "> The results are profound. Queries that used to scan tables because of the\n> ORs, now make use of any indexes. Thus, the size of the table has virtually\n> no effect on performance. Furthermore, queries that used to crash the\n> backend, now run in under a second.\n> \n> Currently the down sides are:\n> 1. If there is no usable index, performance is significantly worse. The\n> patch does not check to make sure that there is a usable index. I could use\n> some pointers on this.\n> \n> 2. Small tables are actually a bit slower than without the patch.\n> \n> 3. Not very elegant. I am looking for a more generalized solution.\n> I have lots of ideas, but I would need to know the backend much better before\n> attempting any of them. My favorite idea is before cnfify(), to factor the\n> OR terms and pull out the constants into a virtual (temporary) table spaces.\n> Then rewrite the query as a join. The optimizer will (should) treat the new\n> query accordingly. This assumes that an efficient factoring algorithm exists\n> and that temporary tables can exist in the heap.\n\nOK, I have an idea. Just today, we allow:\n\n\tselect *\n\tfrom tab1\n\twhere val in (\n\t\tselect x from tab2\n\t\tunion\n\t\tselect y from tab3\n\t)\n\nHow about if instead of doing:\n\n\tselect * from tab1 where val = 3\n\tunion\n\tselect * from tab1 where val = 4\n\t...\n\nyou change it to:\n\t\n\tselect * from tab1 where val in (\n\t\tselect 3\n\t\tunion\n\t\tselect 4\n\t)\n\nThis may be a big win. You aren't running the same query over and over\nagain, with the same joins, and just a different constant.\n\nLet me know.\n\nIf it fails for some reason, it is possible my subselect union code has\na bug, so let me know.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Wed, 15 Jul 1998 19:39:05 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [INTERFACES] Re: [HACKERS] changes in 6.4"
},
{
"msg_contents": "> > The results are profound. Queries that used to scan tables because of the\n> OK, I have an idea. Just today, we allow:\n> \n> \tselect *\n> \tfrom tab1\n> \twhere val in (\n> \t\tselect x from tab2\n> \t\tunion\n> \t\tselect y from tab3\n> \t)\n> \n> How about if instead of doing:\n> \n> \tselect * from tab1 where val = 3\n> \tunion\n> \tselect * from tab1 where val = 4\n> \t...\n> \n> you change it to:\n> \t\n> \tselect * from tab1 where val in (\n> \t\tselect 3\n> \t\tunion\n> \t\tselect 4\n> \t)\n\nOK, I just ran some test, and it does not look good:\n\n---------------------------------------------------------------------------\n\nson_db=> explain select mmatter from matter where mmatter = 'A01-001';\nNOTICE: QUERY PLAN:\n\nIndex Scan using i_matt2 on matter (cost=2.05 size=1 width=12)\n\nEXPLAIN\n\nson_db=> explain select mmatter from matter where mmatter in (select 'A01-001');\nNOTICE: QUERY PLAN:\n\nSeq Scan on matter (cost=512.20 size=1001 width=12)\n SubPlan\n -> Result (cost=0.00 size=0 width=0)\n\nEXPLAIN\n\n---------------------------------------------------------------------------\n\nTurns out indexes are not used in outer queries of subselects. Not sure\nwhy. Vadim?\n\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Wed, 15 Jul 1998 21:22:33 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [INTERFACES] Re: [HACKERS] changes in 6.4"
},
{
"msg_contents": "On Wed, 15 Jul 1998, Bruce Momjian wrote:\n\n> > > > The days where every release fixed server crashes, or added a feature\n> > > > that users were 'screaming for' may be a thing of the past.\n> > > \n> > > Is anyone working on fixing the exploding optimisations for many OR-s,\n> > > at least the canonic case used by access?\n> > > \n> > > My impression is that this has fallen somewhere between\n> > > insightdist and Vadim.\n> > \n> > I'm not working...\n> > \n> \n> Not sure anyone has an idea how to fix this.\n\nWhat? How to get Vadim back to work? ;)\n\nMaarten\n\n_____________________________________________________________________________\n| TU Delft, The Netherlands, Faculty of Information Technology and Systems |\n| Department of Electrical Engineering |\n| Computer Architecture and Digital Technique section |\n| [email protected] |\n-----------------------------------------------------------------------------\n\n",
"msg_date": "Thu, 16 Jul 1998 09:45:18 +0200 (MET DST)",
"msg_from": "Maarten Boekhold <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [INTERFACES] Re: [HACKERS] changes in 6.4"
},
{
"msg_contents": "Vadim Mikheev wrote:\n> \n> Hannu Krosing wrote:\n> >\n> > Not requiring the column to sort on in target list ia also quite\n> > important.\n> \n> I'm not sure but isn't this already in 6.4-current ?\n> \n> >\n> > As are the (still elementary) constraints, still elementary becuse\n> > there is no way to use functions or \"is null\" in check constraint,\n> \n> ispas=> create table t (x int, check (x is null or x = 5));\n> CREATE\n> ispas=> insert into t values (1);\n> ERROR: ExecAppend: rejected due to CHECK constraint $1\n> ispas=> insert into t values (null);\n> INSERT 168212 1\n> ispas=> insert into t values (5);\n> INSERT 168213 1\n> \n> And I'm sure that functions are supported too. This is 6.3.2\n\nSorry, i tried the wrong syntax (without IS ) ;(\n\nbut functions still dont work:\n\nhannu=> create table test1 (a text, b text,\nhannu-> check (trim(a) <> '' or trim(b) <> ''));\nERROR: parser: parse error at or near \"trim\"\n\nIf I use a non-existing function, I get a different answer\n\nhannu=> create table test1 (a text, b text,\nhannu-> check (strip(a) <> '' or strip(b) <> ''));\nERROR: function strip(text) does not exist\n\nSo it cant't be just \"parser\" error\n\n> > and constraints can be used only when defining tables, not in\n> > \"alter table\" construct.\n> \n> I hadn't time to do this when implementing and have no plans\n> to do this. In \"near\" future :)\n> \n> >\n> > > The days where every release fixed server crashes, or added a feature\n> > > that users were 'screaming for' may be a thing of the past.\n> >\n> > Is anyone working on fixing the exploding optimisations for many OR-s,\n> > at least the canonic case used by access?\n> >\n> > My impression is that this has fallen somewhere between\n> > insightdist and Vadim.\n> \n> I'm not working...\n\nAre you after some general solution, or are you first implementing \nthe 'rewrite to union' way ?\n\nHannu\n",
"msg_date": "Thu, 16 Jul 1998 11:30:11 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [INTERFACES] Re: [HACKERS] changes in 6.4"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > The results are profound. Queries that used to scan tables because of the\n> \n> How about if instead of doing:\n> \n> select * from tab1 where val = 3\n> union\n> select * from tab1 where val = 4\n> ...\n> \n> you change it to:\n> \n> select * from tab1 where val in (\n> select 3\n> union\n> select 4\n> )\n> \n\nthe explosion happens for ORs of multiple ANDs that get rewritten to:\n\nselect * from tabl wehere val1=1 and val2=1 and val3=1\nunion\nselect * from tabl wehere val1=1 and val2=1 and val3=2\nunion\n...\n\n\nAnd there is no way of doing (at least presently):\n\nselect * from table where (val1,val2,val3) in (select 1,1,1 union select\n1,1,2);\n\nHannu\n",
"msg_date": "Thu, 16 Jul 1998 11:41:47 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [INTERFACES] Re: [HACKERS] changes in 6.4"
},
{
"msg_contents": "Hannu Krosing wrote:\n> \n> but functions still dont work:\n> \n> hannu=> create table test1 (a text, b text,\n> hannu-> check (trim(a) <> '' or trim(b) <> ''));\n> ERROR: parser: parse error at or near \"trim\"\n\nTRIM is keyword, not a function...\nWe have to copy some lines in gram.y\n\nReal functions are working...\n\nVadim\n",
"msg_date": "Thu, 16 Jul 1998 16:51:35 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [INTERFACES] Re: [HACKERS] changes in 6.4"
},
{
"msg_contents": "hi, guys!\n\n\nIt seems to me that two or three weeks ago there were some messages about \nporting libpq for Win32 platform. I think it is very imporant feature and\nit should be mentioned with no doubts in all reviews about PostgreSQL \n'cause it moved PostgreSQL far beyond any other free DB engeens in the \nworld of Windowz\n\nAl.\n\n",
"msg_date": "Thu, 16 Jul 1998 13:35:30 +0300 (IDT)",
"msg_from": "Aleksey Dashevsky <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [INTERFACES] Re: [HACKERS] changes in 6.4"
},
{
"msg_contents": "Vadim Mikheev wrote:\n> \n> Hannu Krosing wrote:\n> >\n> > but functions still dont work:\n> >\n> > hannu=> create table test1 (a text, b text,\n> > hannu-> check (trim(a) <> '' or trim(b) <> ''));\n> > ERROR: parser: parse error at or near \"trim\"\n> \n> TRIM is keyword, not a function...\n> We have to copy some lines in gram.y\n\nWow! is this standard ?\n\nI found the function trim by doing 'select oprname from pg_oper'\nand tested it as follows:\n\nhannu=> select trim(' x ');\nbtrim\n-----\nx \n(1 row)\n\nwhy is the column called btrim ? \nsome rewrite magic in parser ?\n\nIf it must stay a keyword, then perhaps we should remove the proc ?\n\n> Real functions are working...\n\nyep! Thanks:\n\ncreate table test2(a text,b text, check (btrim(a) <> '' or btrim(b) <>\n''));\n\ndoes work ;)\n\nHannu\n",
"msg_date": "Thu, 16 Jul 1998 13:56:40 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [INTERFACES] Re: [HACKERS] changes in 6.4"
},
{
"msg_contents": "> > TRIM is keyword, not a function...\n> > We have to copy some lines in gram.y\n> Wow! is this standard ?\n> I found the function trim by doing 'select oprname from pg_oper'\n> and tested it as follows:\n> \n> hannu=> select trim(' x ');\n> btrim\n> -----\n> x\n> (1 row)\n> why is the column called btrim ?\n> some rewrite magic in parser ?\n> If it must stay a keyword, then perhaps we should remove the proc ?\n\nUh, yes, I think you are right. Here's why:\n\nThe SQL92 syntax for the trim() function is as follows:\n\nTRIM([LEADING|TRAILING|BOTH] [char FROM] string)\n\nThis syntax is _not_ the clean \"function(arg1,arg2,...)\" syntax that the\nparser could handle without change, so I had to make TRIM a keyword in\nthe parser and explicitly decode the possible argument phrases.\n\nTo implement all possibilities, I transform the function in the parser\nto the functions btrim(), rtrim(), and ltrim() implemented earlier by\nEdmund Mergl as the \"Oracle compatibility functions\".\n\nI'll add TRIM() and the other goofy pseudo-functions to the CHECK\nsyntax, and take the trim(arg1) declaration out of pg_proc since it can\nnever get executed. \n\nOh, btw we allow trimming strings from strings, not just trimming chars\nfrom strings :)\n\n - Tom\n",
"msg_date": "Thu, 16 Jul 1998 13:36:01 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [INTERFACES] Re: [HACKERS] changes in 6.4"
},
{
"msg_contents": "> And there is no way of doing (at least presently):\n> \n> select * from table where (val1,val2,val3)\n> in (select 1,1,1 union select 1,1,2);\n\nI'll look at that...\n\n - Tom\n",
"msg_date": "Thu, 16 Jul 1998 13:49:27 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [INTERFACES] Re: [HACKERS] changes in 6.4"
},
{
"msg_contents": "> hi, guys!\n> \n> \n> It seems to me that two or three weeks ago there were some messages about \n> porting libpq for Win32 platform. I think it is very imporant feature and\n> it should be mentioned with no doubts in all reviews about PostgreSQL \n> 'cause it moved PostgreSQL far beyond any other free DB engeens in the \n\nAlready done in the current snapshot on ftp.postgresql.org.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Thu, 16 Jul 1998 11:33:35 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [INTERFACES] Re: [HACKERS] changes in 6.4"
},
{
"msg_contents": "On Thu, 16 Jul 1998, Aleksey Dashevsky wrote:\n\n> It seems to me that two or three weeks ago there were some messages about \n> porting libpq for Win32 platform. I think it is very imporant feature and\n> it should be mentioned with no doubts in all reviews about PostgreSQL \n> 'cause it moved PostgreSQL far beyond any other free DB engeens in the \n> world of Windowz\n\nI'd thought that the ODBC driver would have more of an impact with Win32\nthan porting libpq, especially with existing applications.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.retep.org.uk\n************ Someday I may rebuild this signature completely ;-) ************\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n",
"msg_date": "Thu, 16 Jul 1998 18:27:03 +0100 (BST)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [INTERFACES] Re: [HACKERS] changes in 6.4"
},
{
"msg_contents": "On Wed, 15 Jul 1998, David Hartwig wrote:\n\n> This is really big for the ODBCers. (And I suspect for JDBCers too.) Many\n> desktop libraries and end-user tools depend on this \"record set\" strategy to\n> operate effectively.\n\nAlthough I haven't seen what they produce, it is possible that JBuilder\nand others do have this affect with JDBC.\n\nHowever, not all JDBC applications have this problem. Infact the majority\nI've seen only produce much simpler queries.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.retep.org.uk\n************ Someday I may rebuild this signature completely ;-) ************\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n",
"msg_date": "Thu, 16 Jul 1998 18:30:25 +0100 (BST)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [INTERFACES] Re: [HACKERS] changes in 6.4"
},
{
"msg_contents": "> > > hannu-> check (trim(a) <> '' or trim(b) <> ''));\n> > > ERROR: parser: parse error at or near \"trim\"\n> > \n> > TRIM is keyword, not a function...\n> > We have to copy some lines in gram.y\n\nI think that having trim as a keyword is a problem. The primary virtue of\npostgres is that everything is either a function or a type and as such is\ndefinable by the user and resolved at runtime. Making a keyword out of a\nfunction spoils that capability.\n\n-dg\n\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n - If simplicity worked, the world would be overrun with insects. -\n",
"msg_date": "Thu, 16 Jul 1998 11:29:26 -0700 (PDT)",
"msg_from": "[email protected] (David Gould)",
"msg_from_op": false,
"msg_subject": "Re: [INTERFACES] Re: [HACKERS] changes in 6.4"
},
{
"msg_contents": "> > > > hannu-> check (trim(a) <> '' or trim(b) <> ''));\n> > > > ERROR: parser: parse error at or near \"trim\"\n> > > \n> > > TRIM is keyword, not a function...\n> > > We have to copy some lines in gram.y\n> \n> I think that having trim as a keyword is a problem. The primary virtue of\n> postgres is that everything is either a function or a type and as such is\n> definable by the user and resolved at runtime. Making a keyword out of a\n> function spoils that capability.\n\nProblem was that SQL standard syntax (or Oracle) did not allow it to be\na function.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Thu, 16 Jul 1998 14:37:17 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [INTERFACES] Re: [HACKERS] changes in 6.4"
},
{
"msg_contents": "Thomas G. Lockhart wrote:\n> \n> > And there is no way of doing (at least presently):\n> >\n> > select * from table where (val1,val2,val3)\n> > in (select 1,1,1 union select 1,1,2);\n> \n> I'll look at that...\n\nCould it be a good idea to have the syntax (at least for constants),\nchanged to (or at least allowed ;) to the following:\n\nselect * from table\n where (val1,val2,val3)\n in ( (1,1,3), (1,1,2), (1,1,1) );\n\nWhich brings us to another issue: \n\nShould (val1,val2,val3) be just some construct that gets rewritten to \n\"something else\" in parser, or should it create an instance of \nanonymus row type ?\n\nAllowing anonymus row type creation on the fly would allow us many nice \nthings, for example a way to create new types of aggregate functions,\nlike \nFOR_MAX((price,date)), so that we could do the following in only one\npass\n\nSELECT\n FOR_MAX((price,sales_datetime)) as last_price, \n MAX(sales_datetime) as last_sale,\n WEEK(sales_datetime) week_nr\nGROUP BY\n week_nr\n;\n\nThis would get the prices and dates of each weeks last sale, and is \nmuch hairier to do using just standard sql.\n\nHannu\n",
"msg_date": "Fri, 17 Jul 1998 10:18:03 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [INTERFACES] Re: [HACKERS] changes in 6.4"
},
{
"msg_contents": ">At 11:32 AM 98.7.13 -0400, Tom Lane wrote:\n>>\"Thomas G. Lockhart\" <[email protected]> writes:\n>>> Should we ask Tatsuo to do some mixed-endian tests, or is\n>>> that area completely unchanged from v6.3?\n>>\n>>I don't think I broke anything in that regard ... but more testing is\n>>always a good thing. If Tatsuo-san can spare the time, it would be\n>>appreciated.\n>\n>Ok, I think I can start the testing next week.\n\nI did some cross-platform testing today against 7/18 snapshot. The\nplatforms tested are:\n\n1. Sparc/Solaris 2.6\n2. PowerPC/Linux\n3. x86/FreeBSD\n4. x86/Linux\n\nThey workd fine!\n\nP.S.\nI noticed that 6.4 client did not talk to 6.3.2 server.\n\n[srapc451.sra.co.jp]t-ishii{157}\n\n\n\nConnection to database 'test' failed.\nUnsupported frontend protocol.[srapc451.sra.co.jp]t-ishii{158} \n\nI thought that we have kept the \"backward compatibility\" since we\nintroduced \"protocol version\" in libpq?\n\n",
"msg_date": "Wed, 22 Jul 1998 14:22:31 +0900",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] atttypmod now 32 bits, interface change "
},
{
"msg_contents": "> >At 11:32 AM 98.7.13 -0400, Tom Lane wrote:\n> >>\"Thomas G. Lockhart\" <[email protected]> writes:\n> >>> Should we ask Tatsuo to do some mixed-endian tests, or is\n> >>> that area completely unchanged from v6.3?\n> >>\n> >>I don't think I broke anything in that regard ... but more testing is\n> >>always a good thing. If Tatsuo-san can spare the time, it would be\n> >>appreciated.\n> >\n> >Ok, I think I can start the testing next week.\n> \n> I did some cross-platform testing today against 7/18 snapshot. The\n> platforms tested are:\n> \n> 1. Sparc/Solaris 2.6\n> 2. PowerPC/Linux\n> 3. x86/FreeBSD\n> 4. x86/Linux\n> \n> They workd fine!\n> \n> P.S.\n> I noticed that 6.4 client did not talk to 6.3.2 server.\n> \n> [srapc451.sra.co.jp]t-ishii{157}\n> \n> \n> \n> Connection to database 'test' failed.\n> Unsupported frontend protocol.[srapc451.sra.co.jp]t-ishii{158} \n> \n> I thought that we have kept the \"backward compatibility\" since we\n> introduced \"protocol version\" in libpq?\n\nMight be my atttypmod changes. I did not make those version-sensitive. \nI will do that now.\n\nHowever, the protocol version number thing looks like something more\nfundamental.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Wed, 22 Jul 1998 09:58:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] atttypmod now 32 bits, interface change"
},
{
"msg_contents": "[email protected] writes:\n>> I noticed that 6.4 client did not talk to 6.3.2 server.\n>> Connection to database 'test' failed.\n>> Unsupported frontend protocol.\n>> \n>> I thought that we have kept the \"backward compatibility\" since we\n>> introduced \"protocol version\" in libpq?\n\nBackwards compatibility yes: a 6.4 server should be able to talk to\nan old client. You're asking about cross-version compatibility in the\nother direction, which is something we don't have. The connection\nprotocol is designed to let the server accommodate to the client, not\nvice versa --- the client tells the server its version, but not vice\nversa. I suppose the client might check for that particular error\nmessage after a connect failure and then try again with a lower version\nnumber ... but that's pretty messy.\n\nOn a practical level, the new libpq is not capable of talking to an old\nserver anyway --- some of the cleanups I made are critically dependent\non new protocol features, such as the 'Z' (ReadyForQuery) message.\n\nBruce Momjian <[email protected]> writes:\n> Might be my atttypmod changes. I did not make those version-sensitive. \n> I will do that now.\n\nYes, if we want to have backward compatibility as I just defined it,\nthen the backend will have to send atttypmod as either 2 or 4 bytes\ndepending on ProtocolVersion. Shouldn't be too hard. But I'm concerned\nthat you and I both missed that initially. We had better actually test\nthat the current backend sources will work with a 6.3.2-release frontend.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 22 Jul 1998 10:43:23 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] atttypmod now 32 bits, interface change "
},
{
"msg_contents": "> I did some cross-platform testing today against 7/18 snapshot.\n> They workd fine!\n\nGreat. Thanks Tatsuo.\n\n - Tom\n",
"msg_date": "Wed, 22 Jul 1998 14:49:11 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [INTERFACES] Re: [HACKERS] atttypmod now 32 bits,\n interface change"
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Might be my atttypmod changes. I did not make those version-sensitive. \n> > I will do that now.\n> \n> Yes, if we want to have backward compatibility as I just defined it,\n> then the backend will have to send atttypmod as either 2 or 4 bytes\n> depending on ProtocolVersion. Shouldn't be too hard. But I'm concerned\n> that you and I both missed that initially. We had better actually test\n> that the current backend sources will work with a 6.3.2-release frontend.\n\nAlready done. We never passed atttypmod to the backend before 6.4, so\nthe change it just to pass it or not pass it, and Tom already did that. \nThe fact that the internal length was 2 and is not 4 is not relevant\nbecause we never passed it to the frontend in the past.\n\n\tif (PG_PROTOCOL_MAJOR(FrontendProtocol) >= 2)\n \tpq_putint(attrs[i]->atttypmod, sizeof(attrs[i]->atttypmod...\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Wed, 22 Jul 1998 11:10:42 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] atttypmod now 32 bits, interface change"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Already done. We never passed atttypmod to the backend before 6.4, so\n> the change it just to pass it or not pass it, and Tom already did that. \n> The fact that the internal length was 2 and is not 4 is not relevant\n> because we never passed it to the frontend in the past.\n\nAh, right. Should check the code before opining that it's wrong ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 22 Jul 1998 15:41:12 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] atttypmod now 32 bits, interface change "
},
{
"msg_contents": "> \n> Hannu Krosing wrote:\n> \n> > > The days where every release fixed server crashes, or added a feature\n> > > that users were 'screaming for' may be a thing of the past.\n> >\n> > Is anyone working on fixing the exploding optimisations for many OR-s,\n> > at least the canonic case used by access?\n> >\n> > My impression is that this has fallen somewhere between\n> > insightdist and Vadim.\n> \n> This is really big for the ODBCers. (And I suspect for JDBCers too.) Many\n> desktop libraries and end-user tools depend on this \"record set\" strategy to\n> operate effectively.\n> \n> I have put together a workable hack that runs just before cnfify(). The\n> option is activated through the SET command. Once activated, it identifies\n> queries with this particular multi-OR pattern generated by these RECORD SET\n> strategies. Qualified query trees are rewritten as multiple UNIONs. (One\n> for each OR grouping).\n> \n> The results are profound. Queries that used to scan tables because of the\n> ORs, now make use of any indexes. Thus, the size of the table has virtually\n> no effect on performance. Furthermore, queries that used to crash the\n> backend, now run in under a second.\n> \n> Currently the down sides are:\n> 1. If there is no usable index, performance is significantly worse. The\n> patch does not check to make sure that there is a usable index. I could use\n> some pointers on this.\n> \n> 2. Small tables are actually a bit slower than without the patch.\n> \n> 3. Not very elegant. I am looking for a more generalized solution.\n> I have lots of ideas, but I would need to know the backend much better before\n> attempting any of them. My favorite idea is before cnfify(), to factor the\n> OR terms and pull out the constants into a virtual (temporary) table spaces.\n> Then rewrite the query as a join. The optimizer will (should) treat the new\n> query accordingly. 
This assumes that an efficient factoring algorithm exists\n> and that temporary tables can exist in the heap.\n> \n> Illustration:\n> SELECT ... FROM tab WHERE\n> (var1 = const1 AND var2 = const2) OR\n> (var1 = const3 AND var2 = const4) OR\n> (var1 = const5 AND var2 = const6)\n> \n> SELECT ... FROM tab, tmp WHERE\n> (var1 = var_x AND var2 = var_y)\n> \n> tmp\n> var_x | var_y\n> --------------\n> const1|const2\n> const3|const4\n> const5|const6\n\nDavid, where are we on this? I know we have OR's using indexes. Do we\nstill need to look this as a fix, or are we OK. I have not gotten far\nenough in the optimizer to know how to fix the cnf'ify problem. \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Fri, 21 Aug 1998 23:53:39 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [INTERFACES] Re: [HACKERS] changes in 6.4"
},
{
"msg_contents": "\n\nBruce Momjian wrote:\n\n> >\n> > Hannu Krosing wrote:\n> >\n> > > > The days where every release fixed server crashes, or added a feature\n> > > > that users were 'screaming for' may be a thing of the past.\n> > >\n> > > Is anyone working on fixing the exploding optimisations for many OR-s,\n> > > at least the canonic case used by access?\n> > >\n> > > My impression is that this has fallen somewhere between\n> > > insightdist and Vadim.\n> >\n> > This is really big for the ODBCers. (And I suspect for JDBCers too.) Many\n> > desktop libraries and end-user tools depend on this \"record set\" strategy to\n> > operate effectively.\n> >\n> > I have put together a workable hack that runs just before cnfify(). The\n> > option is activated through the SET command. Once activated, it identifies\n> > queries with this particular multi-OR pattern generated by these RECORD SET\n> > strategies. Qualified query trees are rewritten as multiple UNIONs. (One\n> > for each OR grouping).\n> >\n> > The results are profound. Queries that used to scan tables because of the\n> > ORs, now make use of any indexes. Thus, the size of the table has virtually\n> > no effect on performance. Furthermore, queries that used to crash the\n> > backend, now run in under a second.\n> >\n> > Currently the down sides are:\n> > 1. If there is no usable index, performance is significantly worse. The\n> > patch does not check to make sure that there is a usable index. I could use\n> > some pointers on this.\n> >\n> > 2. Small tables are actually a bit slower than without the patch.\n> >\n> > 3. Not very elegant. I am looking for a more generalized solution.\n> > I have lots of ideas, but I would need to know the backend much better before\n> > attempting any of them. My favorite idea is before cnfify(), to factor the\n> > OR terms and pull out the constants into a virtual (temporary) table spaces.\n> > Then rewrite the query as a join. 
The optimizer will (should) treat the new\n> > query accordingly. This assumes that an efficient factoring algorithm exists\n> > and that temporary tables can exist in the heap.\n> >\n> > Illustration:\n> > SELECT ... FROM tab WHERE\n> > (var1 = const1 AND var2 = const2) OR\n> > (var1 = const3 AND var2 = const4) OR\n> > (var1 = const5 AND var2 = const6)\n> >\n> > SELECT ... FROM tab, tmp WHERE\n> > (var1 = var_x AND var2 = var_y)\n> >\n> > tmp\n> > var_x | var_y\n> > --------------\n> > const1|const2\n> > const3|const4\n> > const5|const6\n>\n> David, where are we on this? I know we have OR's using indexes. Do we\n> still need to look this as a fix, or are we OK. I have not gotten far\n> enough in the optimizer to know how to fix the\n\nBruce,\n\nIf the question is, have I come up with a solution for the cnf'ify problem: No\n\nIf the question is, is it still important: Very much yes.\n\nIt is essential for many RAD tools using remote data objects which make use of key\nsets. Your recent optimization of the OR list goes a long way, but inevitably\nusers are confronted with multi-part keys.\n\nWhen I look at the problem my head spins. I do not have the experience (yet?)\nwith the backend to be mucking around in the optimizer. As I see it, cnf'ify is\ndoing just what it is supposed to do. Boundless boolean logic.\n\nI think hope may lay though, in identifying each AND'ed group associated with a key\nand tagging it as a special sub-root node which cnf'ify does not penetrate. This\nnode would be allowed to pass to the later stages of the optimizer where it will be\nused to plan index scans. Easy for me to say.\n\nIn the meantime, I still have the patch that I described in prior email. It has\nworked well for us. Let me restate that. We could not survive without it!\nHowever, I do not feel that is a sufficiently functional approach that should be\nincorporated as a final solution. I will submit the patch if you, (anyone) does\nnot come up with a better solution. 
It is coded to be activated by a SET KSQO to\nminimize its reach.\n\n",
"msg_date": "Sun, 23 Aug 1998 19:55:29 -0400",
"msg_from": "David Hartwig <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [INTERFACES] Re: [HACKERS] changes in 6.4"
},
{
"msg_contents": "> \n> \n> Bruce Momjian wrote:\n> \n> > >\n> > > Hannu Krosing wrote:\n> > >\n> > > > > The days where every release fixed server crashes, or added a feature\n> > > > > that users were 'screaming for' may be a thing of the past.\n> > > >\n> > > > Is anyone working on fixing the exploding optimisations for many OR-s,\n> > > > at least the canonic case used by access?\n> > > >\n> > > > My impression is that this has fallen somewhere between\n> > > > insightdist and Vadim.\n> > >\n> > > This is really big for the ODBCers. (And I suspect for JDBCers too.) Many\n> > > desktop libraries and end-user tools depend on this \"record set\" strategy to\n> > > operate effectively.\n> > >\n> > > I have put together a workable hack that runs just before cnfify(). The\n> > > option is activated through the SET command. Once activated, it identifies\n> > > queries with this particular multi-OR pattern generated by these RECORD SET\n> > > strategies. Qualified query trees are rewritten as multiple UNIONs. (One\n> > > for each OR grouping).\n> > >\n> > > The results are profound. Queries that used to scan tables because of the\n> > > ORs, now make use of any indexes. Thus, the size of the table has virtually\n> > > no effect on performance. Furthermore, queries that used to crash the\n> > > backend, now run in under a second.\n> > >\n> > > Currently the down sides are:\n> > > 1. If there is no usable index, performance is significantly worse. The\n> > > patch does not check to make sure that there is a usable index. I could use\n> > > some pointers on this.\n> > >\n> > > 2. Small tables are actually a bit slower than without the patch.\n> > >\n> > > 3. Not very elegant. I am looking for a more generalized solution.\n> > > I have lots of ideas, but I would need to know the backend much better before\n> > > attempting any of them. 
My favorite idea is before cnfify(), to factor the\n> > > OR terms and pull out the constants into a virtual (temporary) table spaces.\n> > > Then rewrite the query as a join. The optimizer will (should) treat the new\n> > > query accordingly. This assumes that an efficient factoring algorithm exists\n> > > and that temporary tables can exist in the heap.\n> > >\n> > > Illustration:\n> > > SELECT ... FROM tab WHERE\n> > > (var1 = const1 AND var2 = const2) OR\n> > > (var1 = const3 AND var2 = const4) OR\n> > > (var1 = const5 AND var2 = const6)\n> > >\n> > > SELECT ... FROM tab, tmp WHERE\n> > > (var1 = var_x AND var2 = var_y)\n> > >\n> > > tmp\n> > > var_x | var_y\n> > > --------------\n> > > const1|const2\n> > > const3|const4\n> > > const5|const6\n> >\n> > David, where are we on this? I know we have OR's using indexes. Do we\n> > still need to look this as a fix, or are we OK. I have not gotten far\n> > enough in the optimizer to know how to fix the\n> \n> Bruce,\n> \n> If the question is, have I come up with a solution for the cnf'ify problem: No\n> \n> If the question is, is it still important: Very much yes.\n> \n> It is essential for many RAD tools using remote data objects which make use of key\n> sets. Your recent optimization of the OR list goes a long way, but inevitably\n> users are confronted with multi-part keys.\n> \n> When I look at the problem my head spins. I do not have the experience (yet?)\n> with the backend to be mucking around in the optimizer. As I see it, cnf'ify is\n> doing just what it is supposed to do. Boundless boolean logic.\n> \n> I think hope may lay though, in identifying each AND'ed group associated with a key\n> and tagging it as a special sub-root node which cnf'ify does not penetrate. This\n> node would be allowed to pass to the later stages of the optimizer where it will be\n> used to plan index scans. Easy for me to say.\n> \n> In the meantime, I still have the patch that I described in prior email. It has\n> worked well for us. 
Let me restate that. We could not survive without it!\n> However, I do not feel that is a sufficiently functional approach that should be\n> incorporated as a final solution. I will submit the patch if you, (anyone) does\n> not come up with a better solution. It is coded to be activated by a SET KSQO to\n> minimize its reach.\n> \n> \n\nOK, let me try this one.\n\nWhy is the system cnf'ifying the query? Because it wants to have a\nlist of qualifications that are AND'ed, so it can just pick the most\nrestrictive/cheapest, and evaluate that one first.\n\nIf you have:\n\n\t(a=b and c=d) or e=1\n\nIn this case, without cnf'ify, it has to evaluate both of them, because\nif one is false, you can't be sure another would be true. In the\ncnf'ify case, \n\n\t(a=b or e=1) and (c=d or e=1) \n\nIn this case, it can choose either, and act on just one; if a row fails\nto meet it, it can stop and not evaluate it using the other restriction.\n\nThe fact is that it is only going to use fancy join/index in one of the\ntwo cases, so it tries to pick the best one, and does a brute-force\nqualification test on the remaining item if the first one tried is true.\n\nThe problem, of course, is that large WHERE clauses can exponentially expand\nthis. What it is really trying to do is to pick the cheapest restriction,\nbut the memory explosion and query failure are serious problems.\n\nThe issue is that it thinks it is doing something to help things, while\nit is actually hurting things.\n\nIn the ODBC case of:\n\n\t(x=3 and y=4) or\n\t(x=3 and y=5) or\n\t(x=3 and y=6) or ...\n\nit clearly is not going to gain anything by choosing any CHEAPEST path,\nbecause they are all the same in terms of cost, and the use by ODBC\nclients is hurting reliability.\n\nI am inclined to agree with David's solution of breaking apart the query\ninto separate UNION queries in certain cases. 
It seems to be the most\nlogical solution, because the cnf'ify code is working counter to its\npurpose in these cases.\n\nNow, the question is how/where to implement this. I see your idea of\nmaking the OR a join to a temp table that holds all the constants. \nAnother idea would be to do actual UNION queries:\n\n\tSELECT * FROM tab\n\tWHERE (x=3 and y=4)\n\tUNION\n\tSELECT * FROM tab\n\tWHERE (x=3 and y=5)\n\tUNION\n\tSELECT * FROM tab\n\tWHERE (x=3 and y=6) ...\n\nThis would work well for tables with indexes, but for a sequential scan,\nyou are doing a sequential scan for each UNION. Another idea is\nsubselects. Also, you have to make sure you return the proper rows,\nkeeping duplicates where they are in the base table, but not returning\nthem when they meet more than one qualification.\n\n\tSELECT * FROM tab\n\tWHERE (x,y) IN (SELECT 3, 4\n\t\t\tUNION\n\t\t\tSELECT 3, 5\n\t\t\tUNION\n\t\t\tSELECT 3, 6)\n\nI believe we actually support this. This is not going to use an index\non tab, so it may be slow even if x and y are indexed.\n\nAnother more bizarre solution is:\n\n\tSELECT * FROM tab\n\tWHERE (x,y) = (SELECT 3, 4) OR\n\t (x,y) = (SELECT 3, 5) OR\n\t (x,y) = (SELECT 3, 6)\n\nAgain, I think we do this too. I don't think cnf'ify does anything with\nthis. I also believe \"=\" uses indexes on subselects, while IN does not\nbecause IN could return lots of rows, and an index is slower than a\nnon-index join on lots of rows. Of course, now that we index OR's.\n\nLet me ask another question. If I do:\n\n\tSELECT * FROM tab WHERE x=3 OR x=4\n\nit works, and uses indexes. Why can't the optimizer just not cnf'ify\nthings sometimes, and just do:\n\n\tSELECT * FROM tab\n\tWHERE\t(x=3 AND y=4) OR\n\t\t(x=3 AND y=5) OR\n\t\t(x=3 AND y=6)\n\nWhy can it handle x=3 OR x=4, but not the more complicated case above,\nwithout trying to be too smart? If x,y is a multi-key index, it could\nuse that quite easily. If not, it can do a sequential scan and run the\ntests.\n\nAnother issue. 
To the optimizer, x=3 and x=y are totally different. In\nx=3, it is a column compared to a constant, while in x=y, it is a join. \nThat makes a huge difference.\n\nIn the case of (a=b and c=d) or e=1, you pick the best path and do the\na=b join, and throw in the e=1 entries. You can't easily do both joins,\nbecause you also need the e=1 stuff.\n\nI wonder what would happen if we prevent cnf'ifying of cases where the\nOR represents only column = constant restrictions.\n\nI meant to really go through the optimizer this month, but other backend\nitems took my time.\n\nCan someone run some tests on disabling the cnf'ify calls? It is my\nunderstanding that with the non-cnf-ify'ed query, it can't choose an\noptimal path, and starts to do either straight index matches,\nsequential scans, or cartesian products where it joins every row to\nevery other row looking for a match.\n\nLet's say we turn off cnf-ify just for non-join queries. Does that\nhelp?\n\nI am not sure of the ramifications of telling the optimizer it no longer\nhas a variety of paths to choose for evaluating the query.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Fri, 28 Aug 1998 23:44:04 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [INTERFACES] Re: [HACKERS] changes in 6.4"
},
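The exponential blow-up Bruce describes follows directly from distributing OR over AND. A toy sketch in Python — not the backend's actual cnfify() or its node types, just a standalone illustration with predicates represented as strings — shows how the clause count grows multiplicatively for the ODBC keyset pattern:

```python
from itertools import product

def cnfify(or_groups):
    """Convert an OR-of-ANDs into an AND-of-ORs.

    Each input group is a tuple of atomic predicates ANDed together;
    the groups themselves are ORed.  Distributing OR over AND yields
    one OR-clause per combination picking one atom from each group.
    """
    return [clause for clause in product(*or_groups)]

# The ODBC keyset pattern: (x=3 AND y=4) OR (x=3 AND y=5) OR ...
groups = [("x=3", "y=%d" % v) for v in range(4, 14)]  # 10 OR'ed key groups
cnf = cnfify(groups)
print(len(cnf))  # 2 atoms per group, 10 groups -> 2**10 = 1024 AND'ed clauses
```

With two-part keys and a few dozen keyset rows, the clause count reaches millions, which matches the reported memory explosion.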
{
"msg_contents": "\n\nBruce Momjian wrote:\n\n> OK, let me try this one.\n>\n> Why is the system cnf'ifying the query. Because it wants to have a\n> list of qualifications that are AND'ed, so it can just pick the most\n> restrictive/cheapest, and evaluate that one first.\n>\n> If you have:\n>\n> (a=b and c=d) or e=1\n>\n> In this case, without cnf'ify, it has to evaluate both of them, because\n> if one is false, you can't be sure another would be true. In the\n> cnf'ify case,\n>\n> (a=b or e=1) and (c=d or e=1)\n>\n> In this case, it can choose either, and act on just one, if a row fails\n> to meet it, it can stop and not evaluate it using the other restriction.\n>\n> The fact is that it is only going to use fancy join/index in one of the\n> two cases, so it tries to pick the best one, and does a brute-force\n> qualification test on the remaining item if the first one tried is true.\n>\n> The problem is of course large where clauses can exponentially expand\n> this. What it really trying to do is to pick a cheapest restriction,\n> but the memory explosion and query failure are serious problems.\n>\n> The issue is that it thinks it is doing something to help things, while\n> it is actually hurting things.\n>\n> In the ODBC case of:\n>\n> (x=3 and y=4) or\n> (x=3 and y=5) or\n> (x=3 and y=6) or ...\n>\n> it clearly is not going to gain anything by choosing any CHEAPEST path,\n> because they are all the same in terms of cost, and the use by ODBC\n> clients is hurting reliability.\n>\n> I am inclined to agree with David's solution of breaking apart the query\n> into separate UNION queries in certain cases. It seems to be the most\n> logical solution, because the cnf'ify code is working counter to its\n> purpose in these cases.\n>\n> Now, the question is how/where to implement this. 
I see your idea of\n> making the OR a join to a temp table that holds all the constants.\n> Another idea would be to do actual UNION queries:\n>\n> SELECT * FROM tab\n> WHERE (x=3 and y=4)\n> UNION\n> SELECT * FROM tab\n> WHERE (x=3 and y=5)\n> UNION\n> SELECT * FROM tab\n> WHERE (x=3 and y=6) ...\n>\n> This would work well for tables with indexes, but for a sequential scan,\n> you are doing a sequential scan for each UNION.\n\nPractically speaking, the lack of an index concern, may not be justified. The reason\nthese queries are being generated, with this shape, is because remote data objects on the\nclient side are being told that a primary key exists on these tables. The object is told\nabout these keys in one of two ways.\n\n1. It queries the database for the primary key of the table. The ODBC driver serviced\nthis request by querying for the attributes used in {table_name}_pkey.\n\n2. The user manually specifies the primary key. In this case an actual index may not\nexist. (i.e. MS Access asks the user for this information if a primary key is not found\nin a table)\n\nThe second case is the only one that would cause a problem. Fortunately, the solution is\nsimple. Add a primary key index!\n\nMy only concern is to be able to accurately identify a query with the proper signature\nbefore rewriting it as a UNION. To what degree should this inspection be taken?\n\nBTW, I would not do the rewrite on OR's without AND's since you have fixed the OR's use\nof the index.\n\nThere is one other potential issue. My experience with using arrays in tables and UNIONS\ncreates problems. There are missing array comparison operators which are used by the\nimplied DISTINCT.\n\n> Another idea is\n> subselects. 
Also, you have to make sure you return the proper rows,\n> keeping duplicates where they are in the base table, but not returning\n> them when they meet more than one qualification.\n>\n> SELECT * FROM tab\n> WHERE (x,y) IN (SELECT 3, 4\n> UNION\n> SELECT 3, 5\n> UNION\n> SELECT 3, 6)\n>\n> I believe we actually support this. This is not going to use an index\n> on tab, so it may be slow even if x and y are indexed.\n>\n> Another more bizarre solution is:\n>\n> SELECT * FROM tab\n> WHERE (x,y) = (SELECT 3, 4) OR\n> (x,y) = (SELECT 3, 5) OR\n> (x,y) = (SELECT 3, 6)\n>\n> Again, I think we do this too. I don't think cnf'ify does anything with\n> this. I also believe \"=\" uses indexes on subselects, while IN does not\n> because IN could return lots of rows, and an index is slower than a\n> non-index join on lots of rows. Of course, now that we index OR's.\n>\n> Let me ask another question. If I do:\n>\n> SELECT * FROM tab WHERE x=3 OR x=4\n>\n> it works, and uses indexes. Why can't the optimizer just not cnf'ify\n> things sometimes, and just do:\n>\n> SELECT * FROM tab\n> WHERE (x=3 AND y=4) OR\n> (x=3 AND y=5) OR\n> (x=3 AND y=6)\n>\n> Why can it handle x=3 OR x=4, but not the more complicated case above,\n> without trying to be too smart? If x,y is a multi-key index, it could\n> use that quite easily. If not, it can do a sequential scan and run the\n> tests.\n>\n> Another issue. To the optimizer, x=3 and x=y are totally different. In\n> x=3, it is a column compared to a constant, while in x=y, it is a join.\n> That makes a huge difference.\n>\n> In the case of (a=b and c=d) or e=1, you pick the best path and do the\n> a=b join, and throw in the e=1 entries. 
You can't easily do both joins,\n> because you also need the e=1 stuff.\n>\n> I wonder what would happen if we prevent cnf'ifying of cases where the\n> OR represents only column = constant restrictions.\n>\n> I meant to really go through the optimizer this month, but other backend\n> items took my time.\n>\n> Can someone run some tests on disabling the cnf'ify calls? It is my\n> understanding that with the non-cnf-ify'ed query, it can't choose an\n> optimal path, and starts to do either straight index matches,\n> sequential scans, or cartesian products where it joins every row to\n> every other row looking for a match.\n>\n> Let's say we turn off cnf-ify just for non-join queries. Does that\n> help?\n>\n> I am not sure of the ramifications of telling the optimizer it no longer\n> has a variety of paths to choose for evaluating the query.\n\nI did not try this earlier because I thought it was too good to be true. I was right.\nI tried commenting out the normalize() function in the cnfify(). The EXPLAIN showed a\nsequential scan and the resulting tuple set was empty. Time will not allow me to dig\ninto this further this weekend.\n\nUnless you come up with a better solution, I am going to submit my patch on Monday to\nmake the Sept. 1st deadline. It includes a SET switch to activate the rewrite so as not\nto cause problems outside the ODBC users. We can either improve it, or yank it, by the\nOct. 1st deadline.\n\n",
"msg_date": "Sun, 30 Aug 1998 11:40:31 -0400",
"msg_from": "David Hartwig <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [INTERFACES] Re: [HACKERS] changes in 6.4"
},
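The UNION rewrite under discussion can be sketched at the SQL-string level. This is only an illustration of the transformation — the actual KSQO patch operates on the query tree just before cnfify(), and the helper name and signature here are invented:

```python
def keyset_to_union(table, key_groups, columns="*"):
    """Rewrite an OR-of-ANDs keyset qualification as a UNION of SELECTs.

    key_groups is a list of dicts, each mapping a key column to a constant,
    e.g. [{"x": 3, "y": 4}, {"x": 3, "y": 5}].  Each dict becomes one
    SELECT whose WHERE clause can use a (x, y) index on its own.
    """
    selects = []
    for group in key_groups:
        conds = " AND ".join("%s = %s" % (col, val) for col, val in group.items())
        selects.append("SELECT %s FROM %s WHERE (%s)" % (columns, table, conds))
    return "\nUNION\n".join(selects)

sql = keyset_to_union("tab", [{"x": 3, "y": 4}, {"x": 3, "y": 5}])
print(sql)
```

Note that plain UNION (rather than UNION ALL) also eliminates duplicates, which touches the "proper rows" concern raised above and is where the missing array comparison operators bite.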
{
"msg_contents": "Hello,\n\nAt 11.40 30/08/98 -0400, David Hartwig wrote:\n>> Why is the system cnf'ifying the query? Because it wants to have a\n>> list of qualifications that are AND'ed, so it can just pick the most\n>> restrictive/cheapest, and evaluate that one first.\n\nJust a small question about all this optimization stuff. I'm not a\ndatabase expert, but I think we are talking about an NP-complete problem.\nCouldn't we convert this optimization problem into another NP one that is\nknown to have a good solution? For example, for the traveling salesman\nproblem there's an algorithm that provides a solution that's never more than\ntwo times the optimal one, and provides results that are *really* near the\noptimal one most of the time. The simplex algorithm may be another\nexample. I think that this kind of algorithm would be better than a\ncollection of tricks for special cases, and these tricks could be used\nanyway when special cases are detected. Furthermore, I also know of a\nfree program I used in the past that provides this kind of optimization\nfor chip design. I don't remember the exact name of the program, but I\nremember it came from Berkeley university. Of course, maybe I'm totally\nmissing the point.\n\nHope it helps!\n\nBye!\n\n\tDr. Sbragion Denis\n\tInfoTecna\n\tTel, Fax: +39 39 2324054\n\tURL: http://space.tin.it/internet/dsbragio\n",
"msg_date": "Mon, 31 Aug 1998 08:53:12 +0200",
"msg_from": "Sbragion Denis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [INTERFACES] Re: [HACKERS] changes in 6.4"
},
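This is in fact the route PostgreSQL's GEQO module takes for the join-order problem: it applies a genetic algorithm, adapted from traveling-salesman-style heuristics, instead of exhausting the NP-hard search space. A toy sketch of the idea — plain random-restart search with a made-up cost model, much cruder than GEQO's actual crossover-based search, with table names and sizes invented for illustration:

```python
import random

def plan_cost(order, sizes):
    """Toy cost model: left-deep joins, cost = sum of intermediate result sizes."""
    cost = 0
    acc = sizes[order[0]]
    for t in order[1:]:
        acc = max(1, (acc * sizes[t]) // 100)  # pretend each join keeps ~1% of pairs
        cost += acc
    return cost

def random_search(sizes, tries=200, seed=42):
    """Random-restart search over join orders: sample orderings, keep the cheapest.
    A crude stand-in for GEQO's genetic search."""
    rng = random.Random(seed)
    tables = list(sizes)
    best = min((rng.sample(tables, len(tables)) for _ in range(tries)),
               key=lambda o: plan_cost(o, sizes))
    return best, plan_cost(best, sizes)

# Hypothetical table sizes, loosely modeled on the schema discussed in this thread
sizes = {"offre": 100000, "client": 500, "type": 20, "zone5": 1000, "dest5": 1000}
order, cost = random_search(sizes)
print(order, cost)
```

The heuristic gives no optimality guarantee, but it keeps both planning time and memory bounded as the number of joined tables grows — which is exactly the trade-off being proposed here.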
{
"msg_contents": "This is an old message, but still relevant. I believe 6.6 will have much\nbetter OR memory usage.\n\n> \n> \n> Bruce Momjian wrote:\n> \n> > >\n> > > Hannu Krosing wrote:\n> > >\n> > > > > The days where every release fixed server crashes, or added a feature\n> > > > > that users were 'screaming for' may be a thing of the past.\n> > > >\n> > > > Is anyone working on fixing the exploding optimisations for many OR-s,\n> > > > at least the canonic case used by access?\n> > > >\n> > > > My impression is that this has fallen somewhere between\n> > > > insightdist and Vadim.\n> > >\n> > > This is really big for the ODBCers. (And I suspect for JDBCers too.) Many\n> > > desktop libraries and end-user tools depend on this \"record set\" strategy to\n> > > operate effectively.\n> > >\n> > > I have put together a workable hack that runs just before cnfify(). The\n> > > option is activated through the SET command. Once activated, it identifies\n> > > queries with this particular multi-OR pattern generated by these RECORD SET\n> > > strategies. Qualified query trees are rewritten as multiple UNIONs. (One\n> > > for each OR grouping).\n> > >\n> > > The results are profound. Queries that used to scan tables because of the\n> > > ORs, now make use of any indexes. Thus, the size of the table has virtually\n> > > no effect on performance. Furthermore, queries that used to crash the\n> > > backend, now run in under a second.\n> > >\n> > > Currently the down sides are:\n> > > 1. If there is no usable index, performance is significantly worse. The\n> > > patch does not check to make sure that there is a usable index. I could use\n> > > some pointers on this.\n> > >\n> > > 2. Small tables are actually a bit slower than without the patch.\n> > >\n> > > 3. Not very elegant. I am looking for a more generalized solution.\n> > > I have lots of ideas, but I would need to know the backend much better before\n> > > attempting any of them. 
My favorite idea is before cnfify(), to factor the\n> > > OR terms and pull out the constants into a virtual (temporary) table spaces.\n> > > Then rewrite the query as a join. The optimizer will (should) treat the new\n> > > query accordingly. This assumes that an efficient factoring algorithm exists\n> > > and that temporary tables can exist in the heap.\n> > >\n> > > Illustration:\n> > > SELECT ... FROM tab WHERE\n> > > (var1 = const1 AND var2 = const2) OR\n> > > (var1 = const3 AND var2 = const4) OR\n> > > (var1 = const5 AND var2 = const6)\n> > >\n> > > SELECT ... FROM tab, tmp WHERE\n> > > (var1 = var_x AND var2 = var_y)\n> > >\n> > > tmp\n> > > var_x | var_y\n> > > --------------\n> > > const1|const2\n> > > const3|const4\n> > > const5|const6\n> >\n> > David, where are we on this? I know we have OR's using indexes. Do we\n> > still need to look this as a fix, or are we OK. I have not gotten far\n> > enough in the optimizer to know how to fix the\n> \n> Bruce,\n> \n> If the question is, have I come up with a solution for the cnf'ify problem: No\n> \n> If the question is, is it still important: Very much yes.\n> \n> It is essential for many RAD tools using remote data objects which make use of key\n> sets. Your recent optimization of the OR list goes a long way, but inevitably\n> users are confronted with multi-part keys.\n> \n> When I look at the problem my head spins. I do not have the experience (yet?)\n> with the backend to be mucking around in the optimizer. As I see it, cnf'ify is\n> doing just what it is supposed to do. Boundless boolean logic.\n> \n> I think hope may lay though, in identifying each AND'ed group associated with a key\n> and tagging it as a special sub-root node which cnf'ify does not penetrate. This\n> node would be allowed to pass to the later stages of the optimizer where it will be\n> used to plan index scans. Easy for me to say.\n> \n> In the meantime, I still have the patch that I described in prior email. It has\n> worked well for us. 
Let me restate that. We could not survive without it!\n> However, I do not feel that is a sufficiently functional approach that should be\n> incorporated as a final solution. I will submit the patch if you, (anyone) does\n> not come up with a better solution. It is coded to be activated by a SET KSQO to\n> minimize its reach.\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 18 Sep 1999 16:10:11 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [INTERFACES] Re: [HACKERS] changes in 6.4"
},
{
"msg_contents": "Hi All\n\nI'm a bit lost! Where can I find documentation on accessing postgres \nfrom inside PERL (5)?\n\nAny help will be appreciated. (The thing is, I'm sure I've seen the info \nsomewhere, but for the life of me I can't remember where...)\n\nThanks\n\nJason Doller\n",
"msg_date": "Sun, 19 Sep 1999 13:17:49 +0200",
"msg_from": "\"Jason Doller\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "PERL"
},
{
"msg_contents": "On Sun, 19 Sep 1999, Jason Doller wrote:\n\n> I'm a bit lost! Where can I find documentation on accessing postgres \n> from inside PERL (5)?\n> \n> Any help will be appreciated. (The thing is, I'm sure I've seen the info \n> somewhere, but for the life of me I can't remember where...)\n\nUnder the source tree, go to the interfaces directory, and there is a\ndirectory for the perl interface (Pg.pm). You have to enable the Perl\noption when you run configure, and you will also have to install the\nmodule as root, since it gets installed under the module hierarchy\nwherever you have Perl installed. Then you only need to do a 'perldoc Pg'\nto see the documentation on it. See the build instructions for more\ninformation on the how to install the Perl module.\n\nBrett W. McCoy \n http://www.lan2wan.com/~bmccoy/\n-----------------------------------------------------------------------\nDon't let your mind wander -- it's too little to be let out alone.\n\n",
"msg_date": "Sun, 19 Sep 1999 10:57:40 -0400 (EDT)",
"msg_from": "\"Brett W. McCoy\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [INTERFACES] PERL"
}
] |
[
{
"msg_contents": "subscribe\n\n\n",
"msg_date": "Mon, 13 Jul 1998 11:07:18 +0100",
"msg_from": "[email protected] (Davide Libenzi)",
"msg_from_op": true,
"msg_subject": "subscribe"
}
] |
[
{
"msg_contents": "On Mon, 13 Jul 1998, Davide Libenzi wrote:\n\n> Is there an HPUX port of Postgres SQL ?\n\n\tYes...\n\n\n",
"msg_date": "Mon, 13 Jul 1998 10:25:03 -0400 (EDT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] HPUX Port"
},
{
"msg_contents": "Is there an HPUX port of Postgres SQL ?\n\nHi, David\n\n\n",
"msg_date": "Mon, 13 Jul 1998 16:11:53 +0100",
"msg_from": "[email protected] (Davide Libenzi)",
"msg_from_op": false,
"msg_subject": "HPUX Port"
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> On Mon, 13 Jul 1998, Davide Libenzi wrote:\n>> Is there an HPUX port of Postgres SQL ?\n\n> \tYes...\n\nThere are a number of minor porting problems with 6.3.2 on HPUX;\nsee my message in the pgsql-patches archives for 21 Apr 1998. All\nexcept one item have been addressed in the current development sources.\n\nThe \"one item\" is that configure doesn't know about having to look\nin /lib/pa1.1 to find rint() on HPUX 9. I've been debating whether\nit's worth the trouble to fix that or not, vs. just putting a note\nin the INSTALL directions to manually correct the config.h file.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 Jul 1998 11:45:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] HPUX Port "
},
{
"msg_contents": "On Mon, 13 Jul 1998, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > On Mon, 13 Jul 1998, Davide Libenzi wrote:\n> >> Is there an HPUX port of Postgres SQL ?\n> \n> > \tYes...\n> \n> There are a number of minor porting problems with 6.3.2 on HPUX;\n> see my message in the pgsql-patches archives for 21 Apr 1998. All\n> except one item have been addressed in the current development sources.\n> \n> The \"one item\" is that configure doesn't know about having to look\n> in /lib/pa1.1 to find rint() on HPUX 9. I've been debating whether\n\n\tIs this something we can just put a check for that library into\nthe configure script and all will be well?\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 13 Jul 1998 21:45:04 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] HPUX Port "
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n>> The \"one item\" is that configure doesn't know about having to look\n>> in /lib/pa1.1 to find rint() on HPUX 9. I've been debating whether\n\n> \tIs this something we can just ptu a check for tha tlibrary into\n> the configure script and all will be well?\n\nIn other words, Just Do It, eh? I suppose you're right.\n\nThe attached does the right thing on HPUX 9 and looks fairly harmless\nfor other platforms.\n\n\t\t\tregards, tom lane\n\n\n*** src/configure.in.orig\tSun Jul 12 12:05:02 1998\n--- src/configure.in\tMon Jul 13 20:57:37 1998\n***************\n*** 582,588 ****\n \t AC_CHECK_LIB(m, cbrt, AC_DEFINE(HAVE_CBRT)))\n AC_CHECK_FUNC(rint,\n \t AC_DEFINE(HAVE_RINT),\n! \t AC_CHECK_LIB(m, rint, AC_DEFINE(HAVE_RINT)))\n \n dnl Check for X libraries\n \n--- 582,595 ----\n \t AC_CHECK_LIB(m, cbrt, AC_DEFINE(HAVE_CBRT)))\n AC_CHECK_FUNC(rint,\n \t AC_DEFINE(HAVE_RINT),\n! [\n! # On HPUX 9, rint() is not in regular libm.a but in /lib/pa1.1/libm.a\n! SPECIALMATHLIB=\"\"\n! if [[ -r /lib/pa1.1/libm.a ]] ; then\n! SPECIALMATHLIB=\"-L /lib/pa1.1 -lm\"\n! fi\n! \t AC_CHECK_LIB(m, rint, AC_DEFINE(HAVE_RINT), , $SPECIALMATHLIB)\n! ])\n \n dnl Check for X libraries\n \n",
"msg_date": "Mon, 13 Jul 1998 21:06:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] HPUX Port "
},
{
"msg_contents": "Applied. I see it is needed by configure so it finds rint on hpux. I\nassume you have looked at makefiles/Makefile.hpux.\n\n> The Hermit Hacker <[email protected]> writes:\n> >> The \"one item\" is that configure doesn't know about having to look\n> >> in /lib/pa1.1 to find rint() on HPUX 9. I've been debating whether\n> \n> > \tIs this something we can just ptu a check for tha tlibrary into\n> > the configure script and all will be well?\n> \n> In other words, Just Do It, eh? I suppose you're right.\n> \n> The attached does the right thing on HPUX 9 and looks fairly harmless\n> for other platforms.\n> \n> \t\t\tregards, tom lane\n> \n> \n> *** src/configure.in.orig\tSun Jul 12 12:05:02 1998\n> --- src/configure.in\tMon Jul 13 20:57:37 1998\n> ***************\n> *** 582,588 ****\n> \t AC_CHECK_LIB(m, cbrt, AC_DEFINE(HAVE_CBRT)))\n> AC_CHECK_FUNC(rint,\n> \t AC_DEFINE(HAVE_RINT),\n> ! \t AC_CHECK_LIB(m, rint, AC_DEFINE(HAVE_RINT)))\n> \n> dnl Check for X libraries\n> \n> --- 582,595 ----\n> \t AC_CHECK_LIB(m, cbrt, AC_DEFINE(HAVE_CBRT)))\n> AC_CHECK_FUNC(rint,\n> \t AC_DEFINE(HAVE_RINT),\n> ! [\n> ! # On HPUX 9, rint() is not in regular libm.a but in /lib/pa1.1/libm.a\n> ! SPECIALMATHLIB=\"\"\n> ! if [[ -r /lib/pa1.1/libm.a ]] ; then\n> ! SPECIALMATHLIB=\"-L /lib/pa1.1 -lm\"\n> ! fi\n> ! \t AC_CHECK_LIB(m, rint, AC_DEFINE(HAVE_RINT), , $SPECIALMATHLIB)\n> ! ])\n> \n> dnl Check for X libraries\n> \n> \n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Mon, 13 Jul 1998 22:59:23 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] HPUX Port"
},
{
"msg_contents": "On Mon, 13 Jul 1998, Bruce Momjian wrote:\n\n> Applied. I see it is needed by configure so it finds rint on hpux. I\n> assume you have looked at makefiles/Makefile.hpux.\n> \n> > The Hermit Hacker <[email protected]> writes:\n> > >> The \"one item\" is that configure doesn't know about having to look\n> > >> in /lib/pa1.1 to find rint() on HPUX 9. I've been debating whether\n\nWhat is rint? I just checked HP-UX 9.0 running on my 360 and it's not\nthere. Is it a special thing for PA?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2 \n Online Searchable Campground Listings http://www.camping-usa.com\n \"There is no outfit less entitled to lecture me about bloat\n than the federal government\" -- Tony Snow\n==========================================================================\n\n\n",
"msg_date": "Tue, 14 Jul 1998 09:26:41 -0400 (edt)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] HPUX Port"
},
{
"msg_contents": "On Tue, 14 Jul 1998, Vince Vielhaber wrote:\n\n> On Mon, 13 Jul 1998, Bruce Momjian wrote:\n> \n> > Applied. I see it is needed by configure so it finds rint on hpux. I\n> > assume you have looked at makefiles/Makefile.hpux.\n> > \n> > > The Hermit Hacker <[email protected]> writes:\n> > > >> The \"one item\" is that configure doesn't know about having to look\n> > > >> in /lib/pa1.1 to find rint() on HPUX 9. I've been debating whether\n> \n> What is rint? I just checked HP-UX 9.0 running on my 360 and it's not\n> there. Is it a special thing for PA?\n> \n> Vince.\n\n\nDESCRIPTION\n The rint() and the rintf() functions return the integral value\n (represented as a double or float precision number) nearest to x\n according to the prevailing rounding mode.\n\n\n",
"msg_date": "Tue, 14 Jul 1998 09:28:30 -0400 (EDT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] HPUX Port"
},
{
"msg_contents": "On Mon, 13 Jul 1998, Bruce Momjian wrote:\n>>>> Applied. I see it is needed by configure so it finds rint on hpux. I\n>>>> assume you have looked at makefiles/Makefile.hpux.\n\nYes, the HPUX makefile was fixed some time ago. The last piece of the\npuzzle was to teach configure about it.\n\nThe Hermit Hacker <[email protected]> writes:\n> On Tue, 14 Jul 1998, Vince Vielhaber wrote:\n>> What is rint? I just checked HP-UX 9.0 running on my 360 and it's not\n>> there. Is it a special thing for PA?\n> DESCRIPTION\n> The rint() and the rintf() functions return the integral value\n> (repre-sented as a double or float precision number) nearest to x\n> according to the prevailing rounding mode.\n\nrint() didn't use to be a standard part of libm, but I think it's\nmandated by recent versions of the IEEE float math spec. In HPUX 10,\nit's part of the standard math library libm. In HPUX 9, it's not in\nthe standard libm but is in the PA1.1-only libm that's kept in\n/lib/pa1.1. The patches we're talking about have to do with configuring\nPostgres to use that math library so it can use the native version of\nrint().\n\nI have no idea whether rint() is available anywhere for Series 300\nmachines. You don't have to worry too much if not; there is a\nsubstitute routine available in the Postgres distribution.\n\nBTW, does anyone have access to HPUX 9 running on a PA1.0 processor, ie\nan old Series 800 machine? Is the /lib/pa1.1 directory even present in\nsuch an installation? It suddenly occurs to me that we may need a\nsmarter approach to dealing with /lib/pa1.1, if that directory could be\npresent on machines that can't use the code in it. The current setup\nin configure and Makefile.hpux will do the right thing if /lib/pa1.1\nis not there at all, but if it is there on a 1.0 machine then you'd\nend up with an unusable executable...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 14 Jul 1998 10:10:05 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] HPUX Port "
}
] |
[
{
"msg_contents": "> > Not that we have been sitting on our hands, but we have been waiting for the\n> > FE/BE protocol to stabilize before updating the ODBC driver to the 6.4\n> > specs. Have we reached this point?\n> \n> Of course, beta does not start until Sep 1, so it is possible to wait\n> some more to see of other things change before updating things, but\n> currently, there are no open items I know about.\n\nI have also renamed some of the client structure names used internally\nby libpq. adtid, adtsize were too strange/confusing for me. Now they\nare typid and typlen. Should help clarify why atttypmod is needed,\nbecause many types, varchar(), don't have TYPE sizes.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Mon, 13 Jul 1998 11:18:21 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] atttypmod now 32 bits, interface change"
}
] |
[
{
"msg_contents": "> I now no longer get the badNode error message but the query does not work:\n> \n> >testing=> create table test (\n> >testing-> number int2,\n> >testing-> in_words text\n> >testing-> );\n> >CREATE\n> >testing=> insert into test values (1,'one');\n> >INSERT 147786 1\n> >testing=> insert into test values (2,'two');\n> >INSERT 147787 1\n> >testing=> insert into test values (3,'three');\n> >INSERT 147788 1\n> >testing=> insert into test values (4,'four');\n> >INSERT 147789 1\n> >testing=> select * from test;\n> >number|in_words\n> >------+--------\n> > 1|one\n> > 2|two\n> > 3|three\n> > 4|four\n> >(4 rows)\n> >\n> >testing=> create view on_test as select *,number * number as \"Number\n> >Squared\" >from test;\n> >CREATE\n> >testing=> select * from on_test;\n> >number|in_words|Number Squared\n> >------+--------+--------------\n> > | |\n> >(1 row)\n> >\n> >testing=> select *,number * number as \"Number Squared\" from pants;\n\nThis user already runs a patched 6.3.2 for the AS fix above.\n\n> >number|in_words|Number Squared\n> >------+--------+--------------\n> > 1|one | 1\n> > 2|two | 4\n> > 3|three | 9\n> > 4|four | 16\n> >(4 rows)\n> \n> Is this a patch thing (i.e. I applied incorrectly) or did the patch not\n> work for me?\n> \n> Best regards,\n\nI think you did it right. Looks like we have a problem with views:\n\n\ttest=> create view xxyz as select usesysid * usesysid from pg_shadow;\n\tCREATE\n\ttest=> select * from xxyz;\n\t?column?\n\t--------\n\t \n\t(1 row)\n\nI will add to TODO list. The view/rewrite system is being overhauled by\nJan, hopefully for 6.4.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Mon, 13 Jul 1998 14:59:33 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Problems with your patch (Sorry)."
}
] |
[
{
"msg_contents": "[I am sending this to the hackers list, and the GEQO author.]\n\nYikes. This user has a query that causes the GEQO optimizer to really\ngo into orbit. According to the user, it consumes 135MB of memory\nbefore failing.\n\nAny comments?\n\n> On Mon, 13 Jul 1998, you wrote:\n> >That is strange that GEQO is failing on this. I have never heard of\n> >this happening. In fact, geqo was designed for large number of table\n> >joins.\n> >\n> >Can you send me a reproducable case that I can test with?\n> \n> My first mail was containing such a sample. If you loosed it, I've tried to \n> rewrote (I am at home) it at the end of this mail.\n> \n> Here is the query:\n> CREATE TABLE client (nom varchar not null, passwd varchar not null,\n> peut_creer bool not null, peut_lire bool not null, peut_stat bool not null,\n> est_admin bool, est_fournisseur bool not null, est_client bool not null,\n> raison_social varchar, contact varchar, adresse varchar, telephone varchar,\n> fax varchar, adr_facture varchar);\n> \n> CREATE TABLE type (nom varchar not null, descr varchar not null);\n> \n> CREATE TABLE offre (client oid, tipe oid, zone5 oid, dest5 oid, date_creation\n> datetime, valide_depuis datetime, valide_jusqua datetime, fichier oid,\n> commission float);\n> \n> CREATE TABLE a_lut (offre oid, client oid, date_lecture datetime);\n> \n> CREATE TABLE prix (offre oid, valeur float, nb_jours int, valide_de datetime,\n> valide_a datetime);\n> \n> CREATE TABLE zone5 (nom varchar, zone4 oid, prix float);\n> CREATE TABLE zone4 (nom varchar, zone3 oid, prix float);\n> CREATE TABLE zone3 (nom varchar, zone2 oid, prix float);\n> CREATE TABLE zone2 (nom varchar, zone1 oid, prix float);\n> CREATE TABLE zone1 (nom varchar, prix float);\n> \n> CREATE TABLE dest5 (nom varchar, dest4 oid);\n> CREATE TABLE dest4 (nom varchar, dest3 oid);\n> CREATE TABLE dest3 (nom varchar, dest2 oid);\n> CREATE TABLE dest2 (nom varchar, dest1 oid);\n> CREATE TABLE dest1 (nom varchar);\n> \n> SELECT 
offre.oid as offre_oid,offre.client as offre_client,\n> offre.date_creation as offre_date_creation,\n> offre.valide_depuis as offre_valide_depuis,\n> offre.valide_jusqua as offre_valide_jusqua,\n> offre.commission as offre_commission,\n> type.oid as type_oid, type.nom as type_nom,\n> dest5.oid as dest5_oid,dest5.nom as dest5_nom,dest4.oid as dest4_oid,\n> dest4.nom as dest4_nom,dest3.oid as dest3_oid,dest3.nom as dest3_nom,\n> dest2.oid as dest2_oid,dest2.nom as dest2_nom,dest1.oid as dest1_oid,\n> dest1.nom as dest1_nom, zone5.oid as zone5_oid,zone5.nom as zone5_nom,\n> zone4.oid as zone4_oid, zone4.nom as zone4_nom,zone3.oid as zone3_oid,\n> zone3.nom as zone3_nom, zone2.oid as zone2_oid,zone2.nom as zone2_nom,\n> zone1.oid as zone1_oid, zone1.nom as zone1_nom FROM\n> offre,type,dest5,dest4,dest3,dest2,dest1,zone5,zone4,zone3,zone2,zone1 \n> WHERE offre.tipe=type.oid AND\n> offre.dest5=dest5.oid AND dest5.dest4=dest4.oid AND dest4.dest3=dest3.oid AND\n> dest3.dest2=dest2.oid AND dest2.dest1=dest1.oid \n> offre.zone5=zone5.oid AND zone5.zone4=zone4.oid AND zone4.zone3=zone3.oid AND\n> zone3.zone2=zone2.oid AND zone2.zone1=zone1.oid \n> \n> BOOM!!!!\n> \n> \n> best regards.\n> --\n> -�) Patrick Valsecchi /\\\\ \n> _\\_v http://dante.urbanet.ch/~patrick/index.html\n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Mon, 13 Jul 1998 15:06:17 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [BUGS] SQL optimisation dead loop"
}
] |
[
{
"msg_contents": "I'm planning to use a \"sequence\" object to allow multiple clients of a\nshared database to label table records with guaranteed-increasing serial\nnumbers. (The underlying problem is to let all the clients efficiently\nfind changes that any one of them makes. Every insertion or update will\nassign a new sequence number to each new or modified record; then the\nwriter must issue a NOTIFY. Upon being notified, each client can read\nall the newly-modified records with\n\tSELECT ... FROM table WHERE seqno > lastseqno;\n\tlastseqno := max(seqno seen in retrieved records);\nwhere each client maintains a local variable lastseqno that's initially\nzero. This should be fast if I provide an index on the seqno field.\nBTW, does anyone know a better solution to this problem?)\n\nWhat I noticed is that there's no good way to find out the current\nsequence number value. The \"currval\" operator is no help because it\nonly tells you the last sequence number assigned in this client process\n(and in fact it fails completely if used in a client that never executes\nnextval because it is only a reader not a writer). The only way I can\nsee to do it reliably is to call nextval, thereby creating a gap in the\nsequence (not a problem for my application) and wasting a sequence value\n(definitely a problem if this is done a lot, since the scheme will fail\nif the sequence object wraps around).\n\nI think sequences ought to provide a \"real\" currval that reads the\ncurrent state of the sequence object from the database, thereby\nreturning the globally latest-assigned sequence value without depending\non any local state. (In the presence of caching this would produce the\nlatest value reserved by any backend, one which might not yet have been\nused by that backend. 
But you can't use caching anyway if you depend on\nthe values to be assigned sequentially on a global basis.)\n\nSo far I haven't found any case where my application actually *needs* to\nknow the highest sequence number, so I'm not motivated to fix it (yet).\nBut I think this ought to be on the TODO list.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 Jul 1998 16:29:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Sequence objects have no global currval operator?"
},
{
"msg_contents": "How about SELECT * FROM sequence_table_name? Sequence numbers are\nstored in their own tables.\n\n> I'm planning to use a \"sequence\" object to allow multiple clients of a\n> shared database to label table records with guaranteed-increasing serial\n> numbers. (The underlying problem is to let all the clients efficiently\n> find changes that any one of them makes. Every insertion or update will\n> assign a new sequence number to each new or modified record; then the\n> writer must issue a NOTIFY. Upon being notified, each client can read\n> all the newly-modified records with\n> \tSELECT ... FROM table WHERE seqno > lastseqno;\n> \tlastseqno := max(seqno seen in retrieved records);\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Mon, 13 Jul 1998 17:50:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Sequence objects have no global currval operator?"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> How about SELECT * FROM sequence_table_name?\n\nAh, of course. The man page for CREATE SEQUENCE only mentions getting\nthe sequence parameters that way, but you can get the last_value as\nwell, which is exactly what I need.\n\nMaybe I'll submit a documentation change to make this clearer for the\nnext guy.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 Jul 1998 18:02:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Sequence objects have no global currval operator? "
}
] |
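The change-tracking scheme Tom describes in the thread above (every write stamps the row with a fresh seqno from a shared sequence, and each reader remembers the highest seqno it has seen) can be sketched outside SQL. Below is a minimal Python sketch; the in-memory `records` list and the `poll_changes` helper are hypothetical stand-ins for the table and the `SELECT ... WHERE seqno > lastseqno` query, not anything in PostgreSQL:

```python
# Sketch of the lastseqno change-polling scheme from the thread above.

def poll_changes(records, lastseqno):
    """Mimics: SELECT ... FROM table WHERE seqno > lastseqno,
    returning the new rows and the advanced high-water mark."""
    new = [r for r in records if r["seqno"] > lastseqno]
    if new:
        lastseqno = max(r["seqno"] for r in new)
    return new, lastseqno

# 'records' stands in for the table itself; each row carries a
# monotonically increasing seqno assigned from a shared sequence.
records = [
    {"seqno": 1, "val": "a"},
    {"seqno": 2, "val": "b"},
]

changed, last = poll_changes(records, 0)       # first poll sees both rows
records.append({"seqno": 3, "val": "b2"})      # a writer adds/updates a row
changed2, last = poll_changes(records, last)   # next poll sees only seqno 3
```

With an index on seqno, each poll touches only the rows written since the previous poll, which is the efficiency property the scheme is after.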
[
{
"msg_contents": "As far as I can tell from EXPLAIN, there isn't any optimization done\ncurrently on queries involving the min or max of an indexed field.\nWhat I'm interested in is predecessor/successor queries, eg, \"find\nthe largest value less than X\". In SQL this becomes\n\n\tSELECT max(field1) FROM table WHERE field1 < X\n\n(for a constant X). Currently Postgres always seems to read all the\ntable records with field1 < X to execute this query.\n\nNow, if field1 has a btree index then it should be possible to answer\nthis query with just a probe into the index, never reading any table\nentries at all. But that implies understanding the semantics of max()\nand its relationship to the ordering used by the index, so I can see\nthat teaching Postgres to do this in a type-independent way might be\npainful.\n\nFor now, I can live with scanning all the table entries, but it would be\nnice to know that someone is working on this and it'll be there by the\ntime my tables get huge ;-). I see something about\n\t* Use indexes in ORDER BY, min(), max()(Costin Oproiu)\nin the TODO list, but is this actively being worked on, and will it\nsolve my problem or just handle simpler cases?\n\nAlternatively, is there a better way to do predecessor/successor\nqueries in SQL?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 Jul 1998 16:53:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Anyone working on optimizing subset min/max queries?"
},
{
"msg_contents": "> For now, I can live with scanning all the table entries, but it would be\n> nice to know that someone is working on this and it'll be there by the\n> time my tables get huge ;-). I see something about\n> \t* Use indexes in ORDER BY, min(), max()(Costin Oproiu)\n> in the TODO list, but is this actively being worked on, and will it\n> solve my problem or just handle simpler cases?\n> \n> Alternatively, is there a better way to do predecessor/successor\n> queries in SQL?\n\nCostin is not working on it currently, and I have removed his name from\nthe item. I know of no one working on it, though it is requested every\nso often.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Mon, 13 Jul 1998 17:31:34 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Anyone working on optimizing subset min/max queries?"
}
] |
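For illustration, the index probe Tom is asking for, answering `SELECT max(field1) FROM table WHERE field1 < X` without reading all the qualifying rows, amounts to a single descent into a btree. A Python sketch using the standard `bisect` module on a sorted key list (a stand-in for the index, not how the Postgres executor is actually structured) shows the idea:

```python
import bisect

def predecessor(sorted_keys, x):
    """Largest key strictly less than x, or None if there is none.
    One O(log n) probe into the sorted key list, standing in for a
    btree index descent, instead of scanning every qualifying row."""
    i = bisect.bisect_left(sorted_keys, x)  # first position with key >= x
    return sorted_keys[i - 1] if i > 0 else None

keys = [1, 3, 7, 12, 40]        # the indexed field1 values, in index order
predecessor(keys, 10)           # 7: the max(field1) with field1 < 10
predecessor(keys, 1)            # None: no key below 1
```

The type-dependence problem Tom mentions is visible even here: the probe is only correct because the comparison used by `bisect` matches the ordering the "index" was built with.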
[
{
"msg_contents": "> \n> On 13-Jul-98 Bruce Momjian wrote:\n> > \n> > Oops, forget the previous patch. Here is the real fix:\n> > \n> > -- \n> Thanks very much for all your help. This seems to have solved the problem. I'll let you kn\n> ow if I run into anything else weird- One question - A number of views I have seem to viol\n> ate the new max query plan (Why did it shrink so dramatically) - I know that this will be\n> fixed in 6.4 - but I need a solution for the short term:\n> \n> Earlier I took postgres and replaced all the 8192 with 16384 (8k->16K) this compiled and \n> seemed to work fine - My basic question is - how ill advised is this? Is there a better wa\n> y? \n\nNot all fields were dumped in pre 6.3. Now, more data is dumped per\nview.\n\nNot sure. Cc'ed to hackers.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Mon, 13 Jul 1998 20:11:28 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] more nodeError problems and general view failures!"
}
] |
[
{
"msg_contents": "Tom Lane <[email protected]> writes:\n> \"Thomas G. Lockhart\" <[email protected]> writes:\n> > Should we ask Tatsuo to do some mixed-endian tests, or is\n> > that area completely unchanged from v6.3?\n> \n> I don't think I broke anything in that regard ... but more testing is\n> always a good thing. If Tatsuo-san can spare the time, it would be\n> appreciated.\n> \n Can't say as I really know that much about Japanese, but from what\nI know about similar constructs in other languages, are you actually\nasking if Tatsuo's son can do the tests? :)\n\n-Brandon :)\n",
"msg_date": "Mon, 13 Jul 1998 22:39:25 -0500 (CDT)",
"msg_from": "Brandon Ibach <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] atttypmod now 32 bits, interface change"
}
] |
[
{
"msg_contents": "\n>Bruce Momjian <[email protected]> writes:\n>> How about SELECT * FROM sequence_table_name?\n>\n>Ah, of course. The man page for CREATE SEQUENCE only mentions getting\n>the sequence parameters that way, but you can get the last_value as\n>well, which is exactly what I need.\n\nWhat do you think of making currval return exactly this, only in the \ncase where nextval was not yet called by this client ?\n\nI don't think anybody does rely on currval returning null iff nextval was not yet called\nin his current session.\n\nAndreas\n\n\n",
"msg_date": "Tue, 14 Jul 1998 10:18:24 +0200",
"msg_from": "Andreas Zeugswetter <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] Sequence objects have no global currval operator? "
},
{
"msg_contents": "Andreas Zeugswetter <[email protected]> writes:\n> What do you think of making currval return exactly this, only in the \n> case where nextval was not yet called by this client ?\n\nI don't think that would be helpful. If what you want is last_value\nthen the *only* safe way to get it is to use SELECT last_value.\nUsing currval in the way you suggest would be asking for trouble ---\nyour code will work until you add a nextval somewhere in the same\nclient, and then it will fail. Subtly.\n\nAs defined, currval is only useful for specialized uses, such as\nassigning the same newly-allocated sequence number to multiple\nfields or table rows. For example you could do\n\tINSERT INTO table1 VALUES(nextval('seq'), ....);\n\tINSERT INTO table2 VALUES(currval('seq'), ....);\n\tINSERT INTO table3 VALUES(currval('seq'), ....);\nThis is perfectly correct and safe: all three tables will get the same\nuniquely-allocated sequence number regardless of what any other clients\nmay be doing. You could also read back the assigned value with\n\tSELECT nextval('seq');\nand then insert the value literally into subsequent commands, but\nthat way requires an extra round trip to the server.\n\ncurrval is not useful for inquiring about what other clients are doing,\nand I think we are best off to leave it that way to avoid confusion.\nI was only complaining because I didn't understand about last_value\nat the start of this thread.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 14 Jul 1998 10:28:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] Sequence objects have no global currval operator? "
}
] |
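The distinction drawn in the exchange above, session-local `currval` versus the globally visible `last_value` read with `SELECT * FROM seqname`, can be modelled with a toy Python class. The class and its names are invented for illustration and are not PostgreSQL's implementation:

```python
class ToySequence:
    """Toy model of a Postgres sequence: last_value is global state,
    currval is per-session and only valid after that session's nextval."""

    def __init__(self):
        self.last_value = 0       # what SELECT last_value FROM seq would see
        self._session_vals = {}   # per-session currval state

    def nextval(self, session):
        self.last_value += 1
        self._session_vals[session] = self.last_value
        return self.last_value

    def currval(self, session):
        if session not in self._session_vals:
            raise RuntimeError("currval: nextval never called in this session")
        return self._session_vals[session]

seq = ToySequence()
seq.nextval("A")    # session A allocates 1
seq.nextval("B")    # session B allocates 2
seq.currval("A")    # still 1 for A, regardless of B's activity
seq.last_value      # 2: the global high-water mark
```

This is why using `currval` as a global "latest value" would fail subtly, as Tom warns: it answers a question about one session, not about the sequence.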
[
{
"msg_contents": "Sorry for re-posting - this message has right charset...\n\nThis is said in Oracle7 Server Concepts Manual, Data Concurrency, \nAdditional Considerations for Serializable Isolation: \n \n--- \nBoth read committed and serializable transactions use row-level locking, and \nboth will wait if they try to change a row updated by an uncommitted \nconcurrent transaction. The second transaction that tries to update a given \nrow waits for the other transaction to commit or rollback and release its \nlock. If that other transaction rolls back, the waiting transaction \n(regardless of its isolation mode) can proceed to change the previously \nlocked row, as if the other transaction had not existed. \n \nHowever, read committed and serializable transactions behave differently if \nthe other (blocking) transaction commits. When the other transaction commits \nand releases its locks, a read committed transaction will proceed with its \nintended update... \n^^^^^^^^ \n--- \n \nWhat does this mean? Will Oracle update this row (just updated by other \nXaction)? In any case or only if qualification is ok for this row now \n(qual was ok for unchanged version of row but could be changed by\nconcurrent Xaction)?\n\nCould someone run in Oracle test below?\n\n1. CREATE TABLE test (x integer, y integer)\n2. INSERT INTO test VALUES (1, 1);\n INSERT INTO test VALUES (1, 2);\n INSERT INTO test VALUES (3, 2);\n3. run two session T1 and T2 (in read committed mode)\n4. in session T2 run\n UPDATE test SET x = 1, y = 2 WHERE x <> 1 OR y <> 2;\n5. in session T1 run\n UPDATE test SET y = 3 WHERE x = 1;\n6. in session T2 run\n COMMIT;\n7. in session T1 run\n SELECT * FROM test; -- results?\n8. in session T1 run\n COMMIT;\n9. now in session T2 run\n UPDATE test SET x = 2;\n10. in session T1 run\n UPDATE test SET y = 4 WHERE x = 1;\n11. in session T2 run\n COMMIT;\n12. in session T1 run\n SELECT * FROM test; -- results?\n\nTIA,\n Vadim\n",
"msg_date": "Tue, 14 Jul 1998 19:14:10 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Q about read committed in Oracle..."
},
{
"msg_contents": "On Tue, Jul 14, 1998 at 07:14:10PM +0800, Vadim Mikheev wrote:\n> Could someone run in Oracle test below?\n\nI could, but how do I make Oracle use read committed mode?\n\n> \n> 1. CREATE TABLE test (x integer, y integer)\n> 2. INSERT INTO test VALUES (1, 1);\n> INSERT INTO test VALUES (1, 2);\n> INSERT INTO test VALUES (3, 2);\n> 3. run two session T1 and T2 (in read committed mode)\n> 4. in session T2 run\n> UPDATE test SET x = 1, y = 2 WHERE x <> 1 OR y <> 2;\n> 5. in session T1 run\n> UPDATE test SET y = 3 WHERE x = 1;\n> 6. in session T2 run\n> COMMIT;\n> 7. in session T1 run\n> SELECT * FROM test; -- results?\n> 8. in session T1 run\n> COMMIT;\n> 9. now in session T2 run\n> UPDATE test SET x = 2;\n> 10. in session T1 run\n> UPDATE test SET y = 4 WHERE x = 1;\n> 11. in session T2 run\n> COMMIT;\n> 12. in session T1 run\n> SELECT * FROM test; -- results?\n> \n> TIA,\n> Vadim\n\nMichael\n-- \nDr. Michael Meskes\t\[email protected], [email protected]\nGo SF49ers! Go Rhein Fire!\tUse Debian GNU/Linux! \n",
"msg_date": "Mon, 27 Jul 1998 16:45:33 +0200",
"msg_from": "\"Dr. Michael Meskes\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Q about read committed in Oracle..."
},
{
"msg_contents": "Dr. Michael Meskes wrote:\n> \n> On Tue, Jul 14, 1998 at 07:14:10PM +0800, Vadim Mikheev wrote:\n> > Could someone run in Oracle test below?\n> \n> I could, but how do I make Oracle use read committed mode?\n\n\"...You can set the isolation level of a transaction by using \none of these commands at the beginning of a transaction:\n\nSET TRANSACTION ISOLATION LEVEL READ COMMITTED;\n\nSET TRANSACTION ISOLATION LEVEL SERIALIZABLE;\n\nSET TRANSACTION ISOLATION LEVEL READ ONLY;\n...\"\n\nTIA,\n\tVadim\n",
"msg_date": "Tue, 28 Jul 1998 04:31:38 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Q about read committed in Oracle..."
},
{
"msg_contents": "On Tue, Jul 14, 1998 at 07:14:10PM +0800, Vadim Mikheev wrote:\n> Could someone run in Oracle test below?\n> 1. CREATE TABLE test (x integer, y integer)\n> 2. INSERT INTO test VALUES (1, 1);\n> INSERT INTO test VALUES (1, 2);\n> INSERT INTO test VALUES (3, 2);\n> 3. run two session T1 and T2 (in read committed mode)\n> 4. in session T2 run\n> UPDATE test SET x = 1, y = 2 WHERE x <> 1 OR y <> 2;\n> 5. in session T1 run\n> UPDATE test SET y = 3 WHERE x = 1;\n\nBlocked until 6 is executed.\n\n> 6. in session T2 run\n> COMMIT;\n> 7. in session T1 run\n> SELECT * FROM test; -- results?\n\n X Y\n---------- ----------\n 1 3\n 1 3\n 1 2\n\n> 8. in session T1 run\n> COMMIT;\n> 9. now in session T2 run\n> UPDATE test SET x = 2;\n> 10. in session T1 run\n> UPDATE test SET y = 4 WHERE x = 1;\n\nBlocked again until after 11. Nothing is updated.\n\n> 11. in session T2 run\n> COMMIT;\n> 12. in session T1 run\n> SELECT * FROM test; -- results?\n\n X Y\n---------- ----------\n 2 3\n 2 3\n 2 2\n\nMichael\n-- \nDr. Michael Meskes\t\[email protected], [email protected]\nGo SF49ers! Go Rhein Fire!\tUse Debian GNU/Linux! \n",
"msg_date": "Tue, 28 Jul 1998 21:25:57 +0200",
"msg_from": "\"Dr. Michael Meskes\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Q about read committed in Oracle..."
},
{
"msg_contents": "On Tue, Jul 28, 1998 at 04:31:38AM +0800, Vadim Mikheev wrote:\n> Dr. Michael Meskes wrote:\n> > \n> > On Tue, Jul 14, 1998 at 07:14:10PM +0800, Vadim Mikheev wrote:\n> > > Could someone run in Oracle test below?\n> > \n> > I could, but how do I make Oracle use read committed mode?\n> \n> \"...You can set the isolation level of a transaction by using \n> one of these commands at the beginning of a transaction:\n\nHmm, do I have to re-set it after a commit? I didn't do that though.\nShall I re-run?\n\nMichael\n-- \nDr. Michael Meskes\t\[email protected], [email protected]\nGo SF49ers! Go Rhein Fire!\tUse Debian GNU/Linux! \n",
"msg_date": "Tue, 28 Jul 1998 21:27:50 +0200",
"msg_from": "\"Dr. Michael Meskes\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Q about read committed in Oracle..."
},
{
"msg_contents": "First, thanks Michael!\n\nIt's nice to see expected results but I still have some\nnew questions - please help!\n\n1. CREATE TABLE test (x integer, y integer)\n2. INSERT INTO test VALUES (1, 1);\n INSERT INTO test VALUES (1, 2);\n INSERT INTO test VALUES (3, 2);\n3. run two session T1 and T2 \n4. in session T2 run\n UPDATE test SET x = 1, y = 2 WHERE x <> 1 OR y <> 2;\n5. in session T1 run\n SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;\n UPDATE test SET y = 3 WHERE x = 1;\n --\n -- 1st record will be changed by T2, qual for new record\n -- version will be OK, but T1 should be aborted (???)\n --\n6. in session T2 run\n COMMIT;\n7. in session T1 run\n ROLLBACK; -- just to be sure -:)\n8. now in session T2 run\n UPDATE test SET x = 2;\n9. in session T1 run\n SET TRANSACTION ISOLATION LEVEL READ COMMITTED;\n UPDATE test SET y = 4 WHERE x = 1 or x = 2;\n11. in session T2 run\n COMMIT;\n12. in session T1 run\n SELECT * FROM test; -- results?\n ^^^^^^^^^^^^^^^^^^\nI would like to be sure that T1 will update table...\n\nTIA,\n Vadim\n",
"msg_date": "Thu, 30 Jul 1998 16:40:13 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Q about read committed in Oracle..."
},
{
"msg_contents": "On Thu, Jul 30, 1998 at 04:40:13PM +0800, Vadim Mikheev wrote:\n> 1. CREATE TABLE test (x integer, y integer)\n> 2. INSERT INTO test VALUES (1, 1);\n> INSERT INTO test VALUES (1, 2);\n> INSERT INTO test VALUES (3, 2);\n> 3. run two session T1 and T2 \n> 4. in session T2 run\n> UPDATE test SET x = 1, y = 2 WHERE x <> 1 OR y <> 2;\n> 5. in session T1 run\n> SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;\n> UPDATE test SET y = 3 WHERE x = 1;\n\nUPDATE test SET y = 3 WHERE x = 1\n *\nERROR at line 1:\nORA-08177: can't serialize access for this transaction\n\n> --\n> -- 1st record will be changed by T2, qual for new record\n> -- version will be OK, but T1 should be aborted (???)\n> --\n> 6. in session T2 run\n> COMMIT;\n> 7. in session T1 run\n> ROLLBACK; -- just to be sure -:)\n> 8. now in session T2 run\n> UPDATE test SET x = 2;\n> 9. in session T1 run\n> SET TRANSACTION ISOLATION LEVEL READ COMMITTED;\n> UPDATE test SET y = 4 WHERE x = 1 or x = 2;\n\nblocked\n\n> 11. in session T2 run\n> COMMIT;\n> 12. in session T1 run\n> SELECT * FROM test; -- results?\n> ^^^^^^^^^^^^^^^^^^\n> I would like to be sure that T1 will update table...\n\n X Y\n---------- ----------\n 2 4\n 2 4\n 2 4\n\nMichael\n-- \nDr. Michael Meskes\t\[email protected], [email protected]\nGo SF49ers! Go Rhein Fire!\tUse Debian GNU/Linux! \n",
"msg_date": "Thu, 30 Jul 1998 21:41:30 +0200",
"msg_from": "\"Dr. Michael Meskes\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Q about read committed in Oracle..."
}
] |
[
{
"msg_contents": "\n>> The problem appears to be in the sorting of nulls, which is used by\n>> UNION ALL:\n>> test=> select null order by 1;\n>> ERROR: type id lookup of 0 failed\n>\n>Hmm. And I've got trouble with the following when I assigned the type\n>\"UNKNOWNOID\" to the null fields:\n\nI think this is ok. Since the first select has to define the datatype, I think a forced\ntypecasting (like NULL::varchar) is perfectly OK in the above case.\n\nAndreas\n\n\n",
"msg_date": "Tue, 14 Jul 1998 16:52:32 +0200",
"msg_from": "Andreas Zeugswetter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: Recent updates [to union]"
}
] |
[
{
"msg_contents": "If I create a class A and make subclasses A1 and A2, then\nI get useful explanations when I say \"EXPLAIN SELECT * FROM A1 ...\"\nbut not if I ask about a query on the inheritance tree A*.\n\ntgl=> EXPLAIN SELECT * FROM A* WHERE accountid = 3;\nNOTICE: QUERY PLAN:\n\nAppend (cost=0.00 size=0 width=0)\n\nEXPLAIN\n\nWith \"explain verbose\" it is possible to see that the APPEND plan\nhas substructure, but it's hard to see what's going on in that format.\nHow come plain \"explain\" doesn't show the subnodes of an APPEND plan?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 14 Jul 1998 13:48:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "EXPLAIN doesn't explain operations on inheritance trees"
},
{
"msg_contents": "> If I create a class A and make subclasses A1 and A2, then\n> I get useful explanations when I say \"EXPLAIN SELECT * FROM A1 ...\"\n> but not if I ask about a query on the inheritance tree A*.\n> \n> tgl=> EXPLAIN SELECT * FROM A* WHERE accountid = 3;\n> NOTICE: QUERY PLAN:\n> \n> Append (cost=0.00 size=0 width=0)\n> \n> EXPLAIN\n> \n> With \"explain verbose\" it is possible to see that the APPEND plan\n> has substructure, but it's hard to see what's going on in that format.\n> How come plain \"explain\" doesn't show the subnodes of an APPEND plan?\n\nI have just applied a patch to fix this. Append is an unusual node,\nbecause instead of checking left/right plans, you have to loop through a\nlist of plans. EXPLAIN did not do this, nor did cost computations and\npprint().\n\nShould be better now. \n\nI renamed all the Append structure names because they were very\nconfusing to me. I had created some of them for UNION, but named the\nbadly.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Wed, 15 Jul 1998 11:10:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] EXPLAIN doesn't explain operations on inheritance trees"
}
] |
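Bruce's point above, that an Append node carries a list of subplans rather than the usual left/right children, is the crux of the EXPLAIN fix. A toy plan printer in Python (the dict-based plan nodes are invented for illustration, not Postgres structures) shows why a walker that only follows left/right would print a bare Append:

```python
# Toy plan tree: ordinary nodes have left/right children, but an
# Append node carries a list of subplans that must be looped over.

def explain(node, depth=0, out=None):
    if out is None:
        out = []
    out.append("  " * depth + node["name"])
    if node["name"] == "Append":
        for sub in node.get("appendplans", []):   # loop, not left/right
            explain(sub, depth + 1, out)
    else:
        for side in ("left", "right"):
            if node.get(side):
                explain(node[side], depth + 1, out)
    return out

plan = {"name": "Append", "appendplans": [
    {"name": "Seq Scan on a1"},
    {"name": "Seq Scan on a2"},
]}
explain(plan)   # the Append's subplans appear, not just the bare Append
```

Drop the `Append` branch and the printer emits only the top line, which is essentially the symptom Tom reported.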
[
{
"msg_contents": "In the process of upgrading from 6.1 to 6.3.2+patches at long last.\nJust bumped into an interesting\nproblem with sequences. With 6.1, you could use sequences with COPY\nFROM. This no longer\nseems to be true with 6.3.2+patches. INSERT and UPDATE still work fine\nbut when using COPY FROM\nall sequence fields are either 0 or NULL.\n\nCan live without but curious if it's a bug or feature? Maybe fixed in\n6.4? Wasn't COPY FROM changed\nat the time pg_shadow was added?\n\nHere's the affected schema:\n\nCREATE SEQUENCE history_seq;\n\nCREATE TABLE history (\n gid INTEGER,\n state TEXT,\n dtin DATETIME,\n dtout DATETIME,\n seqno INTEGER DEFAULT nextval('history_seq') NOT NULL\n);\n\nMikE\n\n\n",
"msg_date": "Tue, 14 Jul 1998 13:37:31 -0700",
"msg_from": "Mike Embry <[email protected]>",
"msg_from_op": true,
"msg_subject": "SEQUENCES and COPY FROM"
},
{
"msg_contents": "Mike Embry wrote:\n> \n> In the process of upgrading from 6.1 to 6.3.2+patches at long last.\n> Just bumped into an interesting\n> problem with sequences. With 6.1, you could use sequences with COPY\n> FROM. This no longer\n> seems to be true with 6.3.2+patches. INSERT and UPDATE still work fine\n> but when using COPY FROM\n> all sequence fields are either 0 or NULL.\n> \n> Can live without but curious if it's a bug or feature? Maybe fixed in\n> 6.4? Wasn't COPY FROM changed\n> at the time pg_shadow was added?\n\nFeature...\nDEFAULT is for INSERT only (when column was not specified at all)!\nUse triggers from contrib/spi/autoinc.* - triggers work\nfor everything :)\n\nVadim\n",
"msg_date": "Wed, 15 Jul 1998 10:29:31 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SEQUENCES and COPY FROM"
}
] |
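Vadim's answer above, that DEFAULT fires only when INSERT omits the column while a row trigger fires for every load path including COPY, can be illustrated with a toy Python table. Everything here (the `Table` class, the `autoinc` function) is a hypothetical model in the spirit of contrib/spi/autoinc, not its actual code:

```python
# Toy model: DEFAULTs apply only to INSERTs that omit the column,
# while a row trigger runs for every row regardless of how it arrives.

class Table:
    def __init__(self, trigger=None):
        self.rows = []
        self.seq = 0          # stands in for the sequence object
        self.trigger = trigger

    def insert(self, gid, seqno=None):
        if seqno is None:     # DEFAULT nextval('history_seq') behaviour
            self.seq += 1
            seqno = self.seq
        self._store(gid, seqno)

    def copy_from(self, lines):
        for gid, seqno in lines:   # COPY supplies every column verbatim,
            self._store(gid, seqno)   # so DEFAULTs are never consulted

    def _store(self, gid, seqno):
        if self.trigger:           # a row trigger sees every stored row
            gid, seqno = self.trigger(self, gid, seqno)
        self.rows.append((gid, seqno))

def autoinc(table, gid, seqno):
    """Trigger in the spirit of contrib/spi/autoinc: fill seqno if unset."""
    if not seqno:
        table.seq += 1
        seqno = table.seq
    return gid, seqno

plain = Table()
plain.copy_from([(1, None), (2, None)])   # COPY: seqno stays empty

trig = Table(trigger=autoinc)
trig.copy_from([(1, None), (2, None)])    # trigger fills seqno 1, 2
```

This mirrors Mike's observation exactly: the DEFAULT-based schema loads NULL seqnos through COPY, while the trigger-based approach works for every path.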
[
{
"msg_contents": "I tried the following to find out whether a table has any records\nwith field1 < X (for a constant X):\n\ntgl=> SELECT EXISTS(SELECT * FROM table WHERE field1 < X);\nERROR: internal error: do not know how to transform targetlist\n\nIs this a bug? (I'm using development sources from yesterday.)\n\nAm I using EXISTS() incorrectly? The examples I've been able to find\nonly show it as a part of a WHERE clause.\n\nIf it did work, would it be any faster than a table scan? The code\nI was hoping to replace is like this:\n\tSELECT COUNT(field1) WHERE field1 < X;\n\t// test whether result > 0\nSince aggregates aren't optimized very well, this ends up reading\nmuch or all of the table, even if there is an index for field1.\nI was hoping EXISTS() might be smarter...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 14 Jul 1998 17:31:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "\"internal error\" triggered by EXISTS()"
},
{
"msg_contents": "> \n> I tried the following to find out whether a table has any records\n> with field1 < X (for a constant X):\n> \n> tgl=> SELECT EXISTS(SELECT * FROM table WHERE field1 < X);\n> ERROR: internal error: do not know how to transform targetlist\n> \n> Is this a bug? (I'm using development sources from yesterday.)\n> \n> Am I using EXISTS() incorrectly? The examples I've been able to find\n> only show it as a part of a WHERE clause.\n> \n> If it did work, would it be any faster than a table scan? The code\n> I was hoping to replace is like this:\n> \tSELECT COUNT(field1) WHERE field1 < X;\n> \t// test whether result > 0\n> Since aggregates aren't optimized very well, this ends up reading\n> much or all of the table, even if there is an index for field1.\n> I was hoping EXISTS() might be smarter...\n> \n> \t\t\tregards, tom lane\n> \n\nShould have given a syntax error probably. But you might try:\n\nselect 1 where exists (select...);\n\nShould be faster if and only if we are doing the existential query\noptimization trick (stop on the first qualifying row).\n\n-dg\n\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n - If simplicity worked, the world would be overrun with insects. -\n",
"msg_date": "Tue, 14 Jul 1998 15:50:19 -0700 (PDT)",
"msg_from": "[email protected] (David Gould)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] \"internal error\" triggered by EXISTS()"
},
{
"msg_contents": "David Gould wrote:\n> \n> >\n> > I tried the following to find out whether a table has any records\n> > with field1 < X (for a constant X):\n> >\n> > tgl=> SELECT EXISTS(SELECT * FROM table WHERE field1 < X);\n> > ERROR: internal error: do not know how to transform targetlist\n> \n> Should have given a syntax error probably. But you might try:\n> \n> select 1 where exists (select...);\n> \n> Should be faster if and only if we are doing the existential query\n> optimization trick (stop on the first qualifying row).\n\nWe do.\n\nVadim\n",
"msg_date": "Wed, 15 Jul 1998 10:41:09 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] \"internal error\" triggered by EXISTS()"
},
{
"msg_contents": "We only support subqueries in the target list. May have that expaned\nfor 6.4.\n\n> I tried the following to find out whether a table has any records\n> with field1 < X (for a constant X):\n> \n> tgl=> SELECT EXISTS(SELECT * FROM table WHERE field1 < X);\n> ERROR: internal error: do not know how to transform targetlist\n> \n> Is this a bug? (I'm using development sources from yesterday.)\n> \n> Am I using EXISTS() incorrectly? The examples I've been able to find\n> only show it as a part of a WHERE clause.\n> \n> If it did work, would it be any faster than a table scan? The code\n> I was hoping to replace is like this:\n> \tSELECT COUNT(field1) WHERE field1 < X;\n> \t// test whether result > 0\n> Since aggregates aren't optimized very well, this ends up reading\n> much or all of the table, even if there is an index for field1.\n> I was hoping EXISTS() might be smarter...\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 14 Jul 1998 22:45:30 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] \"internal error\" triggered by EXISTS()"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> We only support subqueries in the target list. May have that expaned\n> for 6.4.\n\nNot sure that EXISTS is allowed in target list...\n\nVadim\n",
"msg_date": "Wed, 15 Jul 1998 11:15:09 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] \"internal error\" triggered by EXISTS()"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> > \n> > We only support subqueries in the target list. May have that expaned\n> > for 6.4.\n> \n> Not sure that EXISTS is allowed in target list...\n> \n> Vadim\n> \n\nI meant to say we only support subqueries in the \"WHERE\" clause. We do\nNOT support subqueries in the target list. That may change in 6.4.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 14 Jul 1998 23:48:23 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] \"internal error\" triggered by EXISTS()"
}
] |
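(Editorial aside, not part of the original 1998 thread.) David Gould's `select 1 where exists (select...)` rewrite is easy to demonstrate against any modern SQL engine. The sketch below uses SQLite through Python purely for illustration; the table `t` and column `field1` are invented stand-ins for the thread's example, and note that present-day engines, unlike the 6.3-era backend discussed above, also accept `EXISTS` directly in the target list and return it as a boolean.

```python
import sqlite3

# Hypothetical table standing in for the thread's "table"/"field1" example.
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE t (field1 INTEGER)")
cur.executemany("INSERT INTO t VALUES (?)", [(5,), (10,), (15,)])

# David Gould's workaround: one row comes back iff a qualifying row exists,
# so the engine may stop at the first match instead of counting them all.
row = cur.execute(
    "SELECT 1 WHERE EXISTS (SELECT * FROM t WHERE field1 < 7)").fetchone()
print(row)  # (1,)

# Modern engines also allow EXISTS in the target list, as a boolean 0/1.
flag = cur.execute(
    "SELECT EXISTS (SELECT * FROM t WHERE field1 < 7)").fetchone()[0]
print(flag)  # 1
```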
[
{
"msg_contents": "---\n\nDate: Tue, 14 Jul 1998 21:23:42 -0500 (CDT)\nFrom: Dirk Elmendorf <[email protected]>\nTo: [email protected]\nSubject: MaxQuery Plan\n\nI posed this question to Bruce Momjian earlier this week. He was unable to give me an answ\ner so I was hoping to find enlightenment here.\n\nI recently upgraded to 6.3.2 (after a number of fixes) on Red Hat 4.2 from 6.2/6.1 .\nA number of the views I have seem to violate the new reduced QueryPlan Max- This has rende\nred one of my databases un-usable. \n\nOut of frustration I replaced the occurance of 8192 with 16384 (8k vs 16k) I wasn't able t\no determine which parts handled just the standard database tuples and which parts handled \nthe query plans themselves. It compiled and stopped complaining about my views. \n\nMy question is:\n1. Does anyone know what the implecations of doubling the max tuple are ? (besides increas\ning the amount of memory used by postgres)\n2. Is there another more efficient way to achieve the same ends?\n\nBruce told me that this problem would be fixed in 6.4 , but that's months away-I waited fo\nr a long time for 6.3 to stablize (Even though I had some problems still - which Bruce tha\nnkfully resolved) I need a number of the bug fixes and features of 6.3 - so staying put w\nith 6.2/6.1 isn't very tenable either. Any insight would greatly appreciated.\n\n\n_________________________________________________________\nDirk Elmendorf, VP/Development Main: 210-892-4000\nCymitar Technology Group, Inc. Direct: 210-892-4005\nLorene Office Plaza Fax: 210-892-4329 \n9828 Lorene Lane <http://www.cymitar.com>\nSan Antonio, TX 78216-4450 <[email protected]>\n_________________________________________________________\n\n\n",
"msg_date": "Tue, 14 Jul 1998 22:02:24 -0500 (CDT)",
"msg_from": "Dirk Elmendorf <[email protected]>",
"msg_from_op": true,
"msg_subject": "FW: MaxQuery Plan"
}
] |
[
{
"msg_contents": "\nin some sort of freak accident, we've ended up with a duplicated\nrecord. all info, including the oid was duplicated.\n\nit was during an update, two people ran a command at the same time.\n\nuser 1: BEGIN\nuser 1: NOTIFY\nuser 1: UPDATE\nuser 2: BEGIN\nuser 2: NOTIFY\nuser 1: END\nuser 2: UPDATE\nuser 2: END\n\nsame command, so the queries are the same. the record duplicated was\nthe one being updated. i'll try to reproduce it.\n\nalso, I had a unique index on the table, but that didn't seem to make\nany difference.\n\nany ideas on how to delete one without deleting both?\n",
"msg_date": "Tue, 14 Jul 1998 20:49:35 -0700 (PDT)",
"msg_from": "Brett McCormick <[email protected]>",
"msg_from_op": true,
"msg_subject": "two records with same oid, freak accident?"
},
{
"msg_contents": "> \n> \n> in some sort of freak accident, we've ended up with a duplicated\n> record. all info, including the oid was duplicated.\n> \n> it was during an update, two people ran a command at the same time.\n> \n> user 1: BEGIN\n> user 1: NOTIFY\n> user 1: UPDATE\n> user 2: BEGIN\n> user 2: NOTIFY\n> user 1: END\n> user 2: UPDATE\n> user 2: END\n> \n> same command, so the queries are the same. the record duplicated was\n> the one being updated. i'll try to reproduce it.\n> \n> also, I had a unique index on the table, but that didn't seem to make\n> any difference.\n> \n> any ideas on how to delete one without deleting both?\n> \n\nI have the same problem with pg_listeners. Sometimes I find duplicate records\nwith same oid in the table inserted by concurrent transactions. I suspect \nthat the problem is caused by the notify but I'm not sure. Could you post\nsome test commnds to reproduce the problem ?\n\n-- \nMassimo Dal Zotto\n\n+----------------------------------------------------------------------+\n| Massimo Dal Zotto e-mail: [email protected] |\n| Via Marconi, 141 phone: ++39-461-534251 |\n| 38057 Pergine Valsugana (TN) www: http://www.cs.unitn.it/~dz/ |\n| Italy pgp: finger [email protected] |\n+----------------------------------------------------------------------+\n",
"msg_date": "Sat, 18 Jul 1998 17:32:32 +0200 (MET DST)",
"msg_from": "Massimo Dal Zotto <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] two records with same oid, freak accident?"
}
] |
[
{
"msg_contents": "\nstrangeness!\n\nit was the cocurrence of the command which was messing it up. i ran\nthe command several times in a row, each one backgrounded. sure\nenough, after a while I got an error \"Update: cannot insert duplicate\nkey into unique index\". I don't believe this error was present\nbefore, it barfs only after the record has been duplicated and *then*\nupdated, because then the primary key is checked for uniqueness.\n\nthus, the record is un-updatable.\nI plan to get rid of it using the following method:\n\n1) select the offending records into temp table\n2) delete the offending records from original table\n3) copy the temp table to stdout, then copy one of those records back in.\n4) drop temp table\n\nvery strange.\n",
"msg_date": "Tue, 14 Jul 1998 20:57:55 -0700 (PDT)",
"msg_from": "Brett McCormick <[email protected]>",
"msg_from_op": true,
"msg_subject": "more on phantom record"
}
] |
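(Editorial aside, not part of the original 1998 thread.) Brett's four-step temp-table recipe is still the usual way to collapse fully-duplicated rows when no column distinguishes the copies. Below is a simplified sketch of the same idea, using SQLite through Python and an invented table; unlike his version it rebuilds the whole table rather than only the offending records, and SQLite has no oid column, so the rows here are made exact duplicates on purpose.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE t (id INTEGER, val TEXT)")
# two rows are exact duplicates of each other, like the phantom record
cur.executemany("INSERT INTO t VALUES (?, ?)", [(1, "a"), (1, "a"), (2, "b")])

# 1) select one copy of every row into a temp table
cur.execute("CREATE TEMP TABLE fix AS SELECT DISTINCT * FROM t")
# 2) delete the records from the original table
cur.execute("DELETE FROM t")
# 3) copy the de-duplicated rows back in
cur.execute("INSERT INTO t SELECT * FROM fix")
# 4) drop the temp table
cur.execute("DROP TABLE fix")

rows = sorted(cur.execute("SELECT * FROM t").fetchall())
print(rows)  # [(1, 'a'), (2, 'b')]
```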
[
{
"msg_contents": ">> \n>> We only support subqueries in the target list. May have that expaned\n>> for 6.4.\n>\n>Not sure that EXISTS is allowed in target list...\n\nThe standard does not allow it, but it might be a nifty feature if it returned a boolean true or false.\n\nAndreas\n\n\n\n",
"msg_date": "Wed, 15 Jul 1998 18:39:09 +0200",
"msg_from": "Andreas Zeugswetter <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] \"internal error\" triggered by EXISTS()"
},
{
"msg_contents": "Andreas Zeugswetter wrote:\n> \n> >>\n> >> We only support subqueries in the target list. May have that expaned\n> >> for 6.4.\n> >\n> >Not sure that EXISTS is allowed in target list...\n> \n> The standard does not allow it, but it might be a nifty feature if \n> it returned a boolean true or false.\n\nI don't foresee problems with this.\nBTW, shouldn't we allow the same for IN, ANY and ALL?\n\nselect ..., x in (...), ...\n\nVadim\n",
"msg_date": "Thu, 16 Jul 1998 01:53:41 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] \"internal error\" triggered by EXISTS()"
}
] |
[
{
"msg_contents": "The currently-checked-in sources are not working very well for me.\nIn particular, I can't restore from my pg_dump because COPY IN\ncoredumps; and a little experimentation shows that COPY OUT does too.\n\nIs anyone else seeing this?\n\nThe immediate cause of the dumps seems to be trying to free() a pointer\nvalue 0x1. I suspect it's got something to do with the two-to-four-byte\natttypmod changeover, but can't prove that.\n\ntemplate1=> \\copy pg_shadow to -\nQUERY: COPY pg_shadow TO stdout\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally before or while processing the request.\nWe have lost the connection to the backend, so further processing is impossible.\n Terminating.\n\nBacktrace in backend's core file:\n\n#0 0x800bb108 in free ()\n#1 0x11cc90 in AllocSetFree (set=0x4007da98,\n pointer=0x1 <Address 0x1 out of bounds>) at aset.c:223\n#2 0x11d3b4 in PortalHeapMemoryFree (this=0x4007da98,\n pointer=0x4007daa8 \"postgres\") at portalmem.c:283\n#3 0x11cfa4 in MemoryContextFree (context=0x4007da98,\n pointer=0x1 <Address 0x1 out of bounds>) at mcxt.c:251\n#4 0x11d224 in pfree (pointer=0x1) at palloc.c:79\n#5 0x7ac60 in CopyTo (rel=0x40074380, binary=0 '\\000', oids=0 '\\000',\n fp=0x40001048, delim=0x3f500 \"\\t\") at copy.c:284\n#6 0x7a81c in DoCopy (relname=0x40001048 \"\", binary=0 '\\000', oids=0 '\\000',\n from=0 '\\000', pipe=1, filename=0x0, delim=0x3f500 \"\\t\") at copy.c:181\n#7 0xec768 in ProcessUtility (parsetree=0x4007d800, dest=Remote)\n at utility.c:759\n#8 0xea420 in pg_exec_query_dest (query_string=0x4007da98 \"\", dest=Remote)\n at postgres.c:706\n#9 0xea298 in pg_exec_query (query_string=0x4007da98 \"\") at postgres.c:602\n#10 0xeb26c in PostgresMain (argc=1073930048, argv=0x40007f40, real_argc=4,\n real_argv=0x4003b740) at postgres.c:1429\n#11 0xd0b70 in DoBackend (port=0x4003f9c0) at postmaster.c:1412\n#12 0xd0574 in BackendStartup (port=0x4003f9c0) at 
postmaster.c:1191\n#13 0xcfb68 in ServerLoop () at postmaster.c:725\n#14 0xcf66c in PostmasterMain (argc=1074256536, argv=0x7b033378)\n at postmaster.c:534\n#15 0x9f070 in main (argc=4, argv=0x7b033378) at main.c:93\n\n\nMeanwhile the following snippet of pg_dump output\n\ncopy pg_shadow from stdin;\ntgl\t301\tt\tt\tf\tt\t\\N\t\\N\ntree\t211\tt\tt\tf\tt\t\\N\t\\N\n\\.\n\ncauses a coredump, from which we get\nPQendcopy: resetting connection\nand the following backtrace:\n\n#0 0x800bb098 in free ()\n#1 0x11cc90 in AllocSetFree (set=0x4007e0e8,\n pointer=0x1 <Address 0x1 out of bounds>) at aset.c:223\n#2 0x11d3b4 in PortalHeapMemoryFree (this=0x4007e0e8, pointer=0x4007e0f8 \"\")\n at portalmem.c:283\n#3 0x11cfa4 in MemoryContextFree (context=0x4007e0e8,\n pointer=0x1 <Address 0x1 out of bounds>) at mcxt.c:251\n#4 0x11d224 in pfree (pointer=0x1) at palloc.c:79\n#5 0x74658 in CatalogIndexFetchTuple (heapRelation=0x40046940, idesc=0x1,\n skey=0xffffffff) at indexing.c:245\n#6 0x74ab4 in TypeOidIndexScan (heapRelation=0x40046940, typeId=1)\n at indexing.c:492\n#7 0x114060 in SearchSysCache (cache=0x4005bdf8, v1=25, v2=0, v3=0, v4=0)\n at catcache.c:972\n#8 0x1177d8 in SearchSysCacheTuple (cacheId=13, key1=25, key2=0, key3=0,\n key4=0) at syscache.c:427\n#9 0x7b988 in GetInputFunction (type=25) at copy.c:849\n#10 0x7b11c in CopyFrom (rel=0x400747b0, binary=0 '\\000', oids=0 '\\000',\n fp=0x40001038, delim=0x3f500 \"\\t\") at copy.c:505\n#11 0x7a81c in DoCopy (relname=0x40001038 \"\", binary=0 '\\000', oids=0 '\\000',\n from=1 '\\001', pipe=1, filename=0x0, delim=0x3f500 \"\\t\") at copy.c:181\n#12 0xec768 in ProcessUtility (parsetree=0x4007dc38, dest=Remote)\n at utility.c:759\n#13 0xea420 in pg_exec_query_dest (query_string=0x4007e0e8 \"\", dest=Remote)\n at postgres.c:706\n#14 0xea298 in pg_exec_query (query_string=0x4007e0e8 \"\") at postgres.c:602\n#15 0xeb26c in PostgresMain (argc=1073930048, argv=0x40007f40, real_argc=4,\n real_argv=0x4003b740) at 
postgres.c:1429\n#16 0xd0b70 in DoBackend (port=0x4003fde0) at postmaster.c:1412\n#17 0xd0574 in BackendStartup (port=0x4003fde0) at postmaster.c:1191\n#18 0xcfb68 in ServerLoop () at postmaster.c:725\n#19 0xcf66c in PostmasterMain (argc=1074258152, argv=0x7b033378)\n at postmaster.c:534\n#20 0x9f070 in main (argc=4, argv=0x7b033378) at main.c:93\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 15 Jul 1998 14:41:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Current sources dump core on COPY IN/OUT"
},
{
"msg_contents": "> The currently-checked-in sources are not working very well for me.\n> In particular, I can't restore from my pg_dump because COPY IN\n> coredumps; and a little experimentation shows that COPY OUT does too.\n> \n> Is anyone else seeing this?\n> \n> The immediate cause of the dumps seems to be trying to free() a pointer\n> value 0x1. I suspect it's got something to do with the two-to-four-byte\n> atttypmod changeover, but can't prove that.\n> \n> template1=> \\copy pg_shadow to -\n> QUERY: COPY pg_shadow TO stdout\n> pqReadData() -- backend closed the channel unexpectedly.\n> This probably means the backend terminated abnormally before or while processing the request.\n> We have lost the connection to the backend, so further processing is impossible.\n> Terminating.\n\nThanks. Should be fixed now.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Wed, 15 Jul 1998 14:52:57 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Current sources dump core on COPY IN/OUT"
}
] |
[
{
"msg_contents": "Ok, I'm not sure that LLL will appear in 6.4 but it's good time to \ndiscuss about it. \n \nFirst, PostgreSQL is multi-version system due to its \nnon-overwriting storage manager. And so, first proposal is use \nthis feature (multi-versioning) in LLL implementation. \n \nIn multi-version systems access methods don't use locks to read \nconsistent data and so readers don't block writers, writers don't \nblock readers and only the same-row writers block writers. In such \nsystems access methods returns snapshot of data as they were in \n_some_ point in time. For read committed isolation level this \nmoment is the time when statement began. For serialized isolation \nlevel this is the time when current transaction began. \n \nOracle uses rollback segments to reconstract blocks that were \nchanged after statement/transaction began and so statement sees \nonly data that were committed by then. \n \nIn our case we have to analyze tuple xmin/xmax to determine _when_ \ncorresponding transaction was committed in regard to the last \ntransaction (LCX) that was committed when statement/transaction\nbegan. \n \nIf xmin/xmax was committed before LCX then tuple \ninsertion/deletion is visible to statement, else - not visible. \n \nTo achieve this, the second proposal is to use special SCN - \nSystem Change Number (C) Oracle :) - that will be incremented by 1 \nby each transaction commit. Each commited transaction will have \ncorresponding SCN (4 bytes -> equal to sizeof XID). \n \nWe have to keep XID --> SCN mapping as long as there is running \ntransaction that is \"interested\" in XID: when transaction begins \nit will determine the first (the oldest) running transaction XID \nand this will be the minimum XID whose SCN transaction would like \nto know. 
\n \nAccess methods will have to determine SCN for xmin/xmax only if \nFRX <= xmin/xmax <= LSX, where FRX is XID of first (oldest) \nrunning transactions and LSX is last started transaction - in the \nmoment when statement (for read committed) or transaction (for \nserialized) began. For such xmin/xmax their SCNs will be compared\nwith SCN determined in the moment of statement/transaction \nbegin... \n \nChanges made by xmin/xmax < FRX are visible to \nstatement/transaction, and changes made by xmin/xmax > LSX are not \nvisible. Without xmin/xmax SCN lookup. \n \nFor XID --> SCN mapping I propose to use the simplest schema: \nordered queue of SCNs (or something like this) - i.e. keep SCNs \nfor all transactions from the first one whose SCN could be \nrequired by some running transaction to the last started. \n \nThis queue must be shared! \n \nThe size of this queue and average number of commits/aborts per \nsecond will define how long transactions will be able to run. 30 \nxacts/sec and 400K of queue will enable 30 - 60 minuts running \ntransactions... \n \nKeeping queue in shared memmory may be unacceptable in some \ncases... mmap or shared buffer pool could be used to access queue.\nWe'll see... \n \nAlso note that Oracle has special READ ONLY transactions mode. \nREAD ONLY transactions are disallowed to change anything in the \ndatabase. This is good mode for pg_dump (etc) long running \napplications. Because of no one will be \"interested\" in SCN of \nREAD ONLY transactions - such transactions can make private copy \nof the queue part and after this queue could be truncated... \n \nHaving 4 bytes per SCN enable to use special values to mark \ncorresponding transaction as running or aborted and avoid pg_log \nlookup when we need in both SCN and state of transaction. \n \n...Well, it's time to sleep :) \n \nTo be continued... \n \nComments ? \n \nVadim\n",
"msg_date": "Thu, 16 Jul 1998 07:46:25 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": true,
"msg_subject": "proposals for LLL, part 1"
},
{
"msg_contents": "I am retaining your entire message here for reference.\n\nI have a good solution for this. It will require only 4k of shared\nmemory, and will have no restrictions on the age or number of\ntransactions.\n\nFirst, I think we only want to implement \"read committed isolation\nlevel\", not serialized. Not sure why someone would want serialized.\n\nOK, when a backend is looking at a row that has been committed, it must\ndecide if the row was committed before or after my transaction started.\nIf the transaction commit id(xmin) is greater than our current xid, we\nknow we should not look at it because it is for a transaction that\nstarted after our own transaction.\n\nThe problem is for transactions started before our own (have xmin's less\nthan our own), and may have committed before or after our transaction.\n\nHere is my idea. We add a field to the shared memory Proc structure\nthat can contain up to 32 transaction ids. When a transaction starts,\nwe spin though all other open Proc structures, and record all\ncurrently-running transaction ids in our own Proc field used to store up\nto 32 transaction ids. While we do this, we remember the lowest of\nthese open transaction ids.\n\nThis is our snapshot of current transactions at the time our transaction\nstarts. While analyzing a row, if it is greater than our transaction\nid, then the transaction was not even started before our transaction. \nIf the xmin is lower than the min transaction id that we remembered from\nthe Proc structures, it was committed before our transaction started. \nIf it is greater than or equal to the min remembered transaction id, we\nmust spin through our stored transaction ids. If it is in the stored\nlist, we don't look at the row, because that transaction was not\ncommitted when we started our transaction. If it is not in the list, it\nmust have been committed before our transaction started. 
We know this\nbecause any backend starting a transaction after ours would get a\ntransaction id higher than ours.\n\nComments?\n\n> Ok, I'm not sure that LLL will appear in 6.4 but it's good time to \n> discuss about it. \n> \n> First, PostgreSQL is multi-version system due to its \n> non-overwriting storage manager. And so, first proposal is use \n> this feature (multi-versioning) in LLL implementation. \n> \n> In multi-version systems access methods don't use locks to read \n> consistent data and so readers don't block writers, writers don't \n> block readers and only the same-row writers block writers. In such \n> systems access methods returns snapshot of data as they were in \n> _some_ point in time. For read committed isolation level this \n> moment is the time when statement began. For serialized isolation \n> level this is the time when current transaction began. \n> \n> Oracle uses rollback segments to reconstract blocks that were \n> changed after statement/transaction began and so statement sees \n> only data that were committed by then. \n> \n> In our case we have to analyze tuple xmin/xmax to determine _when_ \n> corresponding transaction was committed in regard to the last \n> transaction (LCX) that was committed when statement/transaction\n> began. \n> \n> If xmin/xmax was committed before LCX then tuple \n> insertion/deletion is visible to statement, else - not visible. \n> \n> To achieve this, the second proposal is to use special SCN - \n> System Change Number (C) Oracle :) - that will be incremented by 1 \n> by each transaction commit. Each commited transaction will have \n> corresponding SCN (4 bytes -> equal to sizeof XID). \n> \n> We have to keep XID --> SCN mapping as long as there is running \n> transaction that is \"interested\" in XID: when transaction begins \n> it will determine the first (the oldest) running transaction XID \n> and this will be the minimum XID whose SCN transaction would like \n> to know. 
\n> \n> Access methods will have to determine SCN for xmin/xmax only if \n> FRX <= xmin/xmax <= LSX, where FRX is XID of first (oldest) \n> running transactions and LSX is last started transaction - in the \n> moment when statement (for read committed) or transaction (for \n> serialized) began. For such xmin/xmax their SCNs will be compared\n> with SCN determined in the moment of statement/transaction \n> begin... \n> \n> Changes made by xmin/xmax < FRX are visible to \n> statement/transaction, and changes made by xmin/xmax > LSX are not \n> visible. Without xmin/xmax SCN lookup. \n> \n> For XID --> SCN mapping I propose to use the simplest schema: \n> ordered queue of SCNs (or something like this) - i.e. keep SCNs \n> for all transactions from the first one whose SCN could be \n> required by some running transaction to the last started. \n> \n> This queue must be shared! \n> \n> The size of this queue and average number of commits/aborts per \n> second will define how long transactions will be able to run. 30 \n> xacts/sec and 400K of queue will enable 30 - 60 minuts running \n> transactions... \n> \n> Keeping queue in shared memmory may be unacceptable in some \n> cases... mmap or shared buffer pool could be used to access queue.\n> We'll see... \n> \n> Also note that Oracle has special READ ONLY transactions mode. \n> READ ONLY transactions are disallowed to change anything in the \n> database. This is good mode for pg_dump (etc) long running \n> applications. Because of no one will be \"interested\" in SCN of \n> READ ONLY transactions - such transactions can make private copy \n> of the queue part and after this queue could be truncated... \n> \n> Having 4 bytes per SCN enable to use special values to mark \n> corresponding transaction as running or aborted and avoid pg_log \n> lookup when we need in both SCN and state of transaction. \n> \n> ...Well, it's time to sleep :) \n> \n> To be continued... \n> \n> Comments ? 
\n> \n> Vadim\n> \n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Thu, 16 Jul 1998 12:14:51 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] proposals for LLL, part 1"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> I am retaining your entire message here for reference.\n> \n> I have a good solution for this. It will require only 4k of shared\n> memory, and will have no restrictions on the age or number of\n> transactions.\n> \n> First, I think we only want to implement \"read committed isolation\n> level\", not serialized. Not sure why someone would want serialized.\n\nSerialized is DEFAULT isolation level in standards.\nIt must be implemented. Would you like inconsistent results\nfrom pg_dump, etc?\n\n> \n> OK, when a backend is looking at a row that has been committed, it must\n> decide if the row was committed before or after my transaction started.\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> If the transaction commit id(xmin) is greater than our current xid, we\n> know we should not look at it because it is for a transaction that\n> started after our own transaction.\n\nIt's right for serialized, not for read committed.\nIn read committed backend must decide if the row was committed\nbefore or after STATEMENT started...\n\n> \n> The problem is for transactions started before our own (have xmin's less\n> than our own), and may have committed before or after our transaction.\n> \n> Here is my idea. We add a field to the shared memory Proc structure\n> that can contain up to 32 transaction ids. When a transaction starts,\n> we spin though all other open Proc structures, and record all\n> currently-running transaction ids in our own Proc field used to store up\n> to 32 transaction ids. While we do this, we remember the lowest of\n> these open transaction ids.\n> \n> This is our snapshot of current transactions at the time our transaction\n> starts. 
While analyzing a row, if it is greater than our transaction\n> id, then the transaction was not even started before our transaction.\n> If the xmin is lower than the min transaction id that we remembered from\n> the Proc structures, it was committed before our transaction started.\n> If it is greater than or equal to the min remembered transaction id, we\n> must spin through our stored transaction ids. If it is in the stored\n> list, we don't look at the row, because that transaction was not\n> committed when we started our transaction. If it is not in the list, it\n> must have been committed before our transaction started. We know this\n> because if any backend starting a transaction after ours would get a\n> transaction id higher than ours.\n\nYes, this is way.\nBut, first, why should we store running transaction xids in shmem ?\nWho is interested in these xids?\nWe have to store in shmem only min of these xids: vacuum must\nnot delete rows deleted by transactions with xid greater \n(or equal) than this min xid or we risk to get inconsistent \nresults...\nAlso, as you see, we have to lock Proc structures in shmem\nto get list of xids for each statement in read committed \nmode...\n\nI don't know what way is better but using list of xids\nis much easy to implement...\n\nVadim\n",
"msg_date": "Fri, 17 Jul 1998 12:19:34 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] proposals for LLL, part 1"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> > \n> > I am retaining your entire message here for reference.\n> > \n> > I have a good solution for this. It will require only 4k of shared\n> > memory, and will have no restrictions on the age or number of\n> > transactions.\n> > \n> > First, I think we only want to implement \"read committed isolation\n> > level\", not serialized. Not sure why someone would want serialized.\n> \n> Serialized is DEFAULT isolation level in standards.\n> It must be implemented. Would you like inconsistent results\n> from pg_dump, etc?\n\nOK, I didn't know that.\n\n> \n> > \n> > OK, when a backend is looking at a row that has been committed, it must\n> > decide if the row was committed before or after my transaction started.\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> > If the transaction commit id(xmin) is greater than our current xid, we\n> > know we should not look at it because it is for a transaction that\n> > started after our own transaction.\n> \n> It's right for serialized, not for read committed.\n> In read committed backend must decide if the row was committed\n> before or after STATEMENT started...\n\n\nOK.\n\n> \n> > \n> > The problem is for transactions started before our own (have xmin's less\n> > than our own), and may have committed before or after our transaction.\n> > \n> > Here is my idea. We add a field to the shared memory Proc structure\n> > that can contain up to 32 transaction ids. When a transaction starts,\n> > we spin though all other open Proc structures, and record all\n> > currently-running transaction ids in our own Proc field used to store up\n> > to 32 transaction ids. While we do this, we remember the lowest of\n> > these open transaction ids.\n> > \n> > This is our snapshot of current transactions at the time our transaction\n> > starts. 
While analyzing a row, if it is greater than our transaction\n> > id, then the transaction was not even started before our transaction.\n> > If the xmin is lower than the min transaction id that we remembered from\n> > the Proc structures, it was committed before our transaction started.\n> > If it is greater than or equal to the min remembered transaction id, we\n> > must spin through our stored transaction ids. If it is in the stored\n> > list, we don't look at the row, because that transaction was not\n> > committed when we started our transaction. If it is not in the list, it\n> > must have been committed before our transaction started. We know this\n> > because if any backend starting a transaction after ours would get a\n> > transaction id higher than ours.\n> \n> Yes, this is way.\n> But, first, why should we store running transaction xids in shmem ?\n> Who is interested in these xids?\n\n> We have to store in shmem only min of these xids: vacuum must\n> not delete rows deleted by transactions with xid greater \n> (or equal) than this min xid or we risk to get inconsistent \n> results...\n\n> Also, as you see, we have to lock Proc structures in shmem\n> to get list of xids for each statement in read committed \n> mode...\n\nYou are correct. We need to lock Proc structures during our scan, but we\ndon't need to keep the list in shared memory. No reason to do it. Do\nwe have to keep the Procs locked while we get our table data locks? I\nsure hope not. Not sure how we are going to prevent someone from\ncommitting their transaction between our Proc scan and when we start our\ntransaction. Not even sure if I should be worried about that.\n\n\n> I don't know what way is better but using list of xids\n> is much easy to implement...\n\nSure, do a list. Getting the min allows you to reduce the number of\ntimes it has to be scanned. 
I had not thought about vacuum, but keeping\nthe min in shared memory will certainly fix that issue.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Fri, 17 Jul 1998 00:53:39 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] proposals for LLL, part 1"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> You are correct. We need to lock Proc stuctures during our scan, but we\n> don't need to keep the list in shared memory. No reason to do it. Do\n> we have to keep the Proc's locked while we get our table data locks. I\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nNo! Only while we are scanning Procs...\n\n> sure hope not. Not sure how we are going prevent someone from\n> committing their transaction between our Proc scan and when we start our\n> transaction. Not even sure if I should be worried about that.\n\nWe shouldn't... It doesn't matter.\n\nVadim\n",
"msg_date": "Fri, 17 Jul 1998 12:58:41 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] proposals for LLL, part 1"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> > \n> > You are correct. We need to lock Proc stuctures during our scan, but we\n> > don't need to keep the list in shared memory. No reason to do it. Do\n> > we have to keep the Proc's locked while we get our table data locks. I\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> No! Only while we are scanning Procs...\n> \n> > sure hope not. Not sure how we are going prevent someone from\n> > committing their transaction between our Proc scan and when we start our\n> > transaction. Not even sure if I should be worried about that.\n> \n> We shouldn't... It doesn't matter.\n\nOne more item. If we don't lock Proc between the scan and our\naquisition of a transaction id, it is possible some other backend will\nget a transaction id between the time we scan the Proc structure and\nwhen we get our transaction id, causing us to look at rows that are part\nof a non-committed transaction. I think we have to get our transaction\nid first, before scanning Proc.\n\nThere is definately an area of vulnerabilty there. I am now wondering\nhow much we need to lock Proc during the scan.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Fri, 17 Jul 1998 14:36:24 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] proposals for LLL, part 1"
}
] |
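The visibility test Bruce describes in the thread above (compare a row's xmin against our own xid, the remembered minimum, and the stored list of in-flight xids) can be sketched as a small C function. This is a minimal illustration with hypothetical names, not the actual backend code, and it ignores wraparound and the row's xmax for simplicity:

```c
#include <stdbool.h>

/*
 * Sketch of the snapshot visibility rule discussed above.
 * "snap" holds the xids that were still running when our transaction
 * started; "snap_min" is the smallest of them.  All names here are
 * hypothetical, not PostgreSQL internals.
 */
typedef unsigned int Xid;

static bool
xid_committed_before_us(Xid xmin, Xid my_xid,
                        Xid snap_min, const Xid *snap, int nsnap)
{
    int i;

    if (xmin >= my_xid)         /* started at or after us: not visible */
        return false;
    if (xmin < snap_min)        /* finished before we even started */
        return true;
    for (i = 0; i < nsnap; i++) /* was it still running at our start? */
        if (snap[i] == xmin)
            return false;
    return true;                /* committed before our start */
}
```

Keeping `snap_min` around is what lets most rows skip the list scan entirely, which is the point Bruce makes about remembering the minimum.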
[
{
"msg_contents": "Hi all,\n\n I got strange result with PQgetisnull, say\n\nfor empty table(I think that it's empty so null)\n\nI guess that\nafter fetch some selection\nPQgetisnull(result, 0, 0) should give me \"1 or true\"\nBut\nit gives me\nERROR! field 0(of 0) of row 0(of 0) is not available... Segmentation\nFault\n\nso I tried\nif (PQntuples(result) == 0)\n...\n\nBut PQntules gives me \"1\" not \"0\" !\n\ndo I have to check result with\nchar* value = PQgetvalue(result, 0, 0);\nand test value is \"\" or not? any idea???\n\nI'm using v6.3.2 on linux && Solaris.\n\nBest Regards, C.S.Park\n\n",
"msg_date": "Thu, 16 Jul 1998 17:10:43 +0900",
"msg_from": "\"Park, Chul-Su\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "[BUG?] strange PQgetisnull"
},
{
"msg_contents": "\"Park, Chul-Su\" <[email protected]> writes:\n> I got strange result with PQgetisnull, say\n> for empty table(I think that it's empty so null)\n> I guess that\n> after fetch some selection\n> PQgetisnull(result, 0, 0) should give me \"1 or true\"\n> But\n> it gives me\n> ERROR! field 0(of 0) of row 0(of 0) is not available... Segmentation\n> Fault\n\nPQgetisnull is buggy in 6.3.2: it range-checks the tuple and field\nnumbers, and complains if they are out of range ... but then falls\nthrough and tries to reference the tuple info anyway. Thus, segfault.\n\nIt should return a default value (probably 1 to pretend the field is\nNULL) when the indexes are out of range. This is already fixed in the\ncurrent development sources, but if you want to stick with a 6.3.2\nserver then you will have to modify fe-exec.c yourself.\n\n> so I tried\n> if (PQntuples(result) == 0)\n> ...\n> But PQntules gives me \"1\" not \"0\" !\n\nThe PQgetisnull error message you quoted above indicates (after looking\nat the 6.3.2 sources) that nfields was 1 and ntuples was 0. I think you\nare testing the results of a different query here.\n\n> do I have to check result with\n> char* value = PQgetvalue(result, 0, 0);\n> and test value is \"\" or not? any idea???\n\nNo, you should be checking PQntuples and perhaps also PQnfields to be\nsure that the indexes you are going to use are OK.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 16 Jul 1998 10:11:23 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [BUG?] strange PQgetisnull "
},
{
"msg_contents": "> Hi all,\n> \n> I got strange result with PQgetisnull, say\n> \n> for empty table(I think that it's empty so null)\n> \n> I guess that\n> after fetch some selection\n> PQgetisnull(result, 0, 0) should give me \"1 or true\"\n> But\n> it gives me\n> ERROR! field 0(of 0) of row 0(of 0) is not available... Segmentation\n> Fault\n\nYou can't check for isnull on a Result that returns no rows. It is only\nfor looking at fields of an existing returned row.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Thu, 16 Jul 1998 11:27:11 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [BUG?] strange PQgetisnull"
}
] |
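The fix Tom describes above — range-check the tuple and field numbers, then return a safe default (treat the field as NULL) instead of falling through and dereferencing — can be shown with a self-contained mock. This is NOT libpq; `MockResult` is a stand-in for `PGresult` so the pattern is runnable on its own:

```c
#include <stdbool.h>

/*
 * Mock of the PQgetisnull range-check fix discussed above.  The 6.3.2
 * bug was complaining about out-of-range indexes and then dereferencing
 * the tuple data anyway (hence the segfault); the fix is to bail out
 * with a default before touching the data.
 */
typedef struct
{
    int         ntuples;
    int         nfields;
    const bool *isnull;      /* ntuples * nfields flags, row-major */
} MockResult;

static int
mock_getisnull(const MockResult *res, int tup, int field)
{
    if (tup < 0 || tup >= res->ntuples ||
        field < 0 || field >= res->nfields)
        return 1;            /* out of range: pretend NULL, don't crash */
    return res->isnull[tup * res->nfields + field] ? 1 : 0;
}
```

In real client code the equivalent discipline is to test `PQntuples()` and `PQnfields()` before calling any of the per-field accessors, exactly as Tom recommends.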
[
{
"msg_contents": "It looks to me like pgsql-announce has found its way onto the spammers'\ntarget lists. Since the announce list was switched to unmoderated\nstatus at the beginning of May, my mail logs show its traffic as\n\n\t\tReal messages\t\tSpam\n\nMay\t\t3\t\t\t0\nJune\t\t9\t\t\t2\nJuly\t\t1\t\t\t4\n\n... and July's only half over.\n\nMay I suggest that pgsql-announce should go back to moderated status?\nOtherwise I foresee being forced to unsubscribe from it soon.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 16 Jul 1998 10:40:04 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "How about re-moderating pgsql-announce?"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> It looks to me like pgsql-announce has found its way onto the spammers'\n> target lists. Since the announce list was switched to unmoderated\n> status at the beginning of May, my mail logs show its traffic as\n> \n> \t\tReal messages\t\tSpam\n> \n> May\t\t3\t\t\t0\n> June\t\t9\t\t\t2\n> July\t\t1\t\t\t4\n> \n> ... and July's only half over.\n> \n> May I suggest that pgsql-announce should go back to moderated status?\n> Otherwise I foresee being forced to unsubscribe from it soon.\n\nAlternatively, could we just bounce any mail from non-subscribers? We\ncould say something like \"Due to the increasing amount of unsolicited\nemail on this list we require that posters are subscribed to the list.\nIn order to subscribe ... Then repost your message and it will be\naccepted.\" I doubt any spammers would go to the trouble of\nsubscribing to the list.\n\nOcie\n\n",
"msg_date": "Thu, 16 Jul 1998 11:10:18 -0700 (PDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] How about re-moderating pgsql-announce?"
},
{
"msg_contents": "[email protected] writes:\n> Alternatively, could we just bounce any mail from non-subscribers? We\n> could say something like \"Due to the increasing amount of unsolicited\n> email on this list we require that posters are subscribed to the list.\n> In order to subscribe ... Then repost your message and it will be\n> accepted.\" I doubt any spammers would go to the trouble of\n> subscribing to the list.\n\nEven better: accept messages from known subscribers and post them at\nonce. Messages from non-subscribers are silently forwarded to the\nlist admin, who either approves them for posting or /dev/nulls them.\nThis should keep the spam out but save the list admin from having to\nhand-process most of the routine traffic.\n\n(I *think* the above is fairly easy to do with recent majordomo\nreleases, but I don't know what hub.org is running. Simply switching\npgsql-announce back to moderated status would certainly be easy, which\nis why I suggested it to start with.)\n\nI don't like the idea of auto-bouncing back to the message originator,\nfor two reasons:\n* There have been reported instances of spammers subscribing to lists\n for just long enough to send their spam. I think we are best off\n not advertising that there is any filtering mechanism in place.\n* The return address of a spam is generally fraudulent. It often points\n to an innocent victim who has incurred the spammer's displeasure (by\n complaining about a previous spam). That person gets to bear the\n brunt of all the delivery failure error messages, poorly targeted\n complaints, etc from the spam. We shouldn't contribute to this form\n of mailbombing by auto-bouncing messages.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 17 Jul 1998 10:25:03 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] How about re-moderating pgsql-announce? "
},
{
"msg_contents": "On Fri, 17 Jul 1998, Tom Lane wrote:\n\n> (I *think* the above is fairly easy to do with recent majordomo\n> releases, but I don't know what hub.org is running. Simply switching\n> pgsql-announce back to moderated status would certainly be easy, which\n> is why I suggested it to start with.)\n\n\tI'm running a *fairly* recent version of Majordomo here...let me\nknow what you feel should be set, and to what, and I'll make the change...\n\n\n",
"msg_date": "Fri, 17 Jul 1998 10:38:36 -0400 (EDT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] How about re-moderating pgsql-announce? "
}
] |
[
{
"msg_contents": "Hi Hackers,\n\n When I record ~ 1000 records && blobs and try to remove, it takes\nforever!\ne.g.\n\ncreate table PNT (\n id int not null, -- database record number\n exp int not null, -- experiment number\n run int not null, -- run\n run_to int not null, -- run to(valid range)\n version int not null, -- version number\n datatype text default 'blob', -- data type\n created timestamp default current_timestamp, --\ncreation time\n modified timestamp default current_timestamp, --\nmodification time\n owner name default getpgusername(), -- owner\n loid oid default 0, -- reference to pnt bank\n\nconstraint PNT_con check(run>0 AND run<=run_to AND version>0)\n);\n\n... and deposit ~ 1000 blobs(large objects with size ~ 10k), it takes ~\n2 sec/record seems to be reonable.\nBut, deleting with\n\n result = PQexec(conn,\n \"DECLARE pntcur CURSOR FOR \"\n \"SELECT count(lo_unlink(int4(oid_text(loid)))) \"\n \"FROM PNT\"\n \";\");\n PQclear(result);\n result = PQexec(conn, \"FETCH 1 IN pntcur;\");\n\nOR\n\n sprintf(cmd,\n \"DECLARE pntcur CURSOR FOR \"\n \"SELECT count(lo_unlink(int4(oid_text(loid)))) \"\n \"FROM PNT WHERE exp = %d\"\n \";\"\n , exp);\n result = PQexec(conn, cmd);\n PQclear(result);\n result = PQexec(conn, \"FETCH 1 IN pntcur;\");\n\ntakes forever! \"destroydb\" also... something to do with inefficient\n\"inv-tree\"?\n\nis there any plan to recall \"simple unix file\" based blobs? I guess\nthat \"inv\" based blob seems to be\nreally inefficient... any comment?\n\nbest regards, cs\n\n\n",
"msg_date": "Fri, 17 Jul 1998 00:48:16 +0900",
"msg_from": "\"Park, Chul-Su\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "[QUESTIONS] slow \"select lo_unlink(..) where ...;\" ?"
}
] |
[
{
"msg_contents": "I understand create table/create index etc. are not rollback-able\noperations. As a result, if creation of a table is interrupted in the\nmiddle of process, corresponding table file is left remain. This\nprevents creating a new table with the same name. The only workaround\nlooks removing the file by hand. Is this safe, or potentially has some\ndangers?\n--\nTatsuo Ishii\[email protected]\n\n",
"msg_date": "Fri, 17 Jul 1998 10:37:24 +0900",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "aborting create table in the middle of process"
}
] |
[
{
"msg_contents": "Stan Brown wrote:\n> \n> >\n> >First, PostgreSQL is multi-version system due to its\n> >non-overwriting storage manager. And so, first proposal is use\n> >this feature (multi-versioning) in LLL implementation.\n> >\n> \n> I must ask one basic question here. Since we dleted tme travel, and the\n> non-overwriting storage manager is no longer required, should we at\n> least discuss changing that, either as a part of the LLC work, or prior\n> to it.\n> \n> I think one of the primary reasons to do so would be to eliminate\n> vacumm. Would this not be better for a system that needs to be up\n> 24x7x365?\n\nYes, this is very important question...\n\nIn original postgres there was dedicated vacuum process...\nVacuuming without human administration is possible but\nin any case commit in non-overwriting system requires\n~2 data block writes (first - to write changes, second - to\nwrite updated xmin/xmax statuses). In WAL systems only\n1 data block write required...\n\nOk, we have to decide two issues about what would we like\nto use in future:\n\n1. type of storage manager/transaction system - \n\n WAL or non-overwriting.\n\n2. type of concurrency/consistency control -\n \n Locking or multi-versions.\n\nThese are quite different issues!\n\nOracle is WAL and multi-version system!\n\nWe could implement multi-version control now and switch\nto WAL latter...\n\nIf we decide that locking is ok for concurrency/consistency\nthen it's better to switch to WAL before implementing LLL.\n\nI personally very like multi-versions...\n\nComments/votes..?\n\nVadim\nP.S. I'll be off-line up to the monday...\n",
"msg_date": "Fri, 17 Jul 1998 13:00:10 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] proposals for LLL, part 1"
},
{
"msg_contents": "> Yes, this is very important question...\n> \n> In original postgres there was dedicated vacuum process...\n> Vacuuming without human administration is possible but\n> in any case commit in non-overwriting system requires\n> ~2 data block writes (first - to write changes, second - to\n> write updated xmin/xmax statuses). In WAL systems only\n> 1 data block write required...\n> \n> Ok, we have to decide two issues about what would we like\n> to use in future:\n> \n> 1. type of storage manager/transaction system - \n> \n> WAL or non-overwriting.\n\nCan you explain WAL. I understand locking vs. multi-version.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Fri, 17 Jul 1998 01:15:37 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] proposals for LLL, part 1"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> >\n> > Ok, we have to decide two issues about what would we like\n> > to use in future:\n> >\n> > 1. type of storage manager/transaction system -\n> >\n> > WAL or non-overwriting.\n> \n> Can you explain WAL. I understand locking vs. multi-version.\n\nWrite Ahead Log - log of changes. \n\nVadim\n",
"msg_date": "Mon, 20 Jul 1998 11:26:40 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] proposals for LLL, part 1"
},
{
"msg_contents": "> Yes, this is very important question...\n> \n> In original postgres there was dedicated vacuum process...\n> Vacuuming without human administration is possible but\n> in any case commit in non-overwriting system requires\n> ~2 data block writes (first - to write changes, second - to\n> write updated xmin/xmax statuses). In WAL systems only\n> 1 data block write required...\n\nDoesn't a WAL have do an update by writing the old row to a log, then\nwrite the changes to the real table? It is only inserts that have only\none write?\n\n> \n> Ok, we have to decide two issues about what would we like\n> to use in future:\n> \n> 1. type of storage manager/transaction system - \n> \n> WAL or non-overwriting.\n> \n> 2. type of concurrency/consistency control -\n> \n> Locking or multi-versions.\n\nIf we could just get superceeded row reuse without vacuum, we can stick\nwith non-overwriting, can't we?\n\n> \n> These are quite different issues!\n> \n> Oracle is WAL and multi-version system!\n> \n> We could implement multi-version control now and switch\n> to WAL latter...\n> \n> If we decide that locking is ok for concurrency/consistency\n> then it's better to switch to WAL before implementing LLL.\n> \n> I personally very like multi-versions...\n\nOK, now I have to ask what multi-version is.\n\nI have to read that Gray book. Did you get my e-mail on it?\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 21 Jul 1998 11:10:38 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] proposals for LLL, part 1"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > Yes, this is very important question...\n> >\n> > In original postgres there was dedicated vacuum process...\n> > Vacuuming without human administration is possible but\n> > in any case commit in non-overwriting system requires\n> > ~2 data block writes (first - to write changes, second - to\n> > write updated xmin/xmax statuses). In WAL systems only\n> > 1 data block write required...\n> \n> Doesn't a WAL have do an update by writing the old row to a log, then\n> write the changes to the real table? It is only inserts that have only\n> one write?\n\nI was wrong: WAL systems need in 2 writes (data block and log block)\nand we need in 3 writes (data block, log block and data block\nwith updated xact statuses). But in commit time WAL have to\nfsync only log (Oracle places new data there and keep old\ndata in rollback segments) and we have to fsync 2 files.\n\n> \n> >\n> > Ok, we have to decide two issues about what would we like\n> > to use in future:\n> >\n> > 1. type of storage manager/transaction system -\n> >\n> > WAL or non-overwriting.\n> \n> If we could just get superceeded row reuse without vacuum, we can stick\n> with non-overwriting, can't we?\n\nI hope...\n\n> >\n> > I personally very like multi-versions...\n> \n> OK, now I have to ask what multi-version is.\n\nThis is our ability to return snapshot of data committed\nin some point in time. Note, that we are able to return\ndifferent snapshots to different backends, simultaneously.\n\n> \n> I have to read that Gray book. Did you get my e-mail on it?\n\nThanks and sorry... Yes, this could help me. Do you have my \nmail address ?\n\nVadim\n",
"msg_date": "Wed, 22 Jul 1998 00:33:22 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] proposals for LLL, part 1"
},
{
"msg_contents": "On Fri, Jul 17, 1998 at 01:00:10PM +0800, Vadim Mikheev wrote:\n> In original postgres there was dedicated vacuum process...\n> Vacuuming without human administration is possible but\n> in any case commit in non-overwriting system requires\n> ~2 data block writes (first - to write changes, second - to\n> write updated xmin/xmax statuses). In WAL systems only\n> 1 data block write required...\n\nThen the arguments are clearly in favor of WAL.\n\n> Oracle is WAL and multi-version system!\n\nWhile Oracle has some faults (or more) I agree with that choice. \n\n> We could implement multi-version control now and switch\n> to WAL latter...\n\nOnce again I fully agree.\n\n> I personally very like multi-versions...\n\nMe too.\n\nMichael\n-- \nDr. Michael Meskes\t\[email protected], [email protected]\nGo SF49ers! Go Rhein Fire!\tUse Debian GNU/Linux! \n",
"msg_date": "Mon, 27 Jul 1998 16:54:07 +0200",
"msg_from": "\"Dr. Michael Meskes\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] proposals for LLL, part 1"
}
] |
[
{
"msg_contents": "\nHello,\n\n\tSorry to disturb the hacker list for a question that seems not so hard.\nBut I got no answers elsewhere...\n Is is definitively impossible to use C++ to define new SQL functions or operators\nin SQL ? (I need this in order to use a big geometric library, to produce e.g. polygon\nintersections, or harder).\n\n\t I manage to add C functions, with a\n\n\t create function ... returns .. as language 'c';\n\n but the linking style of c++ is rather different.\nDo you have an idea ?\n \nThanks,\n\nDavid\[email protected]\n",
"msg_date": "Fri, 17 Jul 1998 10:59:17 +0200 (MET DST)",
"msg_from": "David Gross <[email protected]>",
"msg_from_op": true,
"msg_subject": "using C++ to define new functions"
},
{
"msg_contents": "David Gross wrote:\n> \n> Hello,\n> \n> Sorry to disturb the hacker list for a question that seems not so hard.\n> But I got no answers elsewhere...\n> Is is definitively impossible to use C++ to define new SQL functions or operators\n> in SQL ? (I need this in order to use a big geometric library, to produce e.g. polygon\n> intersections, or harder).\n>[...]\n> but the linking style of c++ is rather different.\n> Do you have an idea ?\n\nHow about using extern \"C\" {...} ?\n\nMike\n\n-- \nWWW: http://www.lodz.pdi.net/~mimo tel: Int. Acc. Code + 48 42 2148340\nadd: Michal Mosiewicz * Bugaj 66 m.54 * 95-200 Pabianice * POLAND\n",
"msg_date": "Mon, 20 Jul 1998 01:46:27 +0200",
"msg_from": "Michal Mosiewicz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] using C++ to define new functions"
},
{
"msg_contents": "Michal Mosiewicz <[email protected]> writes:\n> David Gross wrote:\n>> but the linking style of c++ is rather different.\n>> Do you have an idea ?\n\n> How about using extern \"C\" {...} ?\n\nI think the major problem here is that David would probably like the\nconstructors for any global-level variables in his C++ code to be called\nwhen his shared library is loaded into the backend. (If his C++ code\nhasn't got *any* global variables with nontrivial constructors, then\nhe could maybe survive without this. But it'd be a necessary part of\na general-purpose solution.)\n\nThis is doable. I routinely use a system that does dynamic loading\nof C++ code (Ptolemy, http://ptolemy.eecs.berkeley.edu). It's fairly\nmessy and unportable however, because you have to be aware of the\nmachine-and-compiler-dependent conventions for naming and finding\nthe global constructors.\n\nDavid would probably also want to link the C++ library into the backend\n(as a shared library, otherwise the linker will optimize it away) so\nthat his shared library doesn't need to include C++ library routines.\nThere might be a few other little changes to make in the link that\nbuilds the backend.\n\nIn short, this could be supported if we wanted to invest a sufficient\namount of effort in it. I'm not sure it's worth the trouble.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 20 Jul 1998 10:48:11 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] using C++ to define new functions "
}
] |
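The extern "C" approach Michal and Tom discuss above can be sketched as follows. The C++ implementation is free to use classes, templates, and the standard library internally; only the extern "C" entry point gets an unmangled symbol name that a dynamic loader can resolve. Function names here are hypothetical, and (as Tom notes) this sketch sidesteps the harder problems of global constructors and linking the C++ runtime into the backend:

```cpp
#include <vector>

// Internal C++ code, analogous to David's geometric library.
namespace geo
{
    // Shoelace formula over a closed polygon given as x/y coordinate lists.
    double area(const std::vector<double> &xs, const std::vector<double> &ys)
    {
        double a = 0.0;
        const std::size_t n = xs.size();
        for (std::size_t i = 0; i < n; i++)
        {
            const std::size_t j = (i + 1) % n;
            a += xs[i] * ys[j] - xs[j] * ys[i];
        }
        return a < 0 ? -a / 2.0 : a / 2.0;
    }
}

// The only symbol exported with C linkage; this is the name a
// "create function ... language 'c'" style loader would look up.
extern "C" double
poly_area4(double x0, double y0, double x1, double y1,
           double x2, double y2, double x3, double y3)
{
    std::vector<double> xs = {x0, x1, x2, x3};
    std::vector<double> ys = {y0, y1, y2, y3};
    return geo::area(xs, ys);
}
```

The wrapper keeps the C-visible signature to plain C types; anything involving C++ objects stays behind it.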
[
{
"msg_contents": "I have renamed Rel to RelOptInfo, and changed the target entry tag from\nTLE to TARGETENTRY. Will require new initdb for developers. Sorry.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sat, 18 Jul 1998 10:44:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Rel structure renamed"
}
] |
[
{
"msg_contents": "\nI have made the following changes:\n\n\nAdd auto-size display onscreen to \\d? commands. Use UNION to show all\n\\d? results in one query. Add \\d? field regex feature. Rename MB to\nMULTIBYTE.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sat, 18 Jul 1998 15:39:44 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Change to psql and MB"
}
] |
[
{
"msg_contents": "I have applied this fine patch from Stephan.\n\nIt fixes many problems with Having, and some other problems that exist.\n\nVadim, can you check on the psort_end() issue, and see where that should\ngo. I am lost in that area.\n\n\n[Charset ISO-8859-1 unsupported, filtering to ASCII...]\n> Hi out there!\n> \n> I think I have good news for you: Having is working now! The patch is\n> included in this email (tarred, gnu-zipped and finally uuencoded).\n> \n> But before you go on unpacking the patch let me tell you about the\n> following items:\n> \n> 1) Queries using the having clause on base tables should work well \n> now. Here some tested features, (examples included in the patch):\n> \n> 1.1) Subselects in the having clause\n> 1.2) Double nested subselects\n> 1.3) Subselects used in the where clause and in the having clause\n> simultaneously\n> 1.4) Union Selects using having\n> 1.5) Indexes on the base relations are used correctly\n> 1.6) Unallowed Queries are prevented (e.g. qualifications in the\n> having clause that belong to the where clause)\n> 1.7) Insert into as select \n> \n> 2) Queries using the having clause on view relations also work\n> but there are some restrictions:\n> \n> 2.1) Create View as Select ... Having ...; using base tables in the select\n> does work *BUT*: only simple queries are allowed to this new \n> created view. This is *not* because of the having\n> logic but on the technique used to implement\n> views in postgreSQL, the query rewrite system.\n> \t\n> 2.1.1) The Query rewrite system: \n> As you know, postgreSQL uses a query rewrite system to\n> implement views. It does so by storing the query used to define\n> the view somewhere in the system catalogs. If a user makes a\n> query against the view the system \"rewrites\" the user query by\n> merging it with the stored query (used to define the view). 
The\n> new \"rewritten\" query is optimized, planned, etc and executed \n> against the base tables from the view definition query. \n> \n> 2.1.2) Why are only simple queries allowed against a view from 2.1) ?\n> The problem with the technique described in 2.1.1) is, that it\n> is unfortunately not possible to merge any two SQL queries in\n> in such a way that the result will behave as expected:\n> consider the following view definition:\n> \n> create view testview as\n> select pid, sid\n> from part\n> where pid=5\n> group by pid;\n> \n> and the following query:\n> \n> select max(pid), sid\n> from testview\n> where sid = 100\n> group by sid;\n> \n> The query rewrite system will produce something like:\n> \n> select max(pid), sid \n> \t from part\n> where pid=5 AND sid = 100 /* no problem here */\n> group by ??? /* which attribute(s) to group by?? */ \n> \n> You see, if the view definition and the query both contain\n> a group clause, we will run into troubles.\n> \n> The solution to this would be the implementation of subselects\n> in the from clause, then the rewrite system would produce:\n> \n> select max(pid), sid\n> from (select pid, sid\n> from part\n> where pid=5\n> group by pid)\n> where sid = 100\n> group by sid;\n> \n> 2.2) Select ... from testview1, testview2, ... having...;\n> does also work, as long as the views used are simple\n> row/column subsets of the baserelations used. 
(No group clauses\n> in the view definitons)\n> \n> \n> 3) Bug in ExecMergeJoin ??\n> This is something that has *NOTHING* to do with the Having logic!\n> Proof: Try the following query (without having my patch applied):\n> \n> select s.sid, s.sname, s.city\n> from supplier s\n> where s.sid+10 in (select se1.pid\n> from supplier s1, sells se1, part p1\n> where s1.sid=se1.sid and \n> s1.sname=s.sname and \n> se1.pid=p1.pid);\n> \n> (The file 'queries/create_insert.sql' included in the patch contains the\n> data for this, the query is included in 'queries/having.sql' !)\n> \n> As you can see, there is neither a having qual nor an aggregate\n> function used in the above query an you will see, it will fail!\n> \n> I found out that the reason for this is in the function \n> 'ExecMergeJoin()' in \n> switch (mergestate->mj_JoinState)\n> \t\t{ \n> ....\n> \t\t case EXEC_MJ_NEXTOUTER:\n> ....\n> CleanUpSort(node->join.lefttree->lefttree);\n> CleanUpSort(node->join.righttree->lefttree);\n> \t ....\n> }\t\t\n> \n> In 'CleanUpSort()' the function 'psort_end()' gets called and\n> closes down the sort, which is correct as long as no subselects \n> are in use!\n> \n> I found out, that the bug does not appear when I comment the call\n> to 'psort_end()' out in 'CleanUpSort()'.\n> \n> I heavily tested the system after that and things worked well but\n> maybe this is just a lucky chance.\n> \n> So please, if anybody who has good knowledge of that part of the\n> code could have a look at that it would be great!\n> \n> I am sure the sort has to be ended at some time calling 'psort_end()'\n> but I did not have enough time to look for the right place. 
I was\n> just happy about the fact it produced some correct results and\n> stopped working on that.\n> \n> \n> 4) Test Examples included:\n> in the patch there is a directory 'queries' containing the\n> following files:\n> create_insert.sql to create the test relations and views\n> destroy.sql to drop the test relations and views\n> having.sql the test queries on base relations\n> view_having.sql the test queries on / defining views\n> \n> 5) The patch is against the original v6.3.2 and can be applied\n> by:\n> cd ..../postgresql-6.3.2/\n> patch [-p2] < having_6.3.2.diff\n> \n> \n> \n> \n> Regards Stefan\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sun, 19 Jul 1998 01:48:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Having Patch (against v6.3.2)"
},
{
"msg_contents": "> > 3) Bug in ExecMergeJoin ??\n> > This is something that has *NOTHING* to do with the Having logic!\n> > Proof: Try the following query (without having my patch applied):\n> > \n> > select s.sid, s.sname, s.city\n> > from supplier s\n> > where s.sid+10 in (select se1.pid\n> > from supplier s1, sells se1, part p1\n> > where s1.sid=se1.sid and \n> > s1.sname=s.sname and \n> > se1.pid=p1.pid);\n> > \n> > (The file 'queries/create_insert.sql' included in the patch contains the\n> > data for this, the query is included in 'queries/having.sql' !)\n> > \n> > As you can see, there is neither a having qual nor an aggregate\n> > function used in the above query an you will see, it will fail!\n> > \n> > I found out that the reason for this is in the function \n> > 'ExecMergeJoin()' in \n> > switch (mergestate->mj_JoinState)\n> > \t\t{ \n> > ....\n> > \t\t case EXEC_MJ_NEXTOUTER:\n> > ....\n> > CleanUpSort(node->join.lefttree->lefttree);\n> > CleanUpSort(node->join.righttree->lefttree);\n> > \t ....\n> > }\t\t\n> > \n> > In 'CleanUpSort()' the function 'psort_end()' gets called and\n> > closes down the sort, which is correct as long as no subselects \n> > are in use!\n> > \n> > I found out, that the bug does not appear when I comment the call\n> > to 'psort_end()' out in 'CleanUpSort()'.\n> > \n> > I heavily tested the system after that and things worked well but\n> > maybe this is just a lucky chance.\n> > \n> > So please, if anybody who has good knowledge of that part of the\n> > code could have a look at that it would be great!\n> > \n> > I am sure the sort has to be ended at some time calling 'psort_end()'\n> > but I did not have enough time to look for the right place. I was\n> > just happy about the fact it produced some correct results and\n> > stopped working on that.\n\nI have looked into this, and it appears you are correct. The\npsort_end() gets called with the T_Sort node is closed. 
Why they are\ntrying to close it in the Merge makes no sense to me. The psort calling\ncode in the executor was cleaned up around 6.2 because there were some\nstrange blocks of code in this section. Perhaps this is another area\nwhere the code was called when it should not have been.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sun, 19 Jul 1998 05:59:30 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: Having Patch (against v6.3.2)"
},
{
"msg_contents": "> > 3) Bug in ExecMergeJoin ??\n> > This is something that has *NOTHING* to do with the Having logic!\n> > Proof: Try the following query (without having my patch applied):\n> > \n> > ....\n> > CleanUpSort(node->join.lefttree->lefttree);\n> > CleanUpSort(node->join.righttree->lefttree);\n> > \t ....\n> > }\t\t\n> > \n> > In 'CleanUpSort()' the function 'psort_end()' gets called and\n> > closes down the sort, which is correct as long as no subselects \n> > are in use!\n\nI looked at the Mariposa code, which has some fixes, and they have\nremoved the call to CleanUpSort in mergejoin, so that verifies the fix\nis correct. I am removing the entire CleanUpSort function and calls from\nmergejoin.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sun, 19 Jul 1998 06:06:26 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: Having Patch (against v6.3.2)"
}
] |
[
{
"msg_contents": "I want to remove the recipe, tioga, and Tee node code from the system. \nThey are all interdependant.\n\nI don't believe any of the code is useful. It was designed for a\nspecial tioga application that uses our database for a backend. \nProbably not used by anyone anymore. It is not even enabled to compile.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sun, 19 Jul 1998 06:19:59 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Removal of recipe/tioga/Tee node"
},
{
"msg_contents": "On Sun, 19 Jul 1998, Bruce Momjian wrote:\n> I want to remove the recipe, tioga, and Tee node code from the system. \n> They are all interdependant.\n> \n> I don't believe any of the code is useful. It was designed for a\n> special tioga application that uses our database for a backend. \n> Probably not used by anyone anymore. It is not even enabled to compile.\n\nIs this related to the DataSplash package?\n\n/* \n Matthew N. Dodd\t\t| A memory retaining a love you had for life\t\n [email protected]\t\t| As cruel as it seems nothing ever seems to\n http://www.jurai.net/~winter | go right - FLA M 3.1:53\t\n*/\n\n",
"msg_date": "Sun, 19 Jul 1998 10:48:07 -0400 (EDT)",
"msg_from": "\"Matthew N. Dodd\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Removal of recipe/tioga/Tee node"
},
{
"msg_contents": "> On Sun, 19 Jul 1998, Bruce Momjian wrote:\n> > I want to remove the recipe, tioga, and Tee node code from the system. \n> > They are all interdependant.\n> > \n> > I don't believe any of the code is useful. It was designed for a\n> > special tioga application that uses our database for a backend. \n> > Probably not used by anyone anymore. It is not even enabled to compile.\n> \n> Is this related to the DataSplash package?\n\nI think so.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sun, 19 Jul 1998 12:34:34 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Removal of recipe/tioga/Tee node"
},
{
"msg_contents": "Hi Bruce, last Sunday you helped me with libpq.so.1. Now I'm in trouble\nto start psql. I think that all is clean, but I get allways the same\nerror message when starting psql. The whole week I can only see this:\n\nmarliesle$ ps ax | grep post\n 3140 p1 S 0:00 postmaster -d 3 -i \n 6623 p1 S 0:00 grep post \nmarliesle$ kill 3140\nmarliesle$ postmaster -i &\n[1] 6624\nmarliesle$ psql verlag\nConnection to database 'verlag' failed.\npqReadData() -- backend closed the channel unexpectedly.\n\tThis probably means the backend terminated abnormally before or\nwhile processing the request.\nmarliesle$ \n\nA etags search points me to src/interfaces/libpq/fe-misc.c. I am using the\nlatest CVS. I hope someone will introduce cvsweb so I can see any changes\nto the source tree.\n\n-Egon\n\n\n\n",
"msg_date": "Sun, 19 Jul 1998 19:57:40 +0200 (MET DST)",
"msg_from": "Egon Schmid <[email protected]>",
"msg_from_op": false,
"msg_subject": "pqReadData()"
},
{
"msg_contents": "On Sun, 19 Jul 1998, Bruce Momjian wrote:\n> > On Sun, 19 Jul 1998, Bruce Momjian wrote:\n> > > I want to remove the recipe, tioga, and Tee node code from the system. \n> > > They are all interdependant.\n> > > \n> > > I don't believe any of the code is useful. It was designed for a\n> > > special tioga application that uses our database for a backend. \n> > > Probably not used by anyone anymore. It is not even enabled to compile.\n> > \n> > Is this related to the DataSplash package?\n> \n> I think so.\n\nI tried DataSplash out a few months ago and it worked. If anything that\ncode should be ifdef0'ed out with comments to indicate what it is for.\nRemoving wholesale doesn't seem like a very good approach.\n\nDataSplash, despite its problems, looked fairly nifty.\n\n/* \n Matthew N. Dodd\t\t| A memory retaining a love you had for life\t\n [email protected]\t\t| As cruel as it seems nothing ever seems to\n http://www.jurai.net/~winter | go right - FLA M 3.1:53\t\n*/\n\n",
"msg_date": "Sun, 19 Jul 1998 14:15:14 -0400 (EDT)",
"msg_from": "\"Matthew N. Dodd\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Removal of recipe/tioga/Tee node"
},
{
"msg_contents": "> Hi Bruce, last Sunday you helped me with libpq.so.1. Now I'm in trouble\n> to start psql. I think that all is clean, but I get allways the same\n> error message when starting psql. The whole week I can only see this:\n> \n> marliesle$ ps ax | grep post\n> 3140 p1 S 0:00 postmaster -d 3 -i \n> 6623 p1 S 0:00 grep post \n> marliesle$ kill 3140\n> marliesle$ postmaster -i &\n> [1] 6624\n> marliesle$ psql verlag\n> Connection to database 'verlag' failed.\n> pqReadData() -- backend closed the channel unexpectedly.\n> \tThis probably means the backend terminated abnormally before or\n> while processing the request.\n> marliesle$ \n\n\nRun the postmaster as:\n\n\tnohup postmaster -i >log 2>&1 &\n\nand check the log file after the failure. You may also need to rerun\ninitdb because the source tree is changing so much.\n\n> \n> A etags search points me to src/interfaces/libpq/fe-misc.c. I am using the\n> latest CVS. I hope someone will introduce cvsweb so I can see any changes\n> to the source tree.\n> \n> -Egon\n> \n> \n> \n> \n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sun, 19 Jul 1998 14:35:15 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] pqReadData()"
},
{
"msg_contents": "> On Sun, 19 Jul 1998, Bruce Momjian wrote:\n> > > On Sun, 19 Jul 1998, Bruce Momjian wrote:\n> > > > I want to remove the recipe, tioga, and Tee node code from the system. \n> > > > They are all interdependant.\n> > > > \n> > > > I don't believe any of the code is useful. It was designed for a\n> > > > special tioga application that uses our database for a backend. \n> > > > Probably not used by anyone anymore. It is not even enabled to compile.\n> > > \n> > > Is this related to the DataSplash package?\n> > \n> > I think so.\n> \n> I tried DataSplash out a few months ago and it worked. If anything that\n> code should be ifdef0'ed out with comments to indicate what it is for.\n> Removing wholesale doesn't seem like a very good approach.\n> \n> DataSplash, despite its problems, looked fairly nifty.\n\nIt was removed from Mariposa, and we never compile it. If DataSplash\nworked, they must not be using the old code we have anymore either.\n\nIt has something to do with Tioga recipes and variable length arrays. \nWe will keep it around for later use.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sun, 19 Jul 1998 14:53:14 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Removal of recipe/tioga/Tee node"
}
] |
[
{
"msg_contents": "Having found another bug that was fixed in Mariposa (the short-lived\nsuccessor to Postgres95 at Berkeley at\nhttp://mariposa.CS.Berkeley.EDU:8000/mariposa/) that was broken in our\ncode, I am going to go through all their sources and see if I see any\nother fixes we should have.\n \n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sun, 19 Jul 1998 12:52:18 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Mariposa"
}
] |
[
{
"msg_contents": "> Hi,\n> \n> I finally got the time to put together some stuff for fti for inclusion \n> in pgsql. I have included a README which should be enough to start using \n> it, plus a BENCH file that describes some timings I have done.\n> \n> Please have a look at it, and if you think everything is OK, I would like \n> it seen included in the contrib-section of pgsql.\n> \n> I don't think I will do any more work in this, but maybe it inspires \n> somebody else to improve on it.\n> \n> Maarten Boekhold\n\nInstalled in contrib as fulltextindex.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sun, 19 Jul 1998 14:32:46 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: full text indexing for PostgreSQL"
}
] |
[
{
"msg_contents": "i didn't realize that anybody else was working on an IP address\ndata type or i'd've posted this six months ago when i first wrote\nit. it lacks only the stuff needed to make it usable as a UNIQUE\nKEY. it depends on BIND-8's libraries.\n\n#!/bin/sh\n# This is a shell archive (produced by GNU sharutils 4.2).\n# To extract the files from this archive, save it to some FILE, remove\n# everything before the `!/bin/sh' line above, then type `sh FILE'.\n#\n# Made on 1998-07-19 12:25 PDT by <[email protected]>.\n# Source directory was `/tmp_mnt/mb/mb0/user/vixie/src/postgres-cidrtype'.\n#\n# Existing files will *not* be overwritten unless `-c' is specified.\n#\n# This shar contains:\n# length mode name\n# ------ ---------- ------------------------------------------\n# 671 -r--r--r-- Makefile\n# 4572 -r--r--r-- cidr.c\n# 2877 -r--r--r-- cidr.source\n# 3068 -r--r--r-- cidr.sql\n#\nsave_IFS=\"${IFS}\"\nIFS=\"${IFS}:\"\ngettext_dir=FAILED\nlocale_dir=FAILED\nfirst_param=\"$1\"\nfor dir in $PATH\ndo\n if test \"$gettext_dir\" = FAILED && test -f $dir/gettext \\\n && ($dir/gettext --version >/dev/null 2>&1)\n then\n set `$dir/gettext --version 2>&1`\n if test \"$3\" = GNU\n then\n gettext_dir=$dir\n fi\n fi\n if test \"$locale_dir\" = FAILED && test -f $dir/shar \\\n && ($dir/shar --print-text-domain-dir >/dev/null 2>&1)\n then\n locale_dir=`$dir/shar --print-text-domain-dir`\n fi\ndone\nIFS=\"$save_IFS\"\nif test \"$locale_dir\" = FAILED || test \"$gettext_dir\" = FAILED\nthen\n echo=echo\nelse\n TEXTDOMAINDIR=$locale_dir\n export TEXTDOMAINDIR\n TEXTDOMAIN=sharutils\n export TEXTDOMAIN\n echo=\"$gettext_dir/gettext -s\"\nfi\ntouch -am 1231235999 $$.touch >/dev/null 2>&1\nif test ! -f 1231235999 && test -f $$.touch; then\n shar_touch=touch\nelse\n shar_touch=:\n echo\n $echo 'WARNING: not restoring timestamps. 
Consider getting and'\n $echo \"installing GNU \\`touch', distributed in GNU File Utilities...\"\n echo\nfi\nrm -f 1231235999 $$.touch\n#\nif mkdir _sh17086; then\n $echo 'x -' 'creating lock directory'\nelse\n $echo 'failed to create lock directory'\n exit 1\nfi\n# ============= Makefile ==============\nif test -f 'Makefile' && test \"$first_param\" != -c; then\n $echo 'x -' SKIPPING 'Makefile' '(file already exists)'\nelse\n $echo 'x -' extracting 'Makefile' '(text)'\n sed 's/^X//' << 'SHAR_EOF' > 'Makefile' &&\nifndef PGDIR\nPGDIR= /db0/local/postgresql-6.2\nendif\nX\nSRCDIR= $(PGDIR)/src\nX\ninclude $(SRCDIR)/Makefile.global\nX\nCFLAGS+= -I$(PGDIR)/include -I$(PGDIR)/src/include -I$(LIBPQDIR)\nCFLAGS+= -I/usr/local/bind/include\nX\nCLIBS+= -L/usr/local/bind/lib -lbind\nX\nTARGETS= cidr.sql cidr${DLSUFFIX}\nX\nDLSUFFIX=.so\nX\nall:\t$(TARGETS)\nX\ncidr${DLSUFFIX}: cidr.o\nX\tshlicc2 -r -o cidr${DLSUFFIX} cidr.o -L/usr/local/bind/lib -lbind\nX\ninstall:\nX\t$(MAKE) all\nX\tcp -p cidr$(DLSUFFIX) $(LIBDIR)\nX\n%.sql: %.source\nX\trm -f $@; C=`pwd`; O=$$C; \\\nX\tif [ -d ${LIBDIR} ]; then O=${LIBDIR}; fi; \\\nX\tsed -e \"s:_OBJWD_:$$O:g\" \\\nX\t -e \"s:_DLSUFFIX_:$(DLSUFFIX):g\" \\\nX\t < $< > $@\nX\nclean: \nX\trm -f $(TARGETS) cidr.o\nX\nSHAR_EOF\n $shar_touch -am 1108213897 'Makefile' &&\n chmod 0444 'Makefile' ||\n $echo 'restore of' 'Makefile' 'failed'\n if ( md5sum --help 2>&1 | grep 'sage: md5sum \\[' ) >/dev/null 2>&1 \\\n && ( md5sum --version 2>&1 | grep -v 'textutils 1.12' ) >/dev/null; then\n md5sum -c << SHAR_EOF >/dev/null 2>&1 \\\n || $echo 'Makefile:' 'MD5 check failed'\necb325bcab4a92f4fd5657cdc29a9f63 Makefile\nSHAR_EOF\n else\n shar_count=\"`LC_ALL= LC_CTYPE= LANG= wc -c < 'Makefile'`\"\n test 671 -eq \"$shar_count\" ||\n $echo 'Makefile:' 'original size' '671,' 'current size' \"$shar_count!\"\n fi\nfi\n# ============= cidr.c ==============\nif test -f 'cidr.c' && test \"$first_param\" != -c; then\n $echo 'x -' SKIPPING 'cidr.c' '(file already 
exists)'\nelse\n $echo 'x -' extracting 'cidr.c' '(text)'\n sed 's/^X//' << 'SHAR_EOF' > 'cidr.c' &&\n/*\nX * cidr.c - Internal Classless InterDomain Routing entities for PostGreSQL\nX *\nX * Paul Vixie <[email protected]>, Internet Software Consortium, October 1997.\nX *\nX * $Id: cidr.c,v 1.4 1998/07/15 19:36:56 vixie Exp $\nX */\nX\n/* Import. */\nX\n#include <sys/types.h>\n#include <sys/socket.h>\nX\n#include <ctype.h>\n#include <errno.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <time.h>\nX\n#include <netinet/in.h>\n#include <arpa/inet.h>\n#include <isc/misc.h>\nX\n#include \"postgres.h\"\n#include \"utils/palloc.h\"\nX\n/* Define. */\nX\n#define cidr_min(a, b) (((a) < (b)) ? (a) : (b))\nX\ntypedef struct {\nX\tunsigned char\tfamily;\nX\tunsigned char\tbits;\nX\tunsigned char\tbytes[1];\t/* This is really an open array. */\n} cidr;\nX\n#define\tcidr_addrsize(fam) ((fam) == AF_INET ? 4 : -1)\n#define cidr_size(addrsize) (sizeof(cidr) - sizeof(unsigned char) + addrsize)\nX\n/* Export. */\nX\ncidr *\tcidr_in(const char *);\nchar *\tcidr_out(const cidr *);\nX\nbool\tcidr_eq(const cidr *, const cidr *);\nbool\tcidr_ne(const cidr *, const cidr *);\nbool\tcidr_lt(const cidr *, const cidr *);\nbool\tcidr_gt(const cidr *, const cidr *);\nbool\tcidr_le(const cidr *, const cidr *);\nbool\tcidr_ge(const cidr *, const cidr *);\nbool\tcidr_sub(const cidr *, const cidr *);\nbool\tcidr_subeq(const cidr *, const cidr *);\nbool\tcidr_sup(const cidr *, const cidr *);\nbool\tcidr_supeq(const cidr *, const cidr *);\nint4\tcidr_span(const cidr *, const cidr *);\nint4\tcidr_cmp(const cidr *, const cidr *);\nX\n/* Functions. 
*/\nX\ncidr *\ncidr_in(const char *src) {\nX\tint bits, bytes;\nX\tcidr *dst;\nX\nX\tbytes = cidr_addrsize(AF_INET);\nX\tif (bytes == -1) {\nX\t\telog(WARN, \"Programming error in cidr_in()\");\nX\t\treturn (NULL);\nX\t}\nX\tdst = palloc(cidr_size(bytes));\nX\tif (dst == NULL) {\nX\t\telog(WARN, \"Unable to allocate memory in cidr_in()\");\nX\t\treturn (NULL);\nX\t}\nX\tbits = inet_net_pton(AF_INET, src, &dst->bytes, bytes);\nX\tif (bits < 0 || bits > 32) {\nX\t\telog(WARN, \"Bad CIDR expression (%s)\", src);\nX\t\tpfree(dst);\nX\t\treturn (NULL);\nX\t}\nX\tdst->bits = (unsigned char)bits;\nX\treturn (dst);\n}\nX\nchar *\ncidr_out(const cidr *src) {\nX\tchar *dst, tmp[sizeof \"255.255.255.255/32\"];\nX\nX\tif (inet_net_ntop(AF_INET, &src->bytes, src->bits,\nX\t\t\t tmp, sizeof tmp) < 0) {\nX\t\telog(WARN, \"Unable to format CIDR (%s)\", strerror(errno));\nX\t\tpfree(dst);\nX\t\treturn (NULL);\nX\t}\nX\tdst = palloc(strlen(tmp) + 1);\nX\tif (dst == NULL) {\nX\t\telog(WARN, \"Unable to allocate memory in cidr_out()\");\nX\t\treturn (NULL);\nX\t}\nX\tstrcpy(dst, tmp);\nX\treturn (dst);\n}\nX\n/* Equality. */\nX\nbool\ncidr_eq(const cidr *lhs, const cidr *rhs) {\nX\treturn (lhs->bits == rhs->bits &&\nX\t\tbitncmp(lhs->bytes, rhs->bytes, lhs->bits) == 0);\n}\nX\nbool\ncidr_ne(const cidr *lhs, const cidr *rhs) {\nX\treturn (!cidr_eq(lhs, rhs));\n}\nX\n/* Ordering. 
*/\nX\nbool\ncidr_lt(const cidr *lhs, const cidr *rhs) {\nX\tint x = bitncmp(lhs->bytes, rhs->bytes,\nX\t\t\tcidr_min(lhs->bits, rhs->bits));\nX\nX\treturn (x < 0 || (x == 0 && lhs->bits < rhs->bits));\n}\nX\nbool\ncidr_le(const cidr *lhs, const cidr *rhs) {\nX\treturn (cidr_lt(lhs, rhs) || cidr_eq(lhs, rhs));\n}\nX\nbool\ncidr_gt(const cidr *lhs, const cidr *rhs) {\nX\tint x = bitncmp(lhs->bytes, rhs->bytes,\nX\t\t\tcidr_min(lhs->bits, rhs->bits));\nX\nX\treturn (x > 0 || (x == 0 && lhs->bits > rhs->bits));\n}\nX\nbool\ncidr_ge(const cidr *lhs, const cidr *rhs) {\nX\treturn (cidr_gt(lhs, rhs) || cidr_eq(lhs, rhs));\n}\nX\n/* Subnetting. */\nX\nbool\ncidr_sub(const cidr *lhs, const cidr *rhs) {\nX\treturn (lhs->bits > rhs->bits &&\nX\t\tbitncmp(lhs->bytes, rhs->bytes, rhs->bits) == 0);\n}\nX\nbool\ncidr_subeq(const cidr *lhs, const cidr *rhs) {\nX\treturn (lhs->bits >= rhs->bits &&\nX\t\tbitncmp(lhs->bytes, rhs->bytes, rhs->bits) == 0);\n}\nX\n/* Supernetting. */\nX\nbool\ncidr_sup(const cidr *lhs, const cidr *rhs) {\nX\treturn (lhs->bits < rhs->bits &&\nX\t\tbitncmp(lhs->bytes, rhs->bytes, lhs->bits) == 0);\n}\nX\nbool\ncidr_supeq(const cidr *lhs, const cidr *rhs) {\nX\treturn (lhs->bits <= rhs->bits &&\nX\t\tbitncmp(lhs->bytes, rhs->bytes, lhs->bits) == 0);\n}\nX\nint4\ncidr_span(const cidr *lhs, const cidr *rhs) {\nX\tconst u_char *l = lhs->bytes, *r = rhs->bytes;\nX\tint n = cidr_min(lhs->bits, rhs->bits);\nX\tint b = n >> 3;\nX\tint4 result = 0;\nX\tu_int lb, rb;\nX\nX\t/* Find out how many full octets match. */\nX\twhile (b > 0 && *l == *r)\nX\t\tb--, l++, r++, result += 8;\nX\t/* Find out how many bits to check. */\nX\tif (b == 0)\nX\t\tb = n & 07;\nX\telse\nX\t\tb = 8;\nX\t/* Find out how many bits match. 
*/\nX\tlb = *l, rb = *r;\nX\twhile (b > 0 && (lb & 0x80) == (rb & 0x80))\nX\t\tb--, lb <<= 1, rb <<= 1, result++;\nX\treturn (result);\n}\nX\nint4\ncidr_cmp(const cidr *lhs, const cidr *rhs) {\nX\tint x = bitncmp(lhs->bytes, rhs->bytes,\nX\t\t\tcidr_min(lhs->bits, rhs->bits));\nX\nX\tif (x < 0)\nX\t\treturn (-1);\nX\tif (x > 0)\nX\t\treturn (1);\nX\treturn (0);\n}\nSHAR_EOF\n $shar_touch -am 0715123698 'cidr.c' &&\n chmod 0444 'cidr.c' ||\n $echo 'restore of' 'cidr.c' 'failed'\n if ( md5sum --help 2>&1 | grep 'sage: md5sum \\[' ) >/dev/null 2>&1 \\\n && ( md5sum --version 2>&1 | grep -v 'textutils 1.12' ) >/dev/null; then\n md5sum -c << SHAR_EOF >/dev/null 2>&1 \\\n || $echo 'cidr.c:' 'MD5 check failed'\nf8fd720dbffa7ab05d594c9953b75170 cidr.c\nSHAR_EOF\n else\n shar_count=\"`LC_ALL= LC_CTYPE= LANG= wc -c < 'cidr.c'`\"\n test 4572 -eq \"$shar_count\" ||\n $echo 'cidr.c:' 'original size' '4572,' 'current size' \"$shar_count!\"\n fi\nfi\n# ============= cidr.source ==============\nif test -f 'cidr.source' && test \"$first_param\" != -c; then\n $echo 'x -' SKIPPING 'cidr.source' '(file already exists)'\nelse\n $echo 'x -' extracting 'cidr.source' '(text)'\n sed 's/^X//' << 'SHAR_EOF' > 'cidr.source' &&\n---------------------------------------------------------------------------\n--\n-- cidr.sql-\n-- This file defines operators Classless InterDomain Routing entities.\n--\n---------------------------------------------------------------------------\nX\nLOAD '_OBJWD_/cidr_DLSUFFIX_';\nX\nCREATE FUNCTION cidr_in(opaque)\nX\tRETURNS cidr\nX\tAS '_OBJWD_/cidr_DLSUFFIX_'\nX\tLANGUAGE 'c';\nX\nCREATE FUNCTION cidr_out(opaque)\nX\tRETURNS opaque\nX\tAS '_OBJWD_/cidr_DLSUFFIX_'\nX\tLANGUAGE 'c';\nX\nCREATE TYPE cidr (\nX\tinternallength = 6,\nX\tinput = cidr_in,\nX\toutput = cidr_out\n);\nX\nCREATE FUNCTION cidr_cmp(cidr, cidr)\nX\tRETURNS int4\nX\tAS '_OBJWD_/cidr_DLSUFFIX_'\nX\tLANGUAGE 'c';\nX\n-----------------------------\n-- Create 
operators\n-----------------------------\nX\n-- equality (=)\nX\nCREATE FUNCTION cidr_eq(cidr, cidr)\nX\tRETURNS bool\nX\tAS '_OBJWD_/cidr_DLSUFFIX_'\nX\tLANGUAGE 'c';\nX\nCREATE OPERATOR = (\nX\tleftarg = cidr,\nX\trightarg = cidr,\nX\tprocedure = cidr_eq,\nX\tcommutator = =\n);\nX\n-- inequality (<>)\nX\nCREATE FUNCTION cidr_ne(cidr, cidr)\nX\tRETURNS bool\nX\tAS '_OBJWD_/cidr_DLSUFFIX_'\nX\tLANGUAGE 'c';\nX\nCREATE OPERATOR <> (\nX\tleftarg = cidr,\nX\trightarg = cidr,\nX\tprocedure = cidr_ne,\nX\tcommutator = <>\n);\nX\n-- less (<, <=)\nX\nCREATE FUNCTION cidr_lt(cidr, cidr)\nX\tRETURNS bool\nX\tAS '_OBJWD_/cidr_DLSUFFIX_'\nX\tLANGUAGE 'c';\nX\nCREATE OPERATOR < (\nX\tleftarg = cidr,\nX\trightarg = cidr,\nX\tprocedure = cidr_lt\n);\nX\nCREATE FUNCTION cidr_le(cidr, cidr)\nX\tRETURNS bool\nX\tAS '_OBJWD_/cidr_DLSUFFIX_'\nX\tLANGUAGE 'c';\nX\nCREATE OPERATOR <= (\nX\tleftarg = cidr,\nX\trightarg = cidr,\nX\tprocedure = cidr_le\n);\nX\n-- greater (>, >=)\nX\nCREATE FUNCTION cidr_gt(cidr, cidr)\nX\tRETURNS bool\nX\tAS '_OBJWD_/cidr_DLSUFFIX_'\nX\tLANGUAGE 'c';\nX\nCREATE OPERATOR > (\nX\tleftarg = cidr,\nX\trightarg = cidr,\nX\tprocedure = cidr_gt\n);\nX\nCREATE FUNCTION cidr_ge(cidr, cidr)\nX\tRETURNS bool\nX\tAS '_OBJWD_/cidr_DLSUFFIX_'\nX\tLANGUAGE 'c';\nX\nCREATE OPERATOR >= (\nX\tleftarg = cidr,\nX\trightarg = cidr,\nX\tprocedure = cidr_ge\n);\nX\n-- subnet (<<, <<=)\nX\nCREATE FUNCTION cidr_sub(cidr, cidr)\nX\tRETURNS bool\nX\tAS '_OBJWD_/cidr_DLSUFFIX_'\nX\tLANGUAGE 'c';\nX\nCREATE OPERATOR << (\nX\tleftarg = cidr,\nX\trightarg = cidr,\nX\tprocedure = cidr_sub\n);\nX\nCREATE FUNCTION cidr_subeq(cidr, cidr)\nX\tRETURNS bool\nX\tAS '_OBJWD_/cidr_DLSUFFIX_'\nX\tLANGUAGE 'c';\nX\nCREATE OPERATOR <<= (\nX\tleftarg = cidr,\nX\trightarg = cidr,\nX\tprocedure = cidr_subeq\n);\nX\n-- supernet (>>, >>=)\nX\nCREATE FUNCTION cidr_sup(cidr, cidr)\nX\tRETURNS bool\nX\tAS '_OBJWD_/cidr_DLSUFFIX_'\nX\tLANGUAGE 'c';\nX\nCREATE OPERATOR >> (\nX\tleftarg = cidr,\nX\trightarg = 
cidr,\nX\tprocedure = cidr_sup\n);\nX\nCREATE FUNCTION cidr_supeq(cidr, cidr)\nX\tRETURNS bool\nX\tAS '_OBJWD_/cidr_DLSUFFIX_'\nX\tLANGUAGE 'c';\nX\nCREATE OPERATOR >>= (\nX\tleftarg = cidr,\nX\trightarg = cidr,\nX\tprocedure = cidr_supeq\n);\nX\n-- spanning (length of prefix match)\nX\nCREATE FUNCTION cidr_span(cidr, cidr)\nX\tRETURNS int4\nX\tAS '_OBJWD_/cidr_DLSUFFIX_'\nX\tLANGUAGE 'c';\nX\nCREATE FUNCTION cidr_masklen(cidr)\nX\tRETURNS int2\nX\tAS '_OBJWD_/cidr_DLSUFFIX_'\nX\tLANGUAGE 'c';\nSHAR_EOF\n $shar_touch -am 0719122498 'cidr.source' &&\n chmod 0444 'cidr.source' ||\n $echo 'restore of' 'cidr.source' 'failed'\n if ( md5sum --help 2>&1 | grep 'sage: md5sum \\[' ) >/dev/null 2>&1 \\\n && ( md5sum --version 2>&1 | grep -v 'textutils 1.12' ) >/dev/null; then\n md5sum -c << SHAR_EOF >/dev/null 2>&1 \\\n || $echo 'cidr.source:' 'MD5 check failed'\ndca27b8d433d030e5049bb04ad15df03 cidr.source\nSHAR_EOF\n else\n shar_count=\"`LC_ALL= LC_CTYPE= LANG= wc -c < 'cidr.source'`\"\n test 2877 -eq \"$shar_count\" ||\n $echo 'cidr.source:' 'original size' '2877,' 'current size' \"$shar_count!\"\n fi\nfi\n# ============= cidr.sql ==============\nif test -f 'cidr.sql' && test \"$first_param\" != -c; then\n $echo 'x -' SKIPPING 'cidr.sql' '(file already exists)'\nelse\n $echo 'x -' extracting 'cidr.sql' '(text)'\n sed 's/^X//' << 'SHAR_EOF' > 'cidr.sql' &&\n---------------------------------------------------------------------------\n--\n-- cidr.sql-\n-- This file defines operators Classless InterDomain Routing entities.\n--\n---------------------------------------------------------------------------\nX\nLOAD '/var/home/vixie/postgres-cidrtype/cidr.so';\nX\nCREATE FUNCTION cidr_in(opaque)\nX\tRETURNS cidr\nX\tAS '/var/home/vixie/postgres-cidrtype/cidr.so'\nX\tLANGUAGE 'c';\nX\nCREATE FUNCTION cidr_out(opaque)\nX\tRETURNS opaque\nX\tAS '/var/home/vixie/postgres-cidrtype/cidr.so'\nX\tLANGUAGE 'c';\nX\nCREATE TYPE cidr (\nX\tinternallength = 5,\nX\tinput = cidr_in,\nX\toutput 
= cidr_out\n);\nX\nCREATE FUNCTION cidr_cmp(cidr, cidr)\nX\tRETURNS int4\nX\tAS '/var/home/vixie/postgres-cidrtype/cidr.so'\nX\tLANGUAGE 'c';\nX\n-----------------------------\n-- Create operators\n-----------------------------\nX\n-- equality (=)\nX\nCREATE FUNCTION cidr_eq(cidr, cidr)\nX\tRETURNS bool\nX\tAS '/var/home/vixie/postgres-cidrtype/cidr.so'\nX\tLANGUAGE 'c';\nX\nCREATE OPERATOR = (\nX\tleftarg = cidr,\nX\trightarg = cidr,\nX\tprocedure = cidr_eq,\nX\tcommutator = =\n);\nX\n-- inequality (<>)\nX\nCREATE FUNCTION cidr_ne(cidr, cidr)\nX\tRETURNS bool\nX\tAS '/var/home/vixie/postgres-cidrtype/cidr.so'\nX\tLANGUAGE 'c';\nX\nCREATE OPERATOR <> (\nX\tleftarg = cidr,\nX\trightarg = cidr,\nX\tprocedure = cidr_ne,\nX\tcommutator = <>\n);\nX\n-- less (<, <=)\nX\nCREATE FUNCTION cidr_lt(cidr, cidr)\nX\tRETURNS bool\nX\tAS '/var/home/vixie/postgres-cidrtype/cidr.so'\nX\tLANGUAGE 'c';\nX\nCREATE OPERATOR < (\nX\tleftarg = cidr,\nX\trightarg = cidr,\nX\tprocedure = cidr_lt\n);\nX\nCREATE FUNCTION cidr_le(cidr, cidr)\nX\tRETURNS bool\nX\tAS '/var/home/vixie/postgres-cidrtype/cidr.so'\nX\tLANGUAGE 'c';\nX\nCREATE OPERATOR <= (\nX\tleftarg = cidr,\nX\trightarg = cidr,\nX\tprocedure = cidr_le\n);\nX\n-- greater (>, >=)\nX\nCREATE FUNCTION cidr_gt(cidr, cidr)\nX\tRETURNS bool\nX\tAS '/var/home/vixie/postgres-cidrtype/cidr.so'\nX\tLANGUAGE 'c';\nX\nCREATE OPERATOR > (\nX\tleftarg = cidr,\nX\trightarg = cidr,\nX\tprocedure = cidr_gt\n);\nX\nCREATE FUNCTION cidr_ge(cidr, cidr)\nX\tRETURNS bool\nX\tAS '/var/home/vixie/postgres-cidrtype/cidr.so'\nX\tLANGUAGE 'c';\nX\nCREATE OPERATOR >= (\nX\tleftarg = cidr,\nX\trightarg = cidr,\nX\tprocedure = cidr_ge\n);\nX\n-- subnet (<<, <<=)\nX\nCREATE FUNCTION cidr_sub(cidr, cidr)\nX\tRETURNS bool\nX\tAS '/var/home/vixie/postgres-cidrtype/cidr.so'\nX\tLANGUAGE 'c';\nX\nCREATE OPERATOR << (\nX\tleftarg = cidr,\nX\trightarg = cidr,\nX\tprocedure = cidr_sub\n);\nX\nCREATE FUNCTION cidr_subeq(cidr, cidr)\nX\tRETURNS bool\nX\tAS 
'/var/home/vixie/postgres-cidrtype/cidr.so'\nX\tLANGUAGE 'c';\nX\nCREATE OPERATOR <<= (\nX\tleftarg = cidr,\nX\trightarg = cidr,\nX\tprocedure = cidr_subeq\n);\nX\n-- supernet (>>, >>=)\nX\nCREATE FUNCTION cidr_sup(cidr, cidr)\nX\tRETURNS bool\nX\tAS '/var/home/vixie/postgres-cidrtype/cidr.so'\nX\tLANGUAGE 'c';\nX\nCREATE OPERATOR >> (\nX\tleftarg = cidr,\nX\trightarg = cidr,\nX\tprocedure = cidr_sup\n);\nX\nCREATE FUNCTION cidr_supeq(cidr, cidr)\nX\tRETURNS bool\nX\tAS '/var/home/vixie/postgres-cidrtype/cidr.so'\nX\tLANGUAGE 'c';\nX\nCREATE OPERATOR >>= (\nX\tleftarg = cidr,\nX\trightarg = cidr,\nX\tprocedure = cidr_supeq\n);\nX\n-- spanning (length of prefix match)\nX\nCREATE FUNCTION cidr_span(cidr, cidr)\nX\tRETURNS int4\nX\tAS '/var/home/vixie/postgres-cidrtype/cidr.so'\nX\tLANGUAGE 'c';\nSHAR_EOF\n $shar_touch -am 1201183797 'cidr.sql' &&\n chmod 0444 'cidr.sql' ||\n $echo 'restore of' 'cidr.sql' 'failed'\n if ( md5sum --help 2>&1 | grep 'sage: md5sum \\[' ) >/dev/null 2>&1 \\\n && ( md5sum --version 2>&1 | grep -v 'textutils 1.12' ) >/dev/null; then\n md5sum -c << SHAR_EOF >/dev/null 2>&1 \\\n || $echo 'cidr.sql:' 'MD5 check failed'\n097a4f0f2b5915fc5478976233c714f3 cidr.sql\nSHAR_EOF\n else\n shar_count=\"`LC_ALL= LC_CTYPE= LANG= wc -c < 'cidr.sql'`\"\n test 3068 -eq \"$shar_count\" ||\n $echo 'cidr.sql:' 'original size' '3068,' 'current size' \"$shar_count!\"\n fi\nfi\nrm -fr _sh17086\nexit 0\n",
"msg_date": "Sun, 19 Jul 1998 12:27:13 -0700",
"msg_from": "Paul A Vixie <[email protected]>",
"msg_from_op": true,
"msg_subject": "cidr"
},
{
"msg_contents": "Paul A Vixie <[email protected]> writes:\n\n> i didn't realize that anybody else was working on an IP address\n> data type or i'd've posted this six months ago when i first wrote\n> it. it lacks only the stuff needed to make it usable as a UNIQUE\n> KEY. it depends on BIND-8's libraries.\n\nInteresting -- looks nice at first glance, and does some things that\nneither Aleksei nor I had thought of. I guess a merge of the three\nvariations is in order. At least I'll be doing that locally, and will\nmake the result available.\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n",
"msg_date": "20 Jul 1998 22:25:32 +0200",
"msg_from": "Tom Ivar Helbekkmo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr"
},
{
"msg_contents": "> Paul A Vixie <[email protected]> writes:\n> \n> > i didn't realize that anybody else was working on an IP address\n> > data type or i'd've posted this six months ago when i first wrote\n> > it. it lacks only the stuff needed to make it usable as a UNIQUE\n> > KEY. it depends on BIND-8's libraries.\n> \n> Interesting -- looks nice at first glance, and does some things that\n> neither Aleksei nor I had thought of. I guess a merge of the three\n> variations is in order. At least I'll be doing that locally, and will\n> make the result available.\n\nOK, perhaps I will not apply the patch, and wait for a merged version.\n\nComments?\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Mon, 20 Jul 1998 17:44:52 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr"
},
{
"msg_contents": "> > i didn't realize that anybody else was working on an IP address\n> > data type or i'd've posted this six months ago when i first wrote\n> > it. it lacks only the stuff needed to make it usable as a UNIQUE\n> > KEY. it depends on BIND-8's libraries.\n> \n> Interesting -- looks nice at first glance, and does some things that\n> neither Aleksei nor I had thought of. I guess a merge of the three\n> variations is in order. At least I'll be doing that locally, and will\n> make the result available.\n\ni would be happy if given a chance to consult with whomever wants to do\nthe work of merging the various ipaddr proposals, and would even do some\nwork if appropriate. i would like an indexable \"cidr\" data type (you\nought not call it an ipaddr, it can be either a net or a host, and the\nnet is variable sized, so it really is a \"cidr\") to become part of the\nstandard postgres system. but i mostly want to use it in apps, and i\nmostly wanted to learn how to extend postgres -- i have no undying love\nfor the implementation i posted here, nor do i know the process for making\nthis a standard data type. so, i will help if someone else is driving.\n",
"msg_date": "Mon, 20 Jul 1998 14:45:18 -0700",
"msg_from": "Paul A Vixie <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] cidr "
},
{
"msg_contents": "> > > i didn't realize that anybody else was working on an IP address\n> > > data type or i'd've posted this six months ago when i first wrote\n> > > it. it lacks only the stuff needed to make it usable as a UNIQUE\n> > > KEY. it depends on BIND-8's libraries.\n> > \n> > Interesting -- looks nice at first glance, and does some things that\n> > neither Aleksei nor I had thought of. I guess a merge of the three\n> > variations is in order. At least I'll be doing that locally, and will\n> > make the result available.\n> \n> i would be happy if given a chance to consult with whomever wants to do\n> the work of merging the various ipaddr proposals, and would even do some\n> work if appropriate. i would like an indexable \"cidr\" data type (you\n> ought not call it an ipaddr, it can be either a net or a host, and the\n> net is variable sized, so it really is a \"cidr\") to become part of the\n> standard postgres system. but i mostly want to use it in apps, and i\n> mostly wanted to learn how to extend postgres -- i have no undying love\n> for the implementation i posted here, nor do i know the process for making\n> this a standard data type. so, i will help if someone else is driving.\n\nSounds like a plan. Paul is a DNS expert, and we have people involved\nwho know PostgreSQL well.\n\nAs far as the name, we just want a name that makes it clear to novices\nwhat the module does. ip_and_mac is pretty clear. I have no idea what\na cidr is. If you can think of a more descriptive name, let's go for\nit.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Mon, 20 Jul 1998 19:36:15 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr"
},
{
"msg_contents": "> As far as the name, we just want a name that makes it clear to novices what\n> the module does. ip_and_mac is pretty clear. I have no idea what a cidr\n> is. If you can think of a more descriptive name, let's go for it.\n\ncidr = classless internet domain routing. it's the \"204.152.184/21\" notation.\n\ni'm not sure we need a type name that makes sense to novices. what we need\nis an example in the \"type range\" column. if we can say that int2's allowed\nranges are 0 to 65535 and have folks get what we mean without further intro,\nthen we can teach novices about cidr by saying that allowable ranges are 0/0\nthrough 255.255.255.255/32.\n",
"msg_date": "Mon, 20 Jul 1998 16:56:17 -0700",
"msg_from": "Paul A Vixie <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] cidr "
},
{
"msg_contents": "On Mon, 20 Jul 1998, Paul A Vixie wrote:\n\n> > As far as the name, we just want a name that makes it clear to novices what\n> > the module does. ip_and_mac is pretty clear. I have no idea what a cidr\n> > is. If you can think of a more descriptive name, let's go for it.\n> \n> cidr = classless internet domain routing. it's the \"204.152.184/21\" notation.\n> \n> i'm not sure we need a type name that makes sense to novices. what we need\n> is an example in the \"type range\" column. if we can say that int2's allowed\n> ranges are 0 to 65535 and have folks get what we mean without further intro,\n> then we can teach novices about cidr by saying that allowable ranges are 0/0\n> through 255.255.255.255/32.\n\n\tI have to agree with Paul here...its like mis-representing tuples\nas rows and fields as columns. It means the same, but it *isn't* the\nproper terminology. By using 'ip_and_mac' where it should be 'cidr', we\nare just propogating incorrect terminology...\n\n\tWith that in mind, can we work at having a 'cidr' type as part of\nthe overall system, vs contrib? I know that *I* would use it alot more if\nI didn't have to think of loading it seperately...and I can think of at\nleast two of my projects that I'd use it in...\n\n\tConsidering that we are now up to three ppl out there that are\nwilling to work on this, I think we should be able to come up with a\n'consensus' as to what we are going to be considering \"the standard\" for\nthe base implementation?\n\n Marc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 20 Jul 1998 22:57:27 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr "
},
{
"msg_contents": "> On Mon, 20 Jul 1998, Paul A Vixie wrote:\n> \n> > > As far as the name, we just want a name that makes it clear to novices what\n> > > the module does. ip_and_mac is pretty clear. I have no idea what a cidr\n> > > is. If you can think of a more descriptive name, let's go for it.\n> > \n> > cidr = classless internet domain routing. it's the \"204.152.184/21\" notation.\n> > \n> > i'm not sure we need a type name that makes sense to novices. what we need\n> > is an example in the \"type range\" column. if we can say that int2's allowed\n> > ranges are 0 to 65535 and have folks get what we mean without further intro,\n> > then we can teach novices about cidr by saying that allowable ranges are 0/0\n> > through 255.255.255.255/32.\n\nPaul, yes, I have seen this address style on several machines, and I\nunderstand it supersede the class A,B,C addresses by allowing arbitrary\nnetmasks.\n\nWe can call it cidr. That is fine. I was just concerned that if we put\nit in contrib, that people who have never heard of cidr, like me, can\nrecognize the usefulness of the type for their applications.\n\nAlso, I would assume we can handle old-style non-cidr address just as\ncleanly, so both cidr and non-cidr can use the same type and functions.\n\n\n> \tI have to agree with Paul here...its like mis-representing tuples\n> as rows and fields as columns. It means the same, but it *isn't* the\n> proper terminology. By using 'ip_and_mac' where it should be 'cidr', we\n> are just propagating incorrect terminology...\n> \n> \tWith that in mind, can we work at having a 'cidr' type as part of\n> the overall system, vs contrib? 
I know that *I* would use it alot more if\n> I didn't have to think of loading it separately...and I can think of at\n> least two of my projects that I'd use it in...\n> \n> \tConsidering that we are now up to three ppl out there that are\n> willing to work on this, I think we should be able to come up with a\n> 'consensus' as to what we are going to be considering \"the standard\" for\n> the base implementation?\n\nYes, I agree, this is a HOT type, and should be installed in the default\nsystem. Contrib is for testing/narrow audience, and this type certainly\nshould be mainstream. This is the third generation of the type, with a\nwide audience. int8 is also coming into the main tree via Thomas.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Mon, 20 Jul 1998 23:06:20 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr"
},
{
"msg_contents": "> > As far as the name, we just want a name that makes it clear to novices what\n> > the module does. ip_and_mac is pretty clear. I have no idea what a cidr\n> > is. If you can think of a more descriptive name, let's go for it.\n> \n> cidr = classless internet domain routing. it's the \"204.152.184/21\" notation.\n> \n> i'm not sure we need a type name that makes sense to novices. what we need\n> is an example in the \"type range\" column. if we can say that int2's allowed\n> ranges are 0 to 65535 and have folks get what we mean without further intro,\n> then we can teach novices about cidr by saying that allowable ranges are 0/0\n> through 255.255.255.255/32.\n\nIf we make it a standard type, not contrib, we can add a pg_description\nentry for it so \\dT shows the valid range of values. \nFunctions/operators also get descriptions for \\do and \\df. Should be\neasy.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Mon, 20 Jul 1998 23:09:35 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr"
},
{
"msg_contents": "On Mon, 20 Jul 1998, Bruce Momjian wrote:\n\n> We can call it cidr. That is fine. I was just concerned that if we put\n> it in contrib, that people who have never heard of cidr, like me, can\n> recognize the usefulness of the type for their applications.\n\n\tIMHO, those that will use it, will know what it is...AFAIK, CIDR\nis a pretty generic/standard term, one that I've known for at least 6\nyears now, so it isn't really \"new-style\".\n\n> Yes, I agree, this is a HOT type, and should be installed in the default\n> system. Contrib is for testing/narrow audience, and this type certainly\n> should be mainstream. This is the third generation of the type, with a\n> wide audience. int8 is also coming into the main tree via Thomas.\n\n\tContrib for v6.4, mainstream by v6.5?\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 21 Jul 1998 00:17:54 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr"
},
{
"msg_contents": "> On Mon, 20 Jul 1998, Bruce Momjian wrote:\n> \n> > We can call it cidr. That is fine. I was just concerned that if we put\n> > it in contrib, that people who have never heard of cidr, like me, can\n> > recognize the usefulness of the type for their applications.\n> \n> \tIMHO, those that will use it, will know what it is...AFAIK, CIDR\n> is a pretty generic/standard term, one that I've known for at least 6\n> years now, so it isn't really \"new-style\".\n> \n> > Yes, I agree, this is a HOT type, and should be installed in the default\n> > system. Contrib is for testing/narrow audience, and this type certainly\n> > should be mainstream. This is the third generation of the type, with a\n> > wide audience. int8 is also coming into the main tree via Thomas.\n> \n> \tContrib for v6.4, mainstream by v6.5?\n\nip_and_mac was contrib for 6.3. Why not mainstream for 6.4?\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Mon, 20 Jul 1998 23:20:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr"
},
{
"msg_contents": "On Mon, 20 Jul 1998, Bruce Momjian wrote:\n> As far as the name, we just want a name that makes it clear to novices\n> what the module does. ip_and_mac is pretty clear. I have no idea what\n> a cidr is. If you can think of a more descriptive name, let's go for\n> it.\n\nI think most people who would use the IP related types do know what a CIDR\nis.\n\nAn IP address should be just that, a discrete IP, no netmask, nothing.\n\nA CIDR is a type able to represent a range of IP addresses (what one of\nthe previous patches did by storing an address and a netmask.)\n\nMAC addresses speak for themselves.\n\nI'll let others describe all the nifty functions that the first two types\nwill/can have (IP - IP, IP - CIDR, CIDR - CIDR).\n\n/* \n Matthew N. Dodd\t\t| A memory retaining a love you had for life\t\n [email protected]\t\t| As cruel as it seems nothing ever seems to\n http://www.jurai.net/~winter | go right - FLA M 3.1:53\t\n*/\n\n",
"msg_date": "Mon, 20 Jul 1998 23:58:32 -0400 (EDT)",
"msg_from": "\"Matthew N. Dodd\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr"
},
{
"msg_contents": "On Mon, 20 Jul 1998, Bruce Momjian wrote:\n\n> > On Mon, 20 Jul 1998, Bruce Momjian wrote:\n> > \n> > > We can call it cidr. That is fine. I was just concerned that if we put\n> > > it in contrib, that people who have never heard of cidr, like me, can\n> > > recognize the usefulness of the type for their applications.\n> > \n> > \tIMHO, those that will use it, will know what it is...AFAIK, CIDR\n> > is a pretty generic/standard term, one that I've known for at least 6\n> > years now, so it isn't really \"new-style\".\n> > \n> > > Yes, I agree, this is a HOT type, and should be installed in the default\n> > > system. Contrib is for testing/narrow audience, and this type certainly\n> > > should be mainstream. This is the third generation of the type, with a\n> > > wide audience. int8 is also coming into the main tree via Thomas.\n> > \n> > \tContrib for v6.4, mainstream by v6.5?\n> \n> ip_and_mac was contrib for 6.3. Why not mainstream for 6.4?\n\n\tThat works even better...:)\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 21 Jul 1998 01:06:47 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr"
},
{
"msg_contents": "Thus spake Bruce Momjian\n> Paul, yes, I have seen this address style on several machines, and I\n> understand it supersede the class A,B,C addresses by allowing arbitrary\n> netmasks.\n\nExactly.\n\n> We can call it cidr. That is fine. I was just concerned that if we put\n> it in contrib, that people who have never heard of cidr, like me, can\n> recognize the usefulness of the type for their applications.\n\nCIDR is getting to be pretty well known. Most people who need the type\nshould understand it.\n\n> Also, I would assume we can handle old-style non-cidr address just as\n> cleanly, so both cidr and non-cidr can use the same type and functions.\n\nYes. The old class system is just 3 special cases (Well, 4 really) of\nCIDR.\n\n> Yes, I agree, this is a HOT type, and should be installed in the default\n> system. Contrib is for testing/narrow audience, and this type certainly\n> should be mainstream. This is the third generation of the type, with a\n> wide audience. int8 is also coming into the main tree via Thomas.\n\nI missed some of the earlier discussion. Is there going to be a separate\nIP type or is that just x.x.x.x/32? I like the idea of a host type as\nwell. I would love to sort my IPs and have 198.96.119.99 precede\n198.96.119.100.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Tue, 21 Jul 1998 00:20:36 -0400 (EDT)",
"msg_from": "[email protected] (D'Arcy J.M. Cain)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr"
},
{
"msg_contents": "> \tWith that in mind, can we work at having a 'cidr' type as part of\n> the overall system, vs contrib? I know that *I* would use it alot more if\n> I didn't have to think of loading it seperately...and I can think of at\n> least two of my projects that I'd use it in...\n\nme too. i'm already using it in fact. i just don't know how to make it\nindexable. having it be a standard type, with someone who knows postgres\nmaking it indexable, would be really great for the MAPS project and for\nsome WHOIS/LDAP stuff we're doing here.\n\n> \tConsidering that we are now up to three ppl out there that are\n> willing to work on this, I think we should be able to come up with a\n> 'consensus' as to what we are going to be considering \"the standard\" for\n> the base implementation?\n\ni remain ready to help anyone who promises to drive this thing. and while\ni feel that \"cidr\" is the right name, i don't feel it strongly enough to\nrefuse to help unless that name is chosen. i need the functionality, and\nif it appears under some other name i will use it under that name.\n",
"msg_date": "Mon, 20 Jul 1998 21:44:45 -0700",
"msg_from": "Paul A Vixie <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] cidr "
},
{
"msg_contents": "> Also, I would assume we can handle old-style non-cidr address just as\n> cleanly, so both cidr and non-cidr can use the same type and functions.\n\nthe implementation i sent around yesterday does this just fine. or rather\nit makes useful assumptions if no \"/\" is given, and it always prints the \"/\".\n",
"msg_date": "Mon, 20 Jul 1998 21:51:27 -0700",
"msg_from": "Paul A Vixie <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] cidr "
},
{
"msg_contents": "> > Yes, I agree, this is a HOT type, and should be installed in the default\n> > system. Contrib is for testing/narrow audience, and this type certainly\n> > should be mainstream. This is the third generation of the type, with a\n> > wide audience. int8 is also coming into the main tree via Thomas.\n> \n> I missed some of the earlier discussion. Is there going to be a separate\n> IP type or is that just x.x.x.x/32? I like the idea of a host type as\n> well. I would love to sort my IPs and have 198.96.119.99 precede\n> 198.96.119.100.\n\nMy guess is that it is going to output x.x.x.x/32, but we should supply\na function so they can get just the IP or the mask from the type. That\nway, people who don't want the cidr format can pull out the part they\nwant.\n\nIf they don't specify a netmask when they load the value, perhaps we use\nthe standard class A,B,C netmasks. How you specify a HOST address using\nthe non-cidr format, I really don't know. I am sure the experts will\nhash it out before 6.4 beta on September 1.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 21 Jul 1998 00:53:56 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr"
},
{
"msg_contents": "> > \tWith that in mind, can we work at having a 'cidr' type as part of\n> > the overall system, vs contrib? I know that *I* would use it alot more if\n> > I didn't have to think of loading it seperately...and I can think of at\n> > least two of my projects that I'd use it in...\n> \n> me too. i'm already using it in fact. i just don't know how to make it\n> indexable. having it be a standard type, with someone who knows postgres\n> making it indexable, would be really great for the MAPS project and for\n> some WHOIS/LDAP stuff we're doing here.\n> \n> > \tConsidering that we are now up to three ppl out there that are\n> > willing to work on this, I think we should be able to come up with a\n> > 'consensus' as to what we are going to be considering \"the standard\" for\n> > the base implementation?\n> \n> i remain ready to help anyone who promises to drive this thing. and while\n> i feel that \"cidr\" is the right name, i don't feel it strongly enough to\n> refuse to help unless that name is chosen. i need the functionality, and\n> if it appears under some other name i will use it under that name.\n\nWe will keep the 'cidr' name, as far as I am concerned. People seem to\nknow what it means, and we will mention it is for IP network/host\naddresses.\n\nIn fact, if it is installed in the system, it will be hard for anyone\nlooking for an IP type to miss.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 21 Jul 1998 00:57:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr"
},
{
"msg_contents": "> I missed some of the earlier discussion. Is there going to be a separate\n> IP type or is that just x.x.x.x/32? I like the idea of a host type as\n> well. I would love to sort my IPs and have 198.96.119.99 precede\n> 198.96.119.100.\n\nthe ordering functions given in the implementation i posted here yesterday\ndo that, and they also show 192.5.5/24 as being \"before\" 192.5.5.0/32, which\nis important for those of us who import routing tables into database tables.\n\ni don't see a need for a separate type for /32's; if someone enters just the\ndotted quad (198.96.119.100 for example) the \"/32\" will be assumed. i'd be\nwilling to see the \"/32\" stripped off in the output function since it's a bit\nredundant -- i didn't do that but it's out of habit rather than strong belief.\n\nif folks really can't get behind \"CIDR\" then may i suggest \"INET\"? it's not\na \"NET\" or an \"IPADDR\" or \"INADDR\" or \"INNET\" or \"HOST\". it is capable of\nrepresenting either a network or a host, classlessly. that makes it a CIDR\nto those in the routing or registry business. and before someone asks: no,\nit is not IPv4-specific. my implementation encodes the address family and\nis capable of supporting IPv6 if the \"internallength\" wants to be 13 or if\nsomeone knows how to make it variable-length.\n",
"msg_date": "Mon, 20 Jul 1998 21:59:18 -0700",
"msg_from": "Paul A Vixie <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] cidr "
},
{
"msg_contents": "> > \tWith that in mind, can we work at having a 'cidr' type as part of\n> > the overall system, vs contrib? I know that *I* would use it alot more if\n> > I didn't have to think of loading it seperately...and I can think of at\n> > least two of my projects that I'd use it in...\n> \n> me too. i'm already using it in fact. i just don't know how to make it\n> indexable. having it be a standard type, with someone who knows postgres\n> making it indexable, would be really great for the MAPS project and for\n> some WHOIS/LDAP stuff we're doing here.\n> \n> > \tConsidering that we are now up to three ppl out there that are\n> > willing to work on this, I think we should be able to come up with a\n> > 'consensus' as to what we are going to be considering \"the standard\" for\n> > the base implementation?\n> \n> i remain ready to help anyone who promises to drive this thing. and while\n> i feel that \"cidr\" is the right name, i don't feel it strongly enough to\n> refuse to help unless that name is chosen. i need the functionality, and\n> if it appears under some other name i will use it under that name.\n\nThis could clearly be a KILLER APP/TYPE for us. This is a pretty\nsophisticated use of our type system. Indexing should present no\nproblems. We supply the comparison routines and plug them in, and the\noptimizer automatically uses the indexes.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 21 Jul 1998 01:01:37 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr"
},
{
"msg_contents": "> My guess is that it is going to output x.x.x.x/32, but we should supply\n> a function so they can get just the IP or the mask from the type. That\n> way, people who don't want the cidr format can pull out the part they\n> want.\n\nthis i don't understand. why would you want only one part of it? if you\nwant to do address arithmetic then you need specific OR and AND and NOT\nfunctions -- like making a broadcast address if all you know is your address\nand netmask. but why would you want to know the mantissa without the scale?\n\n> If they don't specify a netmask when they load the value, perhaps we use\n> the standard class A,B,C netmasks. How you specify a HOST address using\n> the non-cidr format, I really don't know. I am sure the experts will\n> hash it out before 6.4 beta on September 1.\n\nclassful assumptions are out of fashion, outdated, and dangerous. consider:\n\n\t\"16\" -> \"16/8\" -> \"16.0.0.0/8\"\n\t\"128\" -> \"128/16\" -> \"128.0.0.0/16\"\n\t\"192\" -> \"192/24\" -> \"192.0.0.0/24\"\n\nnot very helpful. the implementation of \"cidr\" that i posted here yesterday\nuses the BIND-8 functions for representational conversion. those functions\nassume that a text representation with no \"/\" given has as many bits as the\nnumber of octets they fully cover:\n\n\t\"16\" -> \"16/8\"\n\t\"128\" -> \"128/8\"\n\t\"192\" -> \"192/8\"\n\t\"127.1\" -> \"127.1/16\"\n\nthis is how a Cisco router would interpret such routes if \"ip classless\" is\nenabled and static routes were being entered. \"ip classless\" is a prereq-\nuisite for running OSPF, RIPv2, or BGP4. in other words it's pervasive.\n\nBIND follows RFC 1884 in this regard, and deviates significantly from both\nclassful assumptions and the old BSD standard, which would treat \"127.1\" as\n\"127.0.0.1\". 
this burned on some old /etc/rc files but it was the right\nthing to do and now that the world has gotten over the scars, let's not run\nbackwards.\n\nthe IETF's CIDR project was long running, painful, and successful.\n",
"msg_date": "Mon, 20 Jul 1998 22:07:23 -0700",
"msg_from": "Paul A Vixie <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] cidr "
},
{
"msg_contents": "> This could clearly be a KILLER APP/TYPE for us. This is a pretty\n> sophisticated use of our type system. Indexing should present no\n> problems. We supply the comparison routines and plug them in, and the\n> optimizer automatically uses the indexes.\n\ni'd like that to be true. but the section of the manual which describes\nthis isn't as clear as the examples (the COMPLEX type in particular) in\nthe contrib/ directory at the time i started the work. figuring out what\nOID my comparison operators happened to get and plugging these values into\na PG_OPERATOR insert was just more than i could figure out how to automate.\n\nother than the OID thing i really love the postgres type system, btw, and i\ncan't see why anybody would ever use MySQL (or Oracle) unless it has the\nsame feature.\n",
"msg_date": "Mon, 20 Jul 1998 22:10:14 -0700",
"msg_from": "Paul A Vixie <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] cidr "
},
{
"msg_contents": "> > I missed some of the earlier discussion. Is there going to be a separate\n> > IP type or is that just x.x.x.x/32? I like the idea of a host type as\n> > well. I would love to sort my IPs and have 198.96.119.99 precede\n> > 198.96.119.100.\n> \n> the ordering functions given in the implementation i posted here yesterday\n> do that, and they also show 192.5.5/24 as being \"before\" 192.5.5.0/32, which\n> is important for those of us who import routing tables into database tables.\n> \n> i don't see a need for a separate type for /32's; if someone enters just the\n> dotted quad (198.96.119.100 for example) the \"/32\" will be assumed. i'd be\n> willing to see the \"/32\" stripped off in the output function since it's a bit\n> redundant -- i didn't do that but it's out of habit rather than strong belief.\n\nThe only problem is that if we assume /32, how do we auto-netmask class\nA/B/C addresses? I guess we don't. If they want a netmask, they are\ngoing to have to specify it in cidr format.\n\nI will be honest. I always found the network/host IP address\ndistinction to be very unclearly outlined in old/non-cidr address\ndisplays, and this causes major confusion for me when trying to figure\nout how things are configured.\n\n\n> if folks really can't get behind \"CIDR\" then may i suggest \"INET\"? it's not\n> a \"NET\" or an \"IPADDR\" or \"INADDR\" or \"INNET\" or \"HOST\". it is capable of\n> representing either a network or a host, classlessly. that makes it a CIDR\n> to those in the routing or registry business. and before someone asks: no,\n> it is not IPv4-specific. my implementation encodes the address family and\n> is capable of supporting IPv6 if the \"internallength\" wants to be 13 or if\n> someone knows how to make it variable-length.\n\nI like INET too. 
It is up to you.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 21 Jul 1998 01:13:34 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr"
},
{
"msg_contents": "> > My guess is that it is going to output x.x.x.x/32, but we should supply\n> > a function so they can get just the IP or the mask from the type. That\n> > way, people who don't want the cidr format can pull out the part they\n> > want.\n> \n> this i don't understand. why would you want only one part of it? if you\n> want to do address arithmetic then you need specific OR and AND and NOT\n> functions -- like making a broadcast address if all you know is your address\n> and netmask. but why would you want to know the mantissa without the scale?\n\nI guess I thought someone might want to have ipaddr() and netmask()\nfunctions so they can do:\n\n\tx = 192.7.34.21/24\n\tipaddr(x) -> 192.7.34.21\n\tnetmask(x) -> 255.255.255.0\n\n\tx = 192.7.0.0/16\n\tipaddr(x) -> 192.7.0.0\n\tnetmask(x) -> 255.255.0.0\n\nThese function are defined on the cidr type, and can be called if\nsomeone wants the old output format.\n\n> \n> > If they don't specify a netmask when they load the value, perhaps we use\n> > the standard class A,B,C netmasks. How you specify a HOST address using\n> > the non-cidr format, I really don't know. I am sure the experts will\n> > hash it out before 6.4 beta on September 1.\n> \n> classful assumptions are out of fashion, outdated, and dangerous. consider:\n> \n> \t\"16\" -> \"16/8\" -> \"16.0.0.0/8\"\n> \t\"128\" -> \"128/16\" -> \"128.0.0.0/16\"\n> \t\"192\" -> \"192/24\" -> \"192.0.0.0/24\"\n> \n> not very helpful. the implementation of \"cidr\" that i posted here yesterday\n> uses the BIND-8 functions for representational conversion. those functions\n> assume that a text representation with no \"/\" given has as many bits as the\n> number of octets they fully cover:\n> \n> \t\"16\" -> \"16/8\"\n> \t\"128\" -> \"128/8\"\n> \t\"192\" -> \"192/8\"\n> \t\"127.1\" -> \"127.1/16\"\n\n\n> \n> this is how a Cisco router would interpret such routes if \"ip classless\" is\n> enabled and static routes were being entered. 
\"ip classless\" is a prereq-\n> uisite for running OSPF, RIPv2, or BGP4. in other words it's pervasive.\n> \n> BIND follows RFC 1884 in this regard, and deviates significantly from both\n> classful assumptions and the old BSD standard, which would treat \"127.1\" as\n> \"127.0.0.1\". this burned on some old /etc/rc files but it was the right\n> thing to do and now that the world has gotten over the scars, let's not run\n> backwards.\n> \n> the IETF's CIDR project was long running, painful, and successful.\n\nYes, the 127.1 ambiguity was very strange. netstat -rn is very hard to\nunderstand using the old format.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 21 Jul 1998 01:30:05 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr"
},
{
"msg_contents": "> > This could clearly be a KILLER APP/TYPE for us. This is a pretty\n> > sophisticated use of our type system. Indexing should present no\n> > problems. We supply the comparison routines and plug them in, and the\n> > optimizer automatically uses the indexes.\n> \n> i'd like that to be true. but the section of the manual which describes\n> this isn't as clear as the examples (the COMPLEX type in particular) in\n> the contrib/ directory at the time i started the work. figuring out what\n> OID my comparison operators happened to get and plugging these values into\n> a PG_OPERATOR insert was just more than i could figure out how to automate.\n\nDoing complex stuff like indexing with contrib stuff is tricky, and one\nreason we want to move stuff out of there as it becomes popular. It is\njust too hard for someone not experienced with the code to implement. \nAdd to this the fact that the oid at the time of contrib installation\nwill change every time you install it, so it is even harder/impossible\nto automate.\n\n> \n> other than the OID thing i really love the postgres type system, btw, and i\n> can't see why anybody would ever use MySQL (or Oracle) unless it has the\n> same feature.\n\nYep.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 21 Jul 1998 01:33:41 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr"
},
{
"msg_contents": "Replies to three messages here.\n\n> From: Bruce Momjian <[email protected]>\n> Date: Tue, 21 Jul 1998 01:13:34 -0400 (EDT)\n>\n> The only problem is that if we assume /32, how do we auto-netmask class\n> A/B/C addresses? I guess we don't. If they want a netmask, they are\n> going to have to specify it in cidr format.\n\nRight. But read on -- what you're calling a netmask is really a \nprefix length, and I think there's some confusion as to what it is.\n\n> I will be honest. I always found the network/host IP address\n> distinction to be very unclearly outlined in old/non-cidr address\n> displays, and this causes major confusion for me when trying to figure\n> out how things are configured.\n\nMe too.\n\n> I like INET too. It is up to you.\n\nHow do folks feel about polymorphism between IPv4 and IPv6? Should we (a)\nmake it work (either by making internal_length=10 or going variable length)\nor (b) just make this thing IPv4 only and take care of IPv6 separately/later?\n\nI've started to wonder if we ought to call the type INET and limit it to V4.\n(In the C socket bindings, IPv6 addresses are in_addr6 / sockaddr_in6, and\nthe address family is AF_INET6 -- I don't know whether to plan on reflecting\nthis in the postgres types, i.e., use a separate one for IPv6, or not.)\n\n> From: Bruce Momjian <[email protected]>\n> Date: Tue, 21 Jul 1998 01:30:05 -0400 (EDT)\n>\n> > ... but why would you want to know the mantissa without the scale?\n> \n> I guess I thought someone might want to have ipaddr() and netmask()\n> functions so they can do:\n> \n> \tx = 192.7.34.21/24\n> \tipaddr(x) -> 192.7.34.21\n> \tnetmask(x) -> 255.255.255.0\n\nThis is the downreference from above. It does not work that way. 
/24 is\nnot a shorthand for specifying a netmask -- in CIDR, it's a \"prefix length\".\nThat means \"192.7.34.21/24\" is either (a) a syntax error or (b) equivalent\nto \"192.7.34/24\".\n\nBtw, it appears from my research that the BIND functions *do* impute a \"class\"\nif (a) no \"/width\" is specified and (b) the classful interpretation would be\nlonger than the classless interpretation. No big deal but it qualifies \nsomething I said earlier so I thought I'd mention it.\n\n> \tx = 192.7.0.0/16\n> \tipaddr(x) -> 192.7.0.0\n> \tnetmask(x) -> 255.255.0.0\n> \n> These function are defined on the cidr type, and can be called if\n> someone wants the old output format.\n\nCan we wait and see if someone misses / asks for these before we make them?\n\n> ..., the 127.1 ambiguity was very strange. netstat -rn is very hard to\n> understand using the old format.\n\nI was amazed at the number of people who had hardwired \"127.1\" though :-(.\n\n> From: Bruce Momjian <[email protected]>\n> Date: Tue, 21 Jul 1998 01:33:41 -0400 (EDT)\n>\n> Doing complex stuff like indexing with contrib stuff is tricky, and one\n> reason we want to move stuff out of there as it becomes popular. It is\n> just too hard for someone not experienced with the code to implement. \n> Add to this the fact that the oid at the time of contrib installation\n> will change every time you install it, so it is even harder/impossible\n> to automate.\n\nPerhaps we ought to make new type insertion easier since it's so cool?\n",
"msg_date": "Mon, 20 Jul 1998 23:44:04 -0700",
"msg_from": "Paul A Vixie <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] cidr "
},
{
"msg_contents": "On Mon, 20 Jul 1998, Paul A Vixie wrote:\n\n> > Also, I would assume we can handle old-style non-cidr address just as\n> > cleanly, so both cidr and non-cidr can use the same type and functions.\n> \n> the implementation i sent around yesterday does this just fine. or rather\n> it makes useful assumptions if no \"/\" is given, and it always prints the \"/\".\n\n\tDoes anyone have any objections to using Paul's implementation as\n\"the base implementation\", to be inserted into the main stream code now,\nand built up from there? \n\n\tAssuming no objections, Paul...can you get your implementation\nmerged into the 'main stream code' vs 'contrib' and submit an appropriate\npatch for it? The sooner we get it into the main stream, the sooner more\nppl are playing with it, testing it, and suggesting/submitting changes to\nit...\n\n\tAnd the type is to be a 'CIDR', which is the appropriate\nterminology for what it is...those that need it, will know what it is\n*shrug*\n\n\t\n\t\n\n\n",
"msg_date": "Tue, 21 Jul 1998 07:59:53 -0400 (EDT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr "
},
{
"msg_contents": "On Tue, 21 Jul 1998, Bruce Momjian wrote:\n\n> > if folks really can't get behind \"CIDR\" then may i suggest \"INET\"? it's not\n> > a \"NET\" or an \"IPADDR\" or \"INADDR\" or \"INNET\" or \"HOST\". it is capable of\n> > representing either a network or a host, classlessly. that makes it a CIDR\n> > to those in the routing or registry business. and before someone asks: no,\n> > it is not IPv4-specific. my implementation encodes the address family and\n> > is capable of supporting IPv6 if the \"internallength\" wants to be 13 or if\n> > someone knows how to make it variable-length.\n> \n> I like INET too. It is up to you\n\n\tI'm sticking to this one like glue...the proper terminology is a\nCIDR...using anything else would be tailoring to \"those that don't want to\nknow better\", which I believe is the business Micro$loth is in, no? \n\n\tIf you don't know what a CIDR is, you probably shouldn't be using\nit and should get out of networking...\n\n\n",
"msg_date": "Tue, 21 Jul 1998 08:04:56 -0400 (EDT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr"
},
{
"msg_contents": "On Mon, 20 Jul 1998, Paul A Vixie wrote:\n\n> > I like INET too. It is up to you.\n> \n> How do folks feel about polymorphism between IPv4 and IPv6? Should we (a)\n> make it work (either by making internal_length=10 or going variable length)\n> or (b) just make this thing IPv4 only and take care of IPv6 separately/later?\n\n\tNot sure about b, but doesn't FreeBSD (at least) already support\nIPv6? If so, I imagine that Linux does too? How much \"later\" are we\ntalking about here? \n\n\tI'm sorry, but the IPv4 vs IPv6 issue hasn't been something I've\nfollowed much, so don't know the differences...:(\n\n\n",
"msg_date": "Tue, 21 Jul 1998 08:11:47 -0400 (EDT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr "
},
{
"msg_contents": "On Mon, 20 Jul 1998, Paul A Vixie wrote:\n> i don't see a need for a separate type for /32's; if someone enters just the\n> dotted quad (198.96.119.100 for example) the \"/32\" will be assumed. i'd be\n> willing to see the \"/32\" stripped off in the output function since it's a bit\n> redundant -- i didn't do that but it's out of habit rather than strong belief.\n\nI don't see a problem with having a separate type for /32's. It doesn't\nhurt anything, and it takes up less room that a CIDR. When you've got\nseveral million records this becomes an issue. (Not from a perspective of\nspace, but more data requires more time to muck through during queries.)\n\nPlus, it would enable me to use my existing data without reloading.\n(ignoring the fact that 6.4 will probably require this.)\n\n/* \n Matthew N. Dodd\t\t| A memory retaining a love you had for life\t\n [email protected]\t\t| As cruel as it seems nothing ever seems to\n http://www.jurai.net/~winter | go right - FLA M 3.1:53\t\n*/\n\n",
"msg_date": "Tue, 21 Jul 1998 10:00:03 -0400 (EDT)",
"msg_from": "\"Matthew N. Dodd\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr "
},
{
"msg_contents": "Paul A Vixie wrote:\n> > I like INET too. It is up to you.\n> \n> How do folks feel about polymorphism between IPv4 and IPv6? Should we (a)\n> make it work (either by making internal_length=10 or going variable length)\n> or (b) just make this thing IPv4 only and take care of IPv6 separately/later?\n\nMaking it IPv4 only just means we'll have to do it again later, and having\nIPv6 functionality now would be good for those of us who are currently working\nwith IPv6 networks...\n\nNick Bastin\nSystems Administrator\nRBb Systems\n",
"msg_date": "Tue, 21 Jul 1998 10:35:21 -0400",
"msg_from": "Nick Bastin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr"
},
{
"msg_contents": "> > I like INET too. It is up to you.\n> \n> How do folks feel about polymorphism between IPv4 and IPv6? Should we (a)\n> make it work (either by making internal_length=10 or going variable length)\n> or (b) just make this thing IPv4 only and take care of IPv6 separately/later?\n\nI say stick with IPv4 at this point. We can always change it in future\nupgrades. dump/reload will handle any changes in the internal format.\n\n> \n> I've started to wonder if we ought to call the type INET and limit it to V4.\n> (In the C socket bindings, IPv6 addresses are in_addr6 / sockaddr_in6, and\n> the address family is AF_INET6 -- I don't know whether to plan on reflecting\n> this in the postgres types, i.e., use a separate one for IPv6, or not.)\n\nWe can call it INET now, and change it to INET4/INET6 if we decide we\nwant separate types for the two address types.\n\n> \n> > From: Bruce Momjian <[email protected]>\n> > Date: Tue, 21 Jul 1998 01:30:05 -0400 (EDT)\n> >\n> > > ... but why would you want to know the mantissa without the scale?\n> > \n> > I guess I thought someone might want to have ipaddr() and netmask()\n> > functions so they can do:\n> > \n> > \tx = 192.7.34.21/24\n> > \tipaddr(x) -> 192.7.34.21\n> > \tnetmask(x) -> 255.255.255.0\n> \n> This is the downreference from above. It does not work that way. /24 is\n> not a shorthand for specifying a netmask -- in CIDR, it's a \"prefix length\".\n> That means \"192.7.34.21/24\" is either (a) a syntax error or (b) equivilent\n> to \"192.7.34/24\".\n\nHow do we store the netmask? Is that a separate field?\n\n> \n> Btw, it appears from my research that the BIND functions *do* impute a \"class\"\n> if (a) no \"/width\" is specified and (b) the classful interpretation would be\n> longer than the classless interpretation. 
No big deal but it qualifies \n> something I said earlier so I thought I'd mention it.\n> \n> > \tx = 192.7.0.0/16\n> > \tipaddr(x) -> 192.7.0.0\n> > \tnetmask(x) -> 255.255.0.0\n> > \n> > These function are defined on the cidr type, and can be called if\n> > someone wants the old output format.\n> \n> Can we wait and see if someone misses / asks for these before we make them?\n\nSuppose I want to retrieve only 'host' addresses. How do we do that?\n\n> > Doing complex stuff like indexing with contrib stuff is tricky, and one\n> > reason we want to move stuff out of there as it becomes popular. It is\n> > just too hard for someone not experienced with the code to implement. \n> > Add to this the fact that the oid at the time of contrib installation\n> > will change every time you install it, so it is even harder/impossible\n> > to automate.\n> \n> Perhaps we ought to make new type insertion easier since it's so cool?\n\nYep, it is cool. When the code is installed as a standard part of the\nbackend, you have more facilities to install types. There are examples\nof many other types in the include/catalog/*.h files, so you just pick\none and duplicate the proper parts. Trying to do that with an SQL\nstatement is really messy, particularly because the standard types DON'T\nuse SQL to install themselves. You also must specify unique OIDs for\nthese new entries. Also, the terminology is not something that many\npeople are familiar with, so a lot of it is having the user understand\nwhat they need to do. The manuals do a pretty good job. If you have\nany specific ideas, or things that got you confused that we should\nclarify, please let us know.\n\nFortunately, there are only a few types in the /contrib area, as you\nhave learned. 
As people find the types useful, we want to move them\ninto the main source.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 21 Jul 1998 10:40:58 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr"
},
{
"msg_contents": "> On Mon, 20 Jul 1998, Paul A Vixie wrote:\n> \n> > > Also, I would assume we can handle old-style non-cidr address just as\n> > > cleanly, so both cidr and non-cidr can use the same type and functions.\n> > \n> > the implementation i sent around yesterday does this just fine. or rather\n> > it makes useful assumptions if no \"/\" is given, and it always prints the \"/\".\n> \n> \tDoes anyone have any objections to using Paul's implementation as\n> \"the base implementation\", to be inserted into the main stream code now,\n> and built up from there? \n\n\nIt is already being worked on by one of the ip_and_mac developers. He\nis merging the types. We need him to work on it because he understands\nPostgreSQL better.\n\n> \n> \tAssuming no objections, Paul...can you get your implementation\n> merged into the 'main stream code' vs 'contrib' and submit an appropriate\n> patch for it? The sooner we get it into the main stream, the sooner more\n> ppl are playing with it, testing it, and suggesting/submitting changes to\n> it...\n\nAgain, it is being worked on. I don't think Paul wants to get into\ninstalling it into the main tree. It is quite a job. We may need to\nincrease the max system oid to get us more available oids.\n\n> \tAnd the type is to be a 'CIDR', which is the appropriate\n> terminology for what it is...those that need it, will know what it is\n> *shrug*\n\nI use IP addresses and didn't know. I am also hoping we can allow\nstorage of old and cidr types in the same type, at least superficially.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 21 Jul 1998 10:45:45 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr"
},
{
"msg_contents": "\"Matthew N. Dodd\" <[email protected]> writes:\n> Plus, it would enable me to use my existing data without reloading.\n> (ignoring the fact that 6.4 will probably require this.)\n\n6.4 definitely will require a database reload, so as long as the\nexternal representations are compatible this isn't a good argument\nfor a separate /32 type.\n\nThe space issue might be something to think about. But I'm inclined\nto think that we should build in IPv6 support from the get-go, rather\nthan have to add it later. We ought to try to be ahead of the curve\nnot behind it. So it's gonna be more than 4 bytes/entry anyway.\n\nWould it make sense to use atttypmod to distinguish several different\nsubtypes of CIDR? \"4 bytes\", \"4 bytes + mask\", \"6 bytes\", \"6 bytes\n+ mask\" seem like interesting possibilities.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 21 Jul 1998 10:46:27 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr "
},
{
"msg_contents": "\nOn Tue, 21 Jul 1998, The Hermit Hacker wrote:\n\n> On Mon, 20 Jul 1998, Paul A Vixie wrote:\n> \n> > > I like INET too. It is up to you.\n> > \n> > How do folks feel about polymorphism between IPv4 and IPv6? Should we (a)\n> > make it work (either by making internal_length=10 or going variable length)\n> > or (b) just make this thing IPv4 only and take care of IPv6 separately/later?\n> \n> \tNot sure about b, but doesn't FreeBSD (at least) already support\n> IPv6? If so, I imagine that Linux does too? How much \"later\" are we\n> talking about here? \n> \n> \tI'm sorry, but the IPv4 vs IPv6 issue hasnt' been something I've\n> followed much, so don't know the differences...:(\n\nWhy not two types, cidr and cidr6? There's more than one type of int,\nfloat, etc...\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Searchable Campground Listings http://www.camping-usa.com\n \"There is no outfit less entitled to lecture me about bloat\n than the federal government\" -- Tony Snow\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 21 Jul 1998 10:51:45 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr "
},
{
"msg_contents": "> On Mon, 20 Jul 1998, Paul A Vixie wrote:\n> > i don't see a need for a separate type for /32's; if someone enters just the\n> > dotted quad (198.96.119.100 for example) the \"/32\" will be assumed. i'd be\n> > willing to see the \"/32\" stripped off in the output function since it's a bit\n> > redundant -- i didn't do that but it's out of habit rather than strong belief.\n> \n> I don't see a problem with having a separate type for /32's. It doesn't\n> hurt anything, and it takes up less room that a CIDR. When you've got\n> several million records this becomes an issue. (Not from a perspective of\n> space, but more data requires more time to muck through during queries.)\n\nI would like one type, and we can specify a way of pulling out just\nhosts or class addresses.\n\n> \n> Plus, it would enable me to use my existing data without reloading.\n> (ignoring the fact that 6.4 will probably require this.)\n\nYep.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 21 Jul 1998 11:01:10 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr"
},
{
"msg_contents": "> Paul A Vixie wrote:\n> > > I like INET too. It is up to you.\n> > \n> > How do folks feel about polymorphism between IPv4 and IPv6? Should we (a)\n> > make it work (either by making internal_length=10 or going variable length)\n> > or (b) just make this thing IPv4 only and take care of IPv6 separately/later?\n> \n> Making it IPv4 only just means we'll have to do it again later, and having\n> IPv6 functionality now would be good for those of us who are currently working\n> with IPv6 networks...\n\nOh. OK. We do have variable-length types.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 21 Jul 1998 11:02:11 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr"
},
{
"msg_contents": "> \"Matthew N. Dodd\" <[email protected]> writes:\n> > Plus, it would enable me to use my existing data without reloading.\n> > (ignoring the fact that 6.4 will probably require this.)\n> \n> 6.4 definitely will require a database reload, so as long as the\n> external representations are compatible this isn't a good argument\n> for a separate /32 type.\n> \n> The space issue might be something to think about. But I'm inclined\n> to think that we should build in IPv6 support from the get-go, rather\n> than have to add it later. We ought to try to be ahead of the curve\n> not behind it. So it's gonna be more than 4 bytes/entry anyway.\n> \n> Would it make sense to use atttypmod to distinguish several different\n> subtypes of CIDR? \"4 bytes\", \"4 bytes + mask\", \"6 bytes\", \"6 bytes\n> + mask\" seem like interesting possibilities.\n\nYes, that is the proper way to go, though atttypmod is something on\ncolumn, not on each data row. It is specified when the column is\ncreated.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 21 Jul 1998 11:03:43 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Would it make sense to use atttypmod to distinguish several different\n>> subtypes of CIDR? \"4 bytes\", \"4 bytes + mask\", \"6 bytes\", \"6 bytes\n>> + mask\" seem like interesting possibilities.\n\n> Yes, that is the proper way to go, though atttypmod is something on\n> column, not on each data row. It is specified when the column is\n> created.\n\nRight, that's what I had in mind. If you *know* that every entry in\nyour table only needs IPv4, you can specify that when making the table\nand save a couple of bytes per entry.\n\nThe alternative solution is to make CIDR a variable-length type, but\nI think the overhead of that would be as much or more than the possible\nsavings, no?\n\nI don't know whether having multiple top-level types would be better\nor worse than one type with a subtype code.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 21 Jul 1998 11:18:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> Would it make sense to use atttypmod to distinguish several different\n> >> subtypes of CIDR? \"4 bytes\", \"4 bytes + mask\", \"6 bytes\", \"6 bytes\n> >> + mask\" seem like interesting possibilities.\n> \n> > Yes, that is the proper way to go, though atttypmod is something on\n> > column, not on each data row. It is specified when the column is\n> > created.\n> \n> Right, that's what I had in mind. If you *know* that every entry in\n> your table only needs IPv4, you can specify that when making the table\n> and save a couple of bytes per entry.\n> \n> The alternative solution is to make CIDR a variable-length type, but\n> I think the overhead of that would be as much or more than the possible\n> savings, no?\n> \n> I don't know whether having multiple top-level types would be better\n> or worse than one type with a subtype code.\n\nThe byte size is really not an issue to me. You can do ip6 and still\nput it in eight bytes. If you make it a variable-length type, you have\nthe length on each field, and that is four bytes right there, so you are\nbetter doing eight bytes from the start.\n\n\tip4\t5 bytes(4 + precision)\n\tip6\t7 bytes(6 + precision)\n\nIf you want ip6 now, just take eight bytes and make it a fixed length. \nThe backend is going to round the disk storage of 5 bytes up to eight\nanyway, unless the next field is int2 or char1.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 21 Jul 1998 11:45:18 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr'"
},
{
"msg_contents": "Thus spake Bruce Momjian\n> > \tAnd the type is to be a 'CIDR', which is the appropriate\n> > terminology for what it is...those that need it, will know what it is\n> > *shrug*\n> \n> I use IP addresses and didn't know. I am also hoping we can allow\n> storage of old and cidr types in the same type, at least superficially.\n\nAs I said in another message, the old types are simply special cases of\nCIDR so it is already allowed.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Tue, 21 Jul 1998 12:07:32 -0400 (EDT)",
"msg_from": "[email protected] (D'Arcy J.M. Cain)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr"
},
{
"msg_contents": "Replies to 5 messages contained below.\n\n> Date: Tue, 21 Jul 1998 10:00:03 -0400 (EDT)\n> From: \"Matthew N. Dodd\" <[email protected]>\n>\n> I don't see a problem with having a separate type for /32's. It doesn't\n> hurt anything, and it takes up less room that a CIDR. When you've got\n> several million records this becomes an issue. (Not from a perspective of\n> space, but more data requires more time to muck through during queries.)\n> \n> Plus, it would enable me to use my existing data without reloading.\n> (ignoring the fact that 6.4 will probably require this.)\n\nIt's a tradeoff. If one byte of prefix-length which adds 2MB of storage to\na 2-million record table (which probably takes 3GB to store anyway due to\nthe other fields and the metadata) is too much, then let's make a separate\ntype for hosts as you suggest. But we're headed down an icky sticky path,\nwhich is separate types for IPv4 hosts, IPv4 CIDR blocks which can be hosts,\nIPv6 hosts, and IPv6 CIDR blocks which can be hosts. This seems too sticky\nand too icky for me -- I prefer polymorphism in 4GL's since I want to talk\nabout what I mean and let the computer figure out how to represent/store it.\n\nIn that sense I would argue for a variable width \"int\" type rather than a\nbunch of different \"int2\", \"int4\" etc types. (Too late, I know.) 
Though\nin the case of IPv6 I don't think enough is yet known about address formats\n(RFC 1884 was for example just rewritten, and I'm not sure the IETF is done\nmessing with that stuff given what I know about the DNAME plans) and so I'd\nargue that putting the address family into the internal representation and\nthen not supporting anything but IPv4 at this time -- basically what the \nimplementation I posted here two days ago does -- is the practical short\nterm thing to do.\n\n> Date: Tue, 21 Jul 1998 10:35:21 -0400\n> From: Nick Bastin <[email protected]>\n>\n> Making it IPv4 only just means we'll have to do it again later, and having\n> IPv6 functionality now would be good for those of us who are currently\n> working with IPv6 networks...\n\nThat's either an argument for implementing an IPv6 type immediately, or an\nargument for polymorphism in a single AF-independent type. Can you be more\nspecific? My view of IPv6, as expressed above, is \"let's leave room in the\ntype's internal representation but otherwise not worry about IPv6 right now.\"\n\n> From: Bruce Momjian <[email protected]>\n> Date: Tue, 21 Jul 1998 10:40:58 -0400 (EDT)\n> \n> I say stick with IPv4 at this point. We can always change it in future\n> upgrades. dump/reload will handle any changes in the internal format.\n\nAs expressed above, I agree with this viewpoint.\n\n> We can call it INET now, and change it to INET4/INET6 if we decide we\n> want separate types for the two address types.\n\nI can live with this approach, choking up no hairballs at all.\n\n> > That means \"192.7.34.21/24\" is either (a) a syntax error or (b) equivilent\n> > to \"192.7.34/24\".\n> \n> How do we store the netmask? Is that a separate field?\n\nThere is no netmask. In CIDR notation the \"/nn\" suffix just tells you how\nto interpret the mantissa if it does not fall on an octet boundary. Therefore\n\"204.152.184/21\" is three bits shorter than \"204.152.184\". 
There is no \nprovision in the CIDR universe for expressing a \"netmask\" since that would\nbe a mantissa longer than its \"prefix\". CIDR is all about prefixes, it's\nnot just a shorthand for aggregating an <address,netmask> pair. I can see\nwhy you'd like to be able to use it as an aggregated <address,netmask> pair\nbut (a) that's not what it is and (b) this is not the time or place to invent\nsomething new in the CIDR field -- that sort of work would and should begin\nwith an Internet Draft in the appropriate working group.\n\n> > Can we wait and see if someone misses / asks for these before we make them?\n> \n> Suppose I want to retrieve only 'host' addresses. How do we do that?\n\nThere's no way to do that with the type I posted here the other day. There'd\nbe no problem adding the function you proposed, like LENGTH(cidr), and then\ndoing a SELECT...WHERE using that function -- but it would be an iterative\nsearch, there's no way I can think of for the query optimizer to build up the\nimplicit trie you'd need to go directly to all prefixes of a certain length.\n\n> > Perhaps we ought to make new type insertion easier since it's so cool?\n> \n> Yep, it is cool. When the code is installed as a standard part of the\n> backend, you have more facilities to install types. There are examples\n> of many other types in the include/catalog/*.h files, so you just pick\n> one and duplicate the proper parts. Trying to do that with an SQL\n> statement is really messy, particularly because the standard types DON'T\n> use SQL to install themselves. You also must specify unique OIDs for\n> these new entries. Also, the terminology is not something that many\n> people are familiar with, so a lot of it is having the user understand\n> what they need to do. The manuals do a pretty good job. 
If you have\n> any specific ideas, or things that got you confused that we should\n> clarify, please let us know.\n\nFrom a marketing standpoint, if user defined types are one of PostgreSQL's\nunique features, then they ought to be so easy to add that we have hundreds\nof them in ./contrib at any given time. This means that most of the standard\nones should be installed using whatever technology is used to install new\ncontributed ones: because it will force that installation process to become\neasier. There's no reason to avoid new syntax for this since it's a new\nthing -- if we can do CREATE FUNCTION and CREATE TYPE and CREATE OPERATOR\nthen why not CREATE BINDING to just associate various functions, by name\nrather than by OID, with the magic index operator slots for that type? Or\neven extending the CREATE TYPE syntax to bind the indexing functions to the\ntype at the time of its creation?\n\n> From: Bruce Momjian <[email protected]>\n> Date: Tue, 21 Jul 1998 10:45:45 -0400 (EDT)\n> \n> It is already being worked on by one of the ip_and_mac developers. He\n> is merging the types. We need him to work on it because he understands\n> PostgreSQL better.\n\nSounds great, I'll wait to hear from that person (if my help is needed.)\n\n> Again, it is being worked on. I don't think Paul wants to get into\n> installing it into the main tree. It is quite a job. We may need to\n> increase the max system oid to get us more available oids.\n\nActually I would have dived into this and thought it was fun, but doubtless\nI would not have done as good or as quick a job as the ip_and_mac guys.\n\n> > \tAnd the type is to be a 'CIDR', which is the appropriate\n> > terminology for what it is...those that need it, will know what it is\n> > *shrug*\n> \n> I use IP addresses and didn't know. 
I am also hoping we can allow\n> storage of old and cidr types in the same type, at least superficially.\n\nSounds like conclusive evidence for calling this the INET type rather than\nthe CIDR type. And if someone wants to make an INET32 type to account for\nthe case of millions of host-only (no prefix length needed) fields, so be it.\n\n> From: Bruce Momjian <[email protected]>\n> Date: Tue, 21 Jul 1998 11:01:10 -0400 (EDT)\n> \n> > I don't see a problem with having a separate type for /32's. It doesn't\n> > hurt anything, and it takes up less room that a CIDR. When you've got\n> > several million records this becomes an issue. (Not from a perspective of\n> > space, but more data requires more time to muck through during queries.)\n> \n> I would like one type, and we can specifiy a way of pulling out just\n> hosts or class addresses.\n\nI also lean significantly in the direction of a single type for all of IPv4\nrather than a separate INET32 type. \n\n> > Plus, it would enable me to use my existing data without reloading.\n> > (ignoring the fact that 6.4 will probably require this.)\n\nI think there's no way to justify permanent engineering decisions on the basis\nof a single reload operation or the avoidance of one.\n\n> Yep.\n\n> From: Bruce Momjian <[email protected]>\n> Date: Tue, 21 Jul 1998 11:02:11 -0400 (EDT)\n> \n> > Making it IPv4 only just means we'll have to do it again later, and having\n> > IPv6 functionality now would be good for those of us who are currently\n> > working with IPv6 networks...\n> \n> Oh. OK. We do have variable-length types.\n\nMy question is, do those types have an internal framing format with an outer\nlength and an inner opaque structure, or does each one have a \"length\"\naccessor to which an opaque structure is passed? In the former case, we'll\nbe burning more space on a length indicator even though the address family\nand prefix length are in the opaque part of the structure. 
In the latter\ncase, there's no big deal at all since given the address family and prefix\nlength, an accessor can tell the type system the size of a particular datum.\n",
"msg_date": "Tue, 21 Jul 1998 13:40:34 -0700",
"msg_from": "Paul A Vixie <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] cidr "
},
{
"msg_contents": "On Tue, 21 Jul 1998, Paul A Vixie wrote:\n\n> > > \tAnd the type is to be a 'CIDR', which is the appropriate\n> > > terminology for what it is...those that need it, will know what it is\n> > > *shrug*\n> > \n> > I use IP addresses and didn't know. I am also hoping we can allow\n> > storage of old and cidr types in the same type, at least superficially.\n\nI believe this underscores Marc's point, which is all the more reason to\ncall it what it is, \"cidr\" not some other term only used to schmooze\nsomeone's ignorance to the proper terminology.\n\n> Sounds like conclusive evidence for calling this the INET type rather than\n> the CIDR type. And if someone wants to make an INET32 type to account for\n> the case of millions of host-only (no prefix length needed) fields, so be it.\n\nYou were right the first time Paul, stick with cidr.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Searchable Campground Listings http://www.camping-usa.com\n \"There is no outfit less entitled to lecture me about bloat\n than the federal government\" -- Tony Snow\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 21 Jul 1998 17:02:49 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr "
},
{
"msg_contents": "> My question is, do those types have an internal framing format with an outer\n> length and an inner opaque structure, or does each one have a \"length\"\n> accessor to which an opaque structure is passed? In the former case, we'll\n> be burning more space on a length indicator even though the address family\n> and prefix length are in the opaque part of the structure. In the latter\n> case, there's no big deal at all since given the address family and prefix\n> length, an accessor can tell the type system the size of a particular datum.\n\nThe length is on every field. atttypmod is a fixed value stored in the\nattribute table, and is used to modify the handling of all value in that\ncolumn. We currently only use it for char(3)/varchar(30), etc.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 21 Jul 1998 17:09:31 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr"
},
{
"msg_contents": "> On Tue, 21 Jul 1998, Paul A Vixie wrote:\n> \n> > > > \tAnd the type is to be a 'CIDR', which is the appropriate\n> > > > terminology for what it is...those that need it, will know what it is\n> > > > *shrug*\n> > > \n> > > I use IP addresses and didn't know. I am also hoping we can allow\n> > > storage of old and cidr types in the same type, at least superficially.\n> \n> I believe this underscores Marc's point, which is all the more reason to\n> call it what it is, \"cidr\" not some other term only used to schmooze\n> someone's ignorance to the proper terminology.\n\n\n> \n> > Sounds like conclusive evidence for calling this the INET type rather than\n> > the CIDR type. And if someone wants to make an INET32 type to account for\n> > the case of millions of host-only (no prefix length needed) fields, so be it.\n> \n> You were right the first time Paul, stick with cidr.\n\nI think we have to be able to store both old-style and cidr-style\naddresses for several reasons:\n\n\twe have current users of ip_and_mac\n\tsome people don't use cidr yet\n\twe need to be able to store netmasks too, which aren't cidr\n\nSo a generic INET type is clearer, and will support both address types.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 21 Jul 1998 17:48:25 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr"
},
{
"msg_contents": "Thus spake Bruce Momjian\n> I think we have to be able to store both old-style and cidr-style\n> addresses for several reasons:\n\nI must be missing something. Can you suggest a classfull network\ndesignation that doesn't fit into the CIDR system? For example,\nwhat's the difference between the following networks?\n\n Class \"A\" network 10\n 10.0.0.0 mask 255.0.0.0\n 10.0.0.0/8\n\nDon't they all refer to exactly the same thing? If you subnet that\nnetwork into 256 equal subnets you might have this instead.\n\n Class \"B\" network 10.42\n 10.42.0.0 mask 255.255.0.0\n 10.42.0.0/16\n\nNow that first one is an invalid designation in the old classfull system\nso it doesn't matter if you can specify it. Under CIDR, however, that\nsubnet is perfectly valid (except that that particular range won't route\non the Internet) and the designations work. So why not store the old\nclassfull networks in the cidr type? They fit just fine.\n\n> \twe have current users of ip_and_mac\n\nI don't know enough about this type but other than a different name, how\ncan expanding the range of allowable values limit them?\n\n> \tsome people don't use cidr yet\n\nName one. They may not know what it is called but very little software\nor hardware still supports classes. Do Macs still force the distinction?\nIn any case, class networks fit in CIDR.\n\n> \twe need to be able to store netmasks too, which aren't cidr\n\nNow this is an issue but it is the same issue as hosts. Netmasks\ncan also be designated as /32. However, if all you want to store\nis the netmask, just use int. 
The range is 0 to 32.\n\n> So a generic INET type is clearer, and will support both address types.\n\nI have no particular problem with calling it INET instead of CIDR if\nthat gets the type into the system but let's be clear that either way,\nany host and netmask combination can be stored whether it fits in\nthe old class system or not.\n\nPerhaps there is an underlying difference of assumptions about what\nthe actual type is. Let me take a stab at defining it (without\nnaming it) and see if we're all on the same bus.\n\nI see the underlying data type storing two things, a host address\n(which can hold an IPv4 or IPv6 IP) and a netmask which can be\nstored as a small int, 8 bits is plenty. The input function would\nread IP numbers as follows (I'm making some of this up as I go.)\n\n x.x.x.x/y IP x.x.x.x with masklen y\n x.x.x.x:y.y.y.y IP x.x.x.x with masklen determined by examining\n y.y.y.y raising an exception if it is an invalid\n mask (defined as all ones followed by all zeroes)\n x.x.x.x IP x.x.x.x masklen of 32\n\nThe output functions would print in a standard way, possibly allowing\nalternate representations like we do for money. Also, there would\nbe functions to extract the host, the network or the netmask.\n\nIs this close to what everyone thinks or are we talking about completely\ndifferent things?\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Tue, 21 Jul 1998 22:43:19 -0400 (EDT)",
"msg_from": "[email protected] (D'Arcy J.M. Cain)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr"
},
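The input grammar D'Arcy sketches in the message above (`x.x.x.x/y`, `x.x.x.x:y.y.y.y`, or a bare `x.x.x.x` defaulting to a masklen of 32) is easy to prototype. The sketch below is illustrative only — Python rather than the backend's C, with invented function names — but it includes the "all ones followed by all zeroes" netmask check he proposes:

```python
def dotted_to_int(s):
    # "255.255.255.0" -> 32-bit integer, checking each octet's range
    parts = s.split(".")
    if len(parts) != 4:
        raise ValueError("expected four octets: %r" % s)
    val = 0
    for p in parts:
        o = int(p)
        if not 0 <= o <= 255:
            raise ValueError("octet out of range: %r" % s)
        val = (val << 8) | o
    return val

def mask_to_len(mask):
    # A valid netmask is all ones followed by all zeroes, so 255.255.0.255 fails
    for n in range(33):
        if mask == (0xFFFFFFFF << (32 - n)) & 0xFFFFFFFF:
            return n
    raise ValueError("invalid netmask")

def parse_inet(text):
    # x.x.x.x/y, x.x.x.x:y.y.y.y, or bare x.x.x.x (masklen defaults to 32)
    if "/" in text:
        addr, bits = text.split("/", 1)
        masklen = int(bits)
        if not 0 <= masklen <= 32:
            raise ValueError("masklen out of range: %r" % text)
    elif ":" in text:
        addr, mask = text.split(":", 1)
        masklen = mask_to_len(dotted_to_int(mask))
    else:
        addr, masklen = text, 32
    return dotted_to_int(addr), masklen
```

Note how both the `/16` and `:255.255.0.0` spellings land on the same internal pair, which is the argument for a single type.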
{
"msg_contents": "Just voicing enthusiasm for the \"cidr\" thread:\n\nBruce is right on when he refers to this as \"KILLER APP\" - PostgreSQL\nis a great tool for running administering a network. (I contributed a\ncouple smallish functions about a year ago to do simple address formatting\nand in-subnet testing.) Especially when you mix in CVS for version\ncontrolling config files and rsync+ssh for pushing them.\n\nMy preference is CIDR over INET or anything else for the type name.\n\nLet's get IPv6 in while Paul is focused on us. Vixie's input is essential\nfor keeping us On The Right Track with this thing - I'd give him 100 votes\nand the rest of us 1 each in all the debates. :-)\n\nA data type is much more useful if it has enough supporting functions -\nI too like the idea of a built-in (quasi-standard) way of extracting\nhost and class. The easier we make the type to use, the more people will\nuse it.\n\nI think this is just the tip of the iceberg (others have hinted at this\ntoo). PostgreSQL + CVS + rsync/ssh + apache makes one powerful net admin\nsystem, but it's a tool chest with just the nuts and bolts. As we use\nthe new CIDR datatype, I hope we'll evolve a good, general set of tools\naround the network database. \n",
"msg_date": "Tue, 21 Jul 1998 23:35:40 -0500",
"msg_from": "Hal Snyder <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr"
},
{
"msg_contents": "On Tue, 21 Jul 1998, Tom Lane wrote:\n\n> \"Matthew N. Dodd\" <[email protected]> writes:\n> > Plus, it would enable me to use my existing data without reloading.\n> > (ignoring the fact that 6.4 will probably require this.)\n> \n> 6.4 definitely will require a database reload, so as long as the\n> external representations are compatible this isn't a good argument\n> for a separate /32 type.\n> \n> The space issue might be something to think about. But I'm inclined\n> to think that we should build in IPv6 support from the get-go, rather\n> than have to add it later. We ought to try to be ahead of the curve\n> not behind it. So it's gonna be more than 4 bytes/entry anyway.\n\n\tI have to agree here...being able to say we support a CIDR type is\none thing, but able to say we support IPv6 is, IMHO, a big thing...\n\n\n",
"msg_date": "Wed, 22 Jul 1998 08:13:02 -0400 (EDT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr "
},
{
"msg_contents": "On Tue, 21 Jul 1998, Bruce Momjian wrote:\n\n> I think we have to be able to store both old-style and cidr-style\n> addresses for several reasons:\n> \n> \twe have current users of ip_and_mac\n> \tsome people don't use cidr yet\n> \twe need to be able to store netmasks too, which aren't cidr\n> \n> So a generic INET type is clearer, and will support both address types.\n\t\n\tI do not agree ... an INET type is clearer only for those that\ndon't know better, so we're now promoting ignorance of proper terminology? \nWe have everything else 'explained' in our man pages:\n\n char(n) character(n) fixed-length character string\n varchar(n) character varying(n) variable-length character string\n\n\tSo, having:\n\n cidr\t\tn/a\t\t\tIPv4 addressing\n cidr6\t\tn/a\t\t\tIPv6 addressing\n\n\tIs not unreasonable...\n\n\tMis-naming it INET and INET6, IMHO, is unreasonable, since that is\nnot what they are...\n\n\n",
"msg_date": "Wed, 22 Jul 1998 09:23:17 -0400 (EDT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr"
},
{
"msg_contents": "> Thus spake Bruce Momjian\n> > I think we have to be able to store both old-style and cidr-style\n> > addresses for several reasons:\n> \n> I must be missing something. Can you suggest a classfull network\n> designation that doesn't fit into the CIDR system? For example,\n> what's the difference between the following networks?\n> \n> Class \"A\" network 10\n> 10.0.0.0 mask 255.0.0.0\n> 10.0.0.0/8\n> \n> Don't they all refer to exactly the same thing? If you subnet that\n> network into 256 equal subnets you might have this instead.\n> \n> Class \"B\" network 10.42\n> 10.42.0.0 mask 255.255.0.0\n> 10.42.0.0/16\n> \n> Now that first one is an invalid designation in the old classfull system\n> so it doesn't matter if you can specify it. Under CIDR, however, that\n> subnet is perfectly valid (except that that particular range won't route\n> on the Internet) and the designations work. So why not store the old\n> classfull networks in the cidr type? They fit just fine.\n\n\nOK, let me explain what I think Paul was saying. cidr is used for\nnetworks. You can use it for hosts by specifying /32. It is not the\nsame as a netmask. For example:\n\n\thost\t192.24.45.32\n\nNow, this is a host address. We can say its netmask is 255.255.255.0,\nor was can say it is part of network 192.24.45/24, which would allow you\ncompute the netmask as 255.255.255.0. The problem is that you need the\ntype to support cidr, hosts, and netmasks.\n\nMy idea is to internally store the new type as 8 bytes:\n\n\t____ ____ ____ ____ ____ ___ ___ ____\n\tcidr addr x . x . x . x ip6 ip6\n\tbits len\n\nThat way, if they specify cidr bits, we store it. If they don't we make\nthe bits field equal -1, and print/sort appropriately. The addr len is\nusually 3, but ip6 is also easy to add by making the addr len equal 6.\n\n> > \twe need to be able to store netmasks too, which aren't cidr\n> \n> Now this is an issue but it is the same issue as hosts. Netmasks\n> can also be designated as /32. 
However, if all you want to store\n> is the netmask, just use int. The range is 0 to 32.\n> \n> > So a generic INET type is clearer, and will support both address types.\n> \n> I have no particular problem with calling it INET instead of CIDR if\n> that gets the type into the system but let's be clear that either way,\n> any host and netmask combination can be stored whether it fits in\n> the old class system or not.\n> \n> Perhaps there is an underlying difference of assumptions about what\n> the actual type is. Let me take a stab at defining it (without\n> naming it) and see if we're all on the same bus.\n> \n> I see the underlying data type storing two things, a host address\n> (which can hold an IPv4 or IPv6 IP) and a netmask which can be\n> stored as a small int, 8 bits is plenty. The input function would\n> read IP numbers as follows (I'm making some of this up as I go.)\n> \n> x.x.x.x/y IP x.x.x.x with masklen y\n> x.x.x.x:y.y.y.y IP x.x.x.x with masklen determined by examining\n> y.y.y.y raising an exception if it is an invalid\n> mask (defined as all ones followed by all zeroes)\n> x.x.x.x IP x.x.x.x masklen of 32\n> \n> The output functions would print in a standard way, possibly allowing\n> alternate representations like we do for money. Also, there would\n> be functions to extract the host, the network or the netmask.\n> \n> Is this close to what everyone thinks or are we talking about completely\n> different things?\n\nAgain, not sure we want to merge address and netmask for hosts in the\nsame field.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Wed, 22 Jul 1998 10:46:26 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr"
},
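Bruce's proposed fixed eight-byte cell above (signed cidr bits, address length, four IPv4 octets, two spare bytes sketched for IPv6) can be mocked up with Python's `struct` module to see how the fields pack. This is only a model of his ASCII diagram, not of any representation the backend actually adopted:

```python
import struct

# One 8-byte cell per the diagram: signed cidr bits (-1 = no mask given),
# address length in octets, then up to six address octets (IPv4 uses four).
def pack_addr(bits, octets):
    payload = bytes(octets) + b"\x00" * (6 - len(octets))
    return struct.pack("<bB6s", bits, len(octets), payload)

def unpack_addr(raw):
    bits, addr_len, payload = struct.unpack("<bB6s", raw)
    return bits, list(payload[:addr_len])
```

Storing the address length explicitly is what lets `192.0.0.1` stay distinct from a longer address with the same leading octets, per Bruce's point below.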
{
"msg_contents": "> On Tue, 21 Jul 1998, Bruce Momjian wrote:\n> \n> > I think we have to be able to store both old-style and cidr-style\n> > addresses for several reasons:\n> > \n> > \twe have current users of ip_and_mac\n> > \tsome people don't use cidr yet\n> > \twe need to be able to store netmasks too, which aren't cidr\n> > \n> > So a generic INET type is clearer, and will support both address types.\n> \t\n> \tI do not agree ... an INET type is clearer only for those that\n> don't know better, so we're now promoting ignorance of proper terminology? \n> We have everything else 'explained' in our man pages:\n> \n> char(n) character(n) fixed-length character string\n> varchar(n) character varying(n) variable-length character string\n> \n> \tSo, having:\n> \n> cidr\t\tn/a\t\t\tIPv4 addressing\n> cidr6\t\tn/a\t\t\tIPv6 addressing\n> \n> \tIs not unreasonable...\n> \n> \tMis-naming it INET and INET6, IMHO, is unreasonable, since that is\n> not what they are...\n\nSee my earlier post, and discussion with Paul. cidr is just networks,\nand hosts and netmasks will require non-cidr storage.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Wed, 22 Jul 1998 10:54:22 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr"
},
{
"msg_contents": "On Wed, 22 Jul 1998, Bruce Momjian wrote:\n\n> OK, let me explain what I think Paul was saying. cidr is used for\n> networks. You can use it for hosts by specifying /32. It is not the\n> same as a netmask. For example:\n> \n> \thost\t192.24.45.32\n> \n> Now, this is a host address. We can say its netmask is 255.255.255.0,\n> or was can say it is part of network 192.24.45/24, which would allow you\n> compute the netmask as 255.255.255.0. The problem is that you need the\n> type to support cidr, hosts, and netmasks.\n\n\t192.24.45.32/32 == 192.24.45.32:255.255.255.255 (single host)\n\t192.24.45.32/30 == 192.24.45.32:255.255.255.252 (2 hosts)\n\t192.24.45.32/26 == 192.24.45.32:255.255.255.192 (62 hosts)\n\nCheck out: http://www.min.net/netmasks.htm, it has *all* the translations\nand appropriate netmasks associated with each CIDR...\n\n\n",
"msg_date": "Wed, 22 Jul 1998 12:35:34 -0400 (EDT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr"
},
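The CIDR-to-netmask table Marc points to can be regenerated mechanically, which is really the point of the equivalence: a prefix length and a dotted mask carry the same information. A small sketch (function names are illustrative), matching his `/32`, `/30`, and `/26` rows:

```python
def len_to_mask(n):
    # /32 -> 255.255.255.255, /30 -> 255.255.255.252, /26 -> 255.255.255.192
    m = (0xFFFFFFFF << (32 - n)) & 0xFFFFFFFF
    return ".".join(str((m >> s) & 0xFF) for s in (24, 16, 8, 0))

def host_count(n):
    # Usable hosts on a /n: block size minus the network and broadcast addresses
    size = 1 << (32 - n)
    return size - 2 if size > 2 else size
```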
{
"msg_contents": "Thus spake Bruce Momjian\n> OK, let me explain what I think Paul was saying. cidr is used for\n> networks. You can use it for hosts by specifying /32. It is not the\n> same as a netmask. For example:\n> \n> \thost\t192.24.45.32\n\nRight but a netmask could be specified as 255.255.255.0/32. Better yet,\nif all you want to store is a netmask in a field, use an int. Every\nnetmask can be specified in dotted notation or as a mask length.\n\n> My idea is to internally store the new type as 8 bytes:\n> \n> \t____ ____ ____ ____ ____ ___ ___ ____\n> \tcidr addr x . x . x . x ip6 ip6\n> \tbits len\n\nWhy bother with the addr len? Just expand it out with zeroes before\nstoring it.\n\nMaybe we could make cidr bits equal to -1 if we are storing a host with\nindeterminate netmask rather than setting it to 32. That allows us\nto specify raw IP numbers without faking a netmask.\n\n> Again, not sure we want to merge address and netmask for hosts in the\n> same field.\n\nWell, someone earlier suggested two different types, cidr for IPs with\nnetwork info and inet for IPs by themselves. The only argument against\nthat as I recall was that the cidr type would hold IPs alone as a special\ncase so why bother creating two different types?\n\nTo review, here, I think, are the types of data we want to store and how\nI think we can handle them with the addition of a single cidr type.\n\nIP alone can be entered as a dotted quad with no netmask. This would be\nstored as if a /32 was appended (or /-1 if we want a special flag.)\n\nIP and netmask can be entered as x.x.x.x/m or x.x.x.x:m.m.m.m. If the\nformer then store the IP and netmask. If the latter then convert the\ndotted mask to masklen and store as the former. Raise an exception if\nthe dotted mask form is invalid such as 255.255.0.255.\n\nNetwork alone can be stored the same as IP numbers. You need to specify\nthe mask length since networks can end in zeroes. 
Perhaps we can special\ncase inputs that don't have all 4 octets and apply the old class rules\nbut still store them like cidr addresses. There is no need to add a\nflag to differentiate networks from addresses into the type since we\nuse the field for one or the other so we know what it is when we need\nto display it. It's like using int to store both ID numbers and counts.\nThe database doesn't need to know the difference because we use any\nparticular field to store one or the other.\n\nNetmasks alone can be stored in an int field.\n\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Thu, 23 Jul 1998 09:58:19 -0400 (EDT)",
"msg_from": "[email protected] (D'Arcy J.M. Cain)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr"
},
{
"msg_contents": "> Thus spake Bruce Momjian\n> > OK, let me explain what I think Paul was saying. cidr is used for\n> > networks. You can use it for hosts by specifying /32. It is not the\n> > same as a netmask. For example:\n> > \n> > \thost\t192.24.45.32\n> \n> Right but a netmask could be specified as 255.255.255.0/32. Better yet,\n> if all you want to store is a netmask in a field, use an int. Every\n> netmask can be specified in dotted notation or as a mask length.\n\nBut we want the int to print as a IP address, and I don't think we want\ntwo types for IP addresses. Too messy.\n\n> \n> > My idea is to internally store the new type as 8 bytes:\n> > \n> > \t____ ____ ____ ____ ____ ___ ___ ____\n> > \tcidr addr x . x . x . x ip6 ip6\n> > \tbits len\n> \n> Why bother with the addr len? Just expand it out with zeroes before\n> storing it.\n\n192.0.0.1 and 192.0.0.1.0.0 are different because one is IPv6, and the\nother is not. We must keep that distinction stored somewhere. Might\nwas well use eight bytes. The padding is going to take that much in\nmost cases anyway, unless they use char (length of 1) or int2 after the\nfield.\n\n> \n> Maybe we could make cidr bits equal to -1 if we are storing a host with\n> indeterminate netmask rather than setting it to 32. That allows us\n> to specify raw IP numbers without faking a netmask.\n\nYes, that was the idea. No one wants to see a netmask of\n255.255.255.0/32. I don't want to field those support e-mails.\n\n\n> \n> > Again, not sure we want to merge address and netmask for hosts in the\n> > same field.\n> \n> Well, someone earlier suggested two different types, cidr for IPs with\n> network info and inet for IPs by themselves. 
The only argument against\n> that as I recall was that the cidr type would hold IPs alone as a special\n> case so why bother creating two different types?\n> \n> To review, here, I think, are the types of data we want to store and how\n> I think we can handle them with the addition of a single cidr type.\n> \n> IP alone can be entered as a dotted quad with no netmask. This would be\n> stored as if a /32 was appended (or /-1 if we want a special flag.)\n> \n> IP and netmask can be entered as x.x.x.x/m or x.x.x.x:m.m.m.m. If the\n> former then store the IP and netmask. If the latter then convert the\n> dotted mask to masklen and store as the former. Raise an exception if\n> the dotted mask form is invalid such as 255.255.0.255.\n\nNot sure if storing both IP and netmask in the same field is wise. You\nwould have:\n\n\t192.0.0.3/24\tcidr\n\t192.0.0.3:255.255.0.0 host/netmask\n\t192.0.0.3\thost, implied netmask A,B,C class?\n\t192.0.0.3/32\thost?\n\t192.0.0.3/32:255.255.255.0 host?/netmask\n\nInteresting. Comments?\n\n> \n> Network alone can be stored the same as IP numbers. You need to specify\n> the mask length since networks can end in zeroes. Perhaps we can special\n> case inputs that don't have all 4 octets and apply the old class rules\n> but still store them like cidr addresses. There is no need to add a\n> flag to differentiate networks from addresses into the type since we\n> use the field for one or the other so we know what it is when we need\n> to display it. It's like using int to store both ID numbers and counts.\n> The database doesn't need to know the difference because we use any\n> particular field to store one or the other.\n\nPrinting?\n\n> \n> Netmasks alone can be stored in an int field.\n\nAgain, we want a unified type, that makes sense to people. 
It must\nprint out properly.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Thu, 23 Jul 1998 10:34:11 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr"
},
{
"msg_contents": "Thus spake Bruce Momjian\n> > Right but a netmask could be specified as 255.255.255.0/32. Better yet,\n> > if all you want to store is a netmask in a field, use an int. Every\n> > netmask can be specified in dotted notation or as a mask length.\n> \n> But we want the int to print as a IP address, and I don't think we want\n> two types for IP addresses. Too messy.\n\nWell, sure. In that case use the first form. I can't actually think\nof any case where I would need to store netmasks independently of hosts\nin any case. I'm just pointing out alternate ways to store it if you\ncan think of any use for such a thing.\n\n> > Why bother with the addr len? Just expand it out with zeroes before\n> > storing it.\n> \n> 192.0.0.1 and 192.0.0.1.0.0 are different because one is IPv6, and the\n> other is not. We must keep that distinction stored somewhere. Might\n> was well use eight bytes. The padding is going to take that much in\n> most cases anyway, unless they use char (length of 1) or int2 after the\n> field.\n\nYes, I am not as up on IPv6 as I would like to be. However, I thought\nthat IPv6 addresses were IPv4 addresses with extra octets *pre*pended.\nAnyway, I suspect that either way the IPv6 addresses would have non\nzero bits added so zeroes in the extra bits could be the flag for IPv4\naddresses.\n\nHmm. How do we handle the different sized netmask lengths?\n\n> > Maybe we could make cidr bits equal to -1 if we are storing a host with\n> > indeterminate netmask rather than setting it to 32. That allows us\n> > to specify raw IP numbers without faking a netmask.\n> \n> Yes, that was the idea. No one wants to see a netmask of\n> 255.255.255.0/32. I don't want to field those support e-mails.\n\nAgain, storing netmasks themselves seems so anomalous that I do tend\nto not worry to much about it. 
Normally if we are interested in a\nnetmask we are also interested in the host IP so we would store\nsomething like \"192.3.4.5/24\" and, if we need the netmask, use the\nnetmask function.\n\n netmask('192.3.4.5/24::cidr') == 255.255.255.0\n masklen('192.3.4.5/24::cidr') == 24\n host('192.3.4.5/24::cidr') == 192.3.4.5\n network('192.3.4.5/24::cidr') == 192.3.4.0\n\nand perhaps;\n\n class('192.3.4.5/24::cidr') == C\n classnet('192.3.4.5/24::cidr') == 192.3.4\n\n> Not sure if storing both IP and netmask in the same field is wise. You\n> would have:\n\nI thought that that was the idea to begin with.\n\n> \t192.0.0.3/24\tcidr\nRight.\n\n> \t192.0.0.3:255.255.0.0 host/netmask\nConverted internally to 192.0.0.3:/16\n\n> \t192.0.0.3\thost, implied netmask A,B,C class?\nLetting this convert automatically to a C class may not be what was\ndesired. Better to specify the netmask. You may be subnetting it\nor even supernetting it.\n\n> \t192.0.0.3/32\thost?\nI would suggest that 192.0.0.3 should be the same thing unless we have\na mask len of -1 to signal indeterminate mask length in which case\n192.0.0.3 gets converted internally to 192.0.0.3/-1. Further, printing\na cidr with mask len of 32 (or -1) should print as if the host function\nwere called, that is don't print the network info in such cases.\n\n> \t192.0.0.3/32:255.255.255.0 host?/netmask\nBut 192.0.0.3/24 or 192.0.0.3:255.255.255.0 gives all the information\nthat you need.\n\n> > to display it. It's like using int to store both ID numbers and counts.\n> > The database doesn't need to know the difference because we use any\n> > particular field to store one or the other.\n> \n> Printing?\n\nYou mean printing netmasks? As I said, it seems to me that netmasks will\nalways be paired with a host or network but perhaps we can set up the\nfunction table so that netmask on an integer type converts to a netmask\nin the form you suggest. That would be the truly oo way to do it.\n\n-- \nD'Arcy J.M. 
Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Thu, 23 Jul 1998 23:36:59 -0400 (EDT)",
"msg_from": "[email protected] (D'Arcy J.M. Cain)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr"
},
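The accessor functions D'Arcy proposes in the message above (netmask, masklen, host, network) all reduce to mask arithmetic on a stored (address, masklen) pair. A Python sketch of their semantics, reproducing his example values for `192.3.4.5/24`:

```python
def _dotted(val):
    # 32-bit integer -> "a.b.c.d"
    return ".".join(str((val >> s) & 0xFF) for s in (24, 16, 8, 0))

def _mask(n):
    # masklen -> 32-bit mask of n leading ones
    return (0xFFFFFFFF << (32 - n)) & 0xFFFFFFFF

def netmask(addr, n):
    return _dotted(_mask(n))         # 192.3.4.5/24 -> 255.255.255.0

def masklen(addr, n):
    return n                         # 192.3.4.5/24 -> 24

def host(addr, n):
    return _dotted(addr)             # 192.3.4.5/24 -> 192.3.4.5

def network(addr, n):
    return _dotted(addr & _mask(n))  # zero the host bits: -> 192.3.4.0
```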
{
"msg_contents": "> I thought that that was the idea to begin with.\n> \n> > \t192.0.0.3/24\tcidr\n> Right.\n> \n> > \t192.0.0.3:255.255.0.0 host/netmask\n> Converted internally to 192.0.0.3:/16\n\nThis is a problem. Suppose you have:\n\n\t192.0.0.0:255.255.255.0\n\nThis is a host with netmask, while:\n\n\t192.0.0.0/24\n\nis a network address. Paul?\n\n> \n> > \t192.0.0.3\thost, implied netmask A,B,C class?\n> Letting this convert automatically to a C class may not be what was\n> desired. Better to specify the netmask. You may be subnetting it\n> or even supernetting it.\n> \n> > \t192.0.0.3/32\thost?\n> I would suggest that 192.0.0.3 should be the same thing unless we have\n> a mask len of -1 to signal indeterminate mask length in which case\n> 192.0.0.3 gets converted internally to 192.0.0.3/-1. Further, printing\n> a cidr with mask len of 32 (or -1) should print as if the host function\n> were called, that is don't print the network info in such cases.\n\n\nYep.\n\n> \n> > \t192.0.0.3/32:255.255.255.0 host?/netmask\n> But 192.0.0.3/24 or 192.0.0.3:255.255.255.0 gives all the information\n> that you need.\n\nSee example above. You use the 3 here to know it is a host, because the\nIP address extens past the netmask, but what if they are zeros?\n\n> You mean printing netmasks? As I said, it seems to me that netmasks will\n> always be paired with a host or network but perhaps we can set up the\n> function table so that netmask on an integer type converts to a netmask\n> in the form you suggest. That would be the truly oo way to do it.\n\nCertainly we could, but it seems nice to have one type just for ip-type\nstuff.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Fri, 24 Jul 1998 00:14:03 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr"
},
{
"msg_contents": "> OK, let me explain what I think Paul was saying. cidr is used for\n> networks. You can use it for hosts by specifying /32. It is not the\n> same as a netmask. For example:\n> \n> \thost\t192.24.45.32\n> \n> Now, this is a host address. We can say its netmask is 255.255.255.0,\n> or was can say it is part of network 192.24.45/24, which would allow you\n> compute the netmask as 255.255.255.0. The problem is that you need the\n> type to support cidr, hosts, and netmasks.\n\nin that case \"hosts\" and \"netmasks\" are completely unrelated to \"cidr\"'s\nand no design should try to cover all three similar-sounding-but-different\nneeds.\n\n> My idea is to internally store the new type as 8 bytes:\n> \n> \t____ ____ ____ ____ ____ ___ ___ ____\n> \tcidr addr x . x . x . x ip6 ip6\n> \tbits len\n> \n> That way, if they specify cidr bits, we store it. If they don't we make\n> the bits field equal -1, and print/sort appropriately. The addr len is\n> usually 3, but ip6 is also easy to add by making the addr len equal 6.\n\nouch!\n\nthe cidr i posted has an address family. there was a reason for that.\n",
"msg_date": "Fri, 24 Jul 1998 00:30:23 -0700",
"msg_from": "Paul A Vixie <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr "
},
{
    "msg_contents": "Thus spake Bruce Momjian\n> > > \t192.0.0.3:255.255.0.0 host/netmask\n> > Converted internally to 192.0.0.3:/16\n> \n> This is a problem.  Suppose you have:\n> \n> \t192.0.0.0:255.255.255.0\n> \n> This is a host with netmask, while:\n> \n> \t192.0.0.0/24\n> \n> is a network address.  Paul?\n\nI believe that these two representations refer to the same thing.  Whether\nthat thing is a network or an address depends on the application.  Either\nthe column is being used to store networks or hosts.  That's what I was\ngetting at with my previous analogy with int types.  An int could hold\nordinal numbers like IDs or it could hold quantities.  We don't need\nthe data type to store which.  The application knows and we don't store\nID codes and counts in the same column.  The same with IP numbers.  We\ndecide in any particular application whether a column is a list of hosts\nor a list of networks and we then populate it.\n\nI do like the idea of using attypmod to define the form of the type.\nI assume we can use that to determine the output format, that is, use\nit to effectively apply one of the functions to it.  That makes for\na clean use of the type.\n\n> > > \t192.0.0.3/32:255.255.255.0 host?/netmask\n> > But 192.0.0.3/24 or 192.0.0.3:255.255.255.0 gives all the information\n> > that you need.\n> \n> See example above.  You use the 3 here to know it is a host, because the\n> IP address extens past the netmask, but what if they are zeros?\n\nTechnically, 192.0.0.0/24 is a valid host on 192.0.0 although most\npeople avoid it because some older equipment doesn't handle it very\nwell.\n\n> > You mean printing netmasks?  As I said, it seems to me that netmasks will\n> > always be paired with a host or network but perhaps we can set up the\n> > function table so that netmask on an integer type converts to a netmask\n> > in the form you suggest.  That would be the truly oo way to do it.\n> \n> Certainly we could, but it seems nice to have one type just for ip-type\n> stuff.\n\nI agree.  I'm just saying that we can add the netmask function to integer\nas well.  That gives someone the flexibility to store it either way.\nHowever, I don't think I am going to speak to this point again until\nsomeone can give me a single example of a requirement for storing\nnetmasks independent of any hosts or networks.  :-)\n\nI just thought of another useful function.\n\n    broadcast('192.3.4.5/24::cidr') == 192.3.4.255\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net>   |  Democracy is three wolves\nhttp://www.druid.net/darcy/                |  and a sheep voting on\n+1 416 424 2871     (DoD#0082)    (eNTP)   |  what's for dinner.\n",
"msg_date": "Fri, 24 Jul 1998 07:50:43 -0400 (EDT)",
"msg_from": "[email protected] (D'Arcy J.M. Cain)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr"
},
{
"msg_contents": "Thus spake Paul A Vixie\n> the cidr i posted has an address family. there was a reason for that.\n\nI know that you posted the actual code but perhaps you can give us the\n50 cent tour of what you see the type doing and what has to be stored.\n\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Fri, 24 Jul 1998 07:57:36 -0400 (EDT)",
"msg_from": "[email protected] (D'Arcy J.M. Cain)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr"
},
{
    "msg_contents": "> Thus spake Bruce Momjian\n> > > > \t192.0.0.3:255.255.0.0 host/netmask\n> > > Converted internally to 192.0.0.3:/16\n> > \n> > This is a problem.  Suppose you have:\n> > \n> > \t192.0.0.0:255.255.255.0\n> > \n> > This is a host with netmask, while:\n> > \n> > \t192.0.0.0/24\n> > \n> > is a network address.  Paul?\n> \n> I believe that these two representations refer to the same thing.  Whether\n> that thing is a network or an address depends on the application.  Either\n> the column is being used to store networks or hosts.  That's what I was\n> getting at with my previous analogy with int types.  An int could hold\n> ordinal numbers like IDs or it could hold quantities.  We don't need\n> the data type to store which.  The application knows and we don't store\n> ID codes and counts in the same column.  The same with IP numbers.  We\n> decide in any particular application whether a column is a list of hosts\n> or a list of networks and we then populate it.\n> \n> I do like the idea of using attypmod to define the form of the type.\n> I assume we can use that to determine the output format, that is, use\n> it to effectively apply one of the functions to it.  That makes for\n> a clean use of the type.\n\nOK.  Sounds good to me.  The only problem is display.  If we don't\nindicate whether it is a cidr or host/netmask on column creation or\ninsertion, how do we display it so it makes sense?  Always cidr?\n\n\n> \n> > > > \t192.0.0.3/32:255.255.255.0 host?/netmask\n> > > But 192.0.0.3/24 or 192.0.0.3:255.255.255.0 gives all the information\n> > > that you need.\n> > \n> > See example above.  You use the 3 here to know it is a host, because the\n> > IP address extens past the netmask, but what if they are zeros?\n> \n> Technically, 192.0.0.0/24 is a valid host on 192.0.0 although most\n> people avoid it because some older equipment doesn't handle it very\n> well.\n> \n> > > You mean printing netmasks?  As I said, it seems to me that netmasks will\n> > > always be paired with a host or network but perhaps we can set up the\n> > > function table so that netmask on an integer type converts to a netmask\n> > > in the form you suggest.  That would be the truly oo way to do it.\n> > \n> > Certainly we could, but it seems nice to have one type just for ip-type\n> > stuff.\n> \n> I agree.  I'm just saying that we can add the netmask function to integer\n> as well.  That gives someone the flexibility to store it either way.\n> However, I don't think I am going to speak to this point again until\n> someone can give me a single example of a requirement for storing\n> netmasks independent of any hosts or networks.  :-)\n\nOK.  Why not?\n\n> \n> I just thought of another useful function.\n> \n>     broadcast('192.3.4.5/24::cidr') == 192.3.4.255\n\n\n-- \nBruce Momjian                          |  830 Blythe Avenue\[email protected]              |  Drexel Hill, Pennsylvania 19026\n  +  If your life is a hard drive,     |  (610) 353-9879(w)\n  +  Christ can be your backup.        |  (610) 853-3000(h)\n",
"msg_date": "Fri, 24 Jul 1998 12:00:16 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr"
},
{
"msg_contents": "Thus spake Bruce Momjian\n> > I do like the idea of using attypmod to define the form of the type.\n> > I assume we can use that to determine the output format, that is, use\n> > it to effectively apply one of the functions to it. That makes for\n> > a clean use of the type.\n> \n> OK. Sounds good to me. The only problem is display. If we don't\n> indicate whether it is a cidr or host/netmask on column creation or\n> insertion, how do we display it so it makes sense? Always cidr?\n\nWell, I guess we just decide on a default format if it is not defined.\nI think the default should be display as cidr (x.x.x.x/y) except omit\nthe mask length if it is 32 (or -1 if we go with that usage.) Perhaps\nmake one of the defined types always display cidr even in thos special\ncases.\n\n> > I agree. I'm just saying that we can add the netmask function to integer\n> > as well. That gives someone the flexibility to store it either way.\n> > However, I don't think I am going to speak to this point again until\n> > someone can give me a single example of a requirement for storing\n> > netmasks independent of any hosts or networks. :-)\n> \n> OK. Why not?\n\nI'm just saying that given that there isn't any useful situation where\nwe might want to store netmasks alone independent of IPs, I don't see\nmuch point in arguing how many IPs can dance on the end of a netmask.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Sat, 25 Jul 1998 00:08:20 -0400 (EDT)",
"msg_from": "[email protected] (D'Arcy J.M. Cain)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr"
},
{
"msg_contents": "> Thus spake Bruce Momjian\n> > > I do like the idea of using attypmod to define the form of the type.\n> > > I assume we can use that to determine the output format, that is, use\n> > > it to effectively apply one of the functions to it. That makes for\n> > > a clean use of the type.\n> > \n> > OK. Sounds good to me. The only problem is display. If we don't\n> > indicate whether it is a cidr or host/netmask on column creation or\n> > insertion, how do we display it so it makes sense? Always cidr?\n> \n> Well, I guess we just decide on a default format if it is not defined.\n> I think the default should be display as cidr (x.x.x.x/y) except omit\n> the mask length if it is 32 (or -1 if we go with that usage.) Perhaps\n> make one of the defined types always display cidr even in thos special\n> cases.\n\nWhere are we with this? I think Harouth took this on.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 11 Aug 1998 16:59:28 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cidr"
}
] |
[
{
"msg_contents": "Hi All,\n\nI am new to PostgreSQL. I was wondering if there is a way to recall\nprevious commands at the PSQL prompt, this would save me the hassle of\ntyping the same SQL commands again. Thanks in advance.\n\n\nBest Regards,\n\nChee Seng\n([email protected])\n\n",
"msg_date": "Mon, 20 Jul 1998 09:31:51 +0800",
"msg_from": "\"Gnoh, Chee Seng\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Recalling previous commands at the PSQL prompt"
},
{
"msg_contents": "On Mon, 20 Jul 1998, Gnoh, Chee Seng wrote:\n\n> Hi All,\n> \n> I am new to PostgreSQL. I was wondering if there is a way to recall\n> previous commands at the PSQL prompt, this would save me the hassle of\n> typing the same SQL commands again. Thanks in advance.\n\nUse the up-arrow. If that doesn't work, you probably haven't got \nlibreadline/libhistory installed, or they are installed in an unusual \nplace where configure couldn't find them.\n\nMaarten\n\n_____________________________________________________________________________\n| TU Delft, The Netherlands, Faculty of Information Technology and Systems |\n| Department of Electrical Engineering |\n| Computer Architecture and Digital Technique section |\n| [email protected] |\n-----------------------------------------------------------------------------\n\n",
"msg_date": "Mon, 20 Jul 1998 09:07:34 +0200 (MET DST)",
"msg_from": "Maarten Boekhold <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Recalling previous commands at the PSQL prompt"
},
{
"msg_contents": "Maarten Boekhold wrote:\n> \n> On Mon, 20 Jul 1998, Gnoh, Chee Seng wrote:\n> \n> > Hi All,\n> >\n> > I am new to PostgreSQL. I was wondering if there is a way to recall\n> > previous commands at the PSQL prompt, this would save me the hassle of\n> > typing the same SQL commands again. Thanks in advance.\n> \n> Use the up-arrow. If that doesn't work, you probably haven't got\n> libreadline/libhistory installed, or they are installed in an unusual\n> place where configure couldn't find them.\n\nHow about giving psql command history a longer life, like bash does it\n(.bash_history)?\n\n\nGene\n",
"msg_date": "Mon, 20 Jul 1998 14:51:05 +0000",
"msg_from": "\"Gene Selkov, Jr.\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Recalling previous commands at the PSQL prompt"
}
] |
[
{
"msg_contents": "We currently use Name sometimes, and char* other times to store\nrelation, attribute, type, and view names.\n\nOne thing Mariposa did was to make that more consistent, so you passed\naround Name(NameData pointers) instead of the more generic char *.\nHowever, the Name fields behave like char*, but are clearer.\n\n\ttypedef struct nameData\n\t{\n\t char data[NAMEDATALEN];\n\t} NameData;\n\ttypedef NameData *Name;\n\nDo people see value in making this switch? Would take me a few hours to\nmake the change.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Mon, 20 Jul 1998 00:48:58 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Name type vs. char *"
},
{
    "msg_contents": "> We currently use Name sometimes, and char* other times to store\n> relation, attribute, type, and view names.\n> \n> One thing Mariposa did was to make that more consistent, so you passed\n> around Name(NameData pointers) instead of the more generic char *.\n> However, the Name fields behave like char*, but are clearer.\n> \n> \ttypedef struct nameData\n> \t{\n> \t    char        data[NAMEDATALEN];\n> \t} NameData;\n> \ttypedef NameData *Name;\n> \n> Do people see value in making this switch?  Would take me a few hours to\n> make the change.\n\nI have decided I don't even like this change.  The confusion is because\nof the on-disk name storage vs. query-supplied names.  I will add this\nto the developers FAQ.  This has confused me, so I assume others may be\nconfused about the distinction.\n\nComments?\n\n---------------------------------------------------------------------------\n\nWhy are table, column, type, function, view names sometimes referenced\nas Name or NameData, and sometimes as char *?\n\nTable, column, type, function, and view names are stored in system\ntables in columns of type Name.  Name is a fixed-length, null-terminated\ntype of NAMEDATALEN bytes.  (The default value for NAMEDATALEN is 32\nbytes.)\n\n\ttypedef struct nameData\n\t{\n\t    char        data[NAMEDATALEN];\n\t} NameData;\n\ttypedef NameData *Name;\n\nTable, column, type, function, and view names that come in to the\nbackend via user queries are stored as variable-length, null-terminated\ncharacter strings.\n\nMany functions are called with both types of names, ie. heap_open().  \nBecause the Name type is null-terminated, it is safe to pass it to a\nfunction expecting a char *.  Because there are many cases where on-disk\nnames(Name) are compared to user-supplied names(char *), there are many\ncases where Name and char * are coerced to match each other.\n\n\n-- \nBruce Momjian                          |  830 Blythe Avenue\[email protected]              |  Drexel Hill, Pennsylvania 19026\n  +  If your life is a hard drive,     |  (610) 353-9879(w)\n  +  Christ can be your backup.        |  (610) 853-3000(h)\n",
"msg_date": "Mon, 20 Jul 1998 05:37:14 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Name type vs. char *"
}
] |
[
{
"msg_contents": "I am trying to finish off my Python interface with some extra helper\nfunctions and I need to find the primary key in a table if it exists.\nI have two questions.\n\nAlthough I can't imagine doing so, will the system allow you to create\nmore than one primary key on a table? I just need to know whether I\nned to test for multiple keys.\n\nCan someone suggest a SQL statement to pull out the primary key(s) from\na table?\n\nAlso, if multiple keys are allowed, what are people's opinions about\nusing them? Basically I am creating a get function that is defined as:\n\ndef db_get(db, cl, arg, keyname = None):\n\nwhere db is the database handle, cl is the class, arg is either a value\nto lookup or a dictionary containing the value and keyname is the\nfield to lookup which defaults to the primary key. The question is,\nwhat do I do if keyname is omitted (defaults to primary) and there\nare two primary keys. Should I just use the first one or should I\nraise an exception. I favour the latter.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Mon, 20 Jul 1998 09:22:14 -0400 (EDT)",
"msg_from": "[email protected] (D'Arcy J.M. Cain)",
"msg_from_op": true,
"msg_subject": "Finding primary keys in a table"
},
{
"msg_contents": "> I am trying to finish off my Python interface with some extra helper\n> functions and I need to find the primary key in a table if it exists.\n> I have two questions.\n> \n> Although I can't imagine doing so, will the system allow you to create\n> more than one primary key on a table? I just need to know whether I\n> ned to test for multiple keys.\n> \n> Can someone suggest a SQL statement to pull out the primary key(s) from\n> a table?\n> \n> Also, if multiple keys are allowed, what are people's opinions about\n> using them? Basically I am creating a get function that is defined as:\n> \n> def db_get(db, cl, arg, keyname = None):\n> \n> where db is the database handle, cl is the class, arg is either a value\n> to lookup or a dictionary containing the value and keyname is the\n> field to lookup which defaults to the primary key. The question is,\n> what do I do if keyname is omitted (defaults to primary) and there\n> are two primary keys. Should I just use the first one or should I\n> raise an exception. I favour the latter.\n\nBecause we just create a unique index on a PRIMARY specification, I\nthink any unique index on a field shows it as primary.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Mon, 20 Jul 1998 11:44:50 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Finding primary keys in a table"
},
{
"msg_contents": "Thus spake Bruce Momjian\n> > Can someone suggest a SQL statement to pull out the primary key(s) from\n> > a table?\n> Because we just create a unique index on a PRIMARY specification, I\n> think any unique index on a field shows it as primary.\n\nHmm. Any chance we can somehow flag it as well? Perhaps a new bool\nfield in pg_index the next time we do a dump & reload release? I\nassume we will need it eventually anyway.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Mon, 20 Jul 1998 15:11:23 -0400 (EDT)",
"msg_from": "[email protected] (D'Arcy J.M. Cain)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Finding primary keys in a table"
},
{
"msg_contents": "> Thus spake Bruce Momjian\n> > > Can someone suggest a SQL statement to pull out the primary key(s) from\n> > > a table?\n> > Because we just create a unique index on a PRIMARY specification, I\n> > think any unique index on a field shows it as primary.\n> \n> Hmm. Any chance we can somehow flag it as well? Perhaps a new bool\n> field in pg_index the next time we do a dump & reload release? I\n> assume we will need it eventually anyway.\n\nYes, we will. The question is when to add it. Probably best to wait\nuntil we do the whole thing. If we do it now, it is probable it will\nchange when we add foreign keys.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Mon, 20 Jul 1998 15:18:55 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Finding primary keys in a table"
},
{
"msg_contents": "> > Because we just create a unique index on a PRIMARY specification, I\n> > think any unique index on a field shows it as primary.\n> Hmm. Any chance we can somehow flag it as well? Perhaps a new bool\n> field in pg_index the next time we do a dump & reload release? I\n> assume we will need it eventually anyway.\n\nI'm not sure I understand all the issues, but if we can avoid\ndistinctions between different indices that would be A Good Thing. Since\nmultiple unique indices are allowed, what would be the extra\nfunctionality of having one designated \"primary\"? Is it an arbitrary\nSQL92-ism which fits with older databases, or something which enables\nnew and interesting stuff?\n\n - Tom\n",
"msg_date": "Tue, 21 Jul 1998 01:41:21 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Finding primary keys in a table"
},
{
"msg_contents": "Thus spake Thomas G. Lockhart\n> I'm not sure I understand all the issues, but if we can avoid\n> distinctions between different indices that would be A Good Thing. Since\n> multiple unique indices are allowed, what would be the extra\n> functionality of having one designated \"primary\"? Is it an arbitrary\n> SQL92-ism which fits with older databases, or something which enables\n> new and interesting stuff?\n\nWell, in database design there is a distinction between indeces and\nkeys. Being able to specify this distinction in the database seems\nuseful to me from a database designer perspective.\n\nHere's how I use that distinction in my Python code.\n\ndata = db_get(db, customer, client_id)\ndata = db_get(db, province, data)\n\nThe first line gets the client record based on the client ID. The next\nline gets the information on the client's province such as full name,\ntax rate, etc. It does this because it knows that prov, the two letter\ncode field, is the primary key on province and it can find the prov\nfield in the data dictionary which came from the customer class. This\nis similar to how some 4GLs do it.\n\nFIND FIRST customer WHERE client_id = x, province OF customer.\n\nCertainly there are alternate ways of doing this but it is nice to be\nable to put as much information into the RDBMS as possible. Codd's\nfirst rule might be used in support of this.\n\n \"All information represented only in tables.\"\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Tue, 21 Jul 1998 09:33:08 -0400 (EDT)",
"msg_from": "[email protected] (D'Arcy J.M. Cain)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Finding primary keys in a table"
}
] |
[
{
"msg_contents": "Vadim (or anyone else), can you comment on the use of\nexec_tlist_length() vs. ExecTargetListLength(). This was changed in\nMariposa by removing the first one.\n\nThe first is called only in the planner.c, and computes the length as:\n\n len = 0;\n foreach(tl, targetlist)\n {\n curTle = lfirst(tl);\n\n if (curTle->resdom != NULL)\n len++;\n }\n return len;\n\nwhile ExecTargetListLength() uses:\n\n len = 0;\n foreach(tl, targetlist)\n {\n curTle = lfirst(tl);\n\n if (curTle->resdom != NULL)\n len++;\n else\n len += curTle->fjoin->fj_nNodes;\n }\n return len;\n\nThe second counts resdom as one, or add fj_nNodes. Which is correct, or\nare they used for different purposes. Seems like the second is more\ncorrect.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Mon, 20 Jul 1998 13:02:35 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "exec_tlist_length"
}
] |
[
{
"msg_contents": "The weekend's hacking on s_lock.h broke it for all platforms that\nneed non-default definitions of S_UNLOCK or S_INIT_LOCK (hpux,\nalpha, a couple others). Someone put unconditional definitions\nof those macros at the bottom of the file. I suspect this was a\nplain old editing typo, but perhaps the intent was to put such\ndefinitions in one of the platform-specific #if blocks? (If so,\nthey were unnecessary anyway.) Anyhow, the attached patch fixes\nit for hpux.\n\n\t\t\tregards, tom lane\n\n\n*** src/include/storage/s_lock.h.orig\tMon Jul 20 12:05:59 1998\n--- src/include/storage/s_lock.h\tMon Jul 20 13:04:49 1998\n***************\n*** 323,332 ****\n #define TAS(lock)\t\ttas((volatile slock_t *) lock)\n #endif /* TAS */\n \n- #define S_UNLOCK(lock) (*(lock) = 0)\n- \n- #define S_INIT_LOCK(lock) S_UNLOCK(lock)\n- \n \n #endif /* HAS_TEST_AND_SET */\n #endif /* S_LOCK_H */\n--- 323,328 ----\n",
"msg_date": "Mon, 20 Jul 1998 13:12:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "s_lock.h busted"
},
{
"msg_contents": "> The weekend's hacking on s_lock.h broke it for all platforms that\n> need non-default definitions of S_UNLOCK or S_INIT_LOCK (hpux,\n> alpha, a couple others). Someone put unconditional definitions\n> of those macros at the bottom of the file. I suspect this was a\n> plain old editing typo, but perhaps the intent was to put such\n> definitions in one of the platform-specific #if blocks? (If so,\n> they were unnecessary anyway.) Anyhow, the attached patch fixes\n> it for hpux.\n> \n\nIt came in from:\n\n\tSomewhere between 6.1 and 6.3 someone removed the support for the\n\tNS32K machine I contributed. In any case, I now have postgresql-6.3\n\trunning again on NetBSD/pc532, a NS32532 machine. The following\n\tchanges are needed relative to the src directory. (It looks like\n\tsupport was partially removed when the files were moved from the\n\tsrc/backend/storage/.... tree to the src/include tree.)\n\t\n\tIf you need me to get a current development version of postgresql\n\tfor this change let me know. Also, let me know if this code needs\n\tupdating due to another code movement that deleted the old NS32K\n\tsupport.\n\t\n\tThank you.\n\t\n\tPhil Nelson\n\nFix applied.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Mon, 20 Jul 1998 13:46:21 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] s_lock.h busted"
},
{
"msg_contents": "> \n> The weekend's hacking on s_lock.h broke it for all platforms that\n> need non-default definitions of S_UNLOCK or S_INIT_LOCK (hpux,\n> alpha, a couple others). Someone put unconditional definitions\n> of those macros at the bottom of the file. I suspect this was a\n> plain old editing typo, but perhaps the intent was to put such\n> definitions in one of the platform-specific #if blocks? (If so,\n> they were unnecessary anyway.) Anyhow, the attached patch fixes\n> it for hpux.\n> \n> \t\t\tregards, tom lane\n> \n> \n> *** src/include/storage/s_lock.h.orig\tMon Jul 20 12:05:59 1998\n> --- src/include/storage/s_lock.h\tMon Jul 20 13:04:49 1998\n> ***************\n> *** 323,332 ****\n> #define TAS(lock)\t\ttas((volatile slock_t *) lock)\n> #endif /* TAS */\n> \n> - #define S_UNLOCK(lock) (*(lock) = 0)\n> - \n> - #define S_INIT_LOCK(lock) S_UNLOCK(lock)\n> - \n> \n> #endif /* HAS_TEST_AND_SET */\n> #endif /* S_LOCK_H */\n> --- 323,328 ----\n> \n\n\nArrrrgggghhh!!!!\n\nOk, I'm calmer now...\n\nThese were meant to be in the conditional blocks at the end of the file so\nthat if (and only if) no definition existed we would get a default. So:\n\n#ifndef S_UNLOCK\n#define S_UNLOCK(lock) (*(lock) = 0)\n#endif\n\n#ifndef S_INIT_LOCK\n#define S_INIT_LOCK(lock) S_UNLOCK(lock)\n#endif\n\nI am a little concerned about the recent batch of patches made to this code.\nI was planning a cleanup patch to resolve all the issues raised, but kept\nseeing other patches and since I got badly burned by a merge conflict I was\nhoping it would settle down a little. Sigh...\n\nPerhaps I need to pull the latest tree again and see where we have gotten\nto.\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n - If simplicity worked, the world would be overrun with insects. -\n",
"msg_date": "Mon, 20 Jul 1998 12:15:46 -0700 (PDT)",
"msg_from": "[email protected] (David Gould)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] s_lock.h busted"
},
{
"msg_contents": "[email protected] (David Gould) writes:\n> Arrrrgggghhh!!!!\n> These were meant to be in the conditional blocks at the end of the file so\n> that if (and only if) no definition existed we would get a default. So:\n> #ifndef S_UNLOCK\n> #define S_UNLOCK(lock) (*(lock) = 0)\n> #endif\n> #ifndef S_INIT_LOCK\n> #define S_INIT_LOCK(lock) S_UNLOCK(lock)\n> #endif\n\nRight, but those default definitions were *already there*.\n\nThe lines I was complaining about were added immediately after the\ndefault definitions, and overrode *any* prior definition of the macros.\nAs far as I can see they were just a typo/thinko.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 20 Jul 1998 15:41:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] s_lock.h busted "
}
] |
[
{
"msg_contents": "I have found cvs creating many directories that I know were removed from\nthe source tree. I now use the -P option to cvs update/checkout to\nprevent this from happening.\n\n -P Prune (remove) directories that are empty after\n being updated, on checkout, or update. Normally,\n an empty directory (one that is void of revision-\n controlled files) is left alone. Specifying -P\n will cause these directories to be silently removed\n from your checked-out sources. This does not\n remove the directory from the repository, only from\n your checked out copy. Note that this option is\n implied by the -r or -D options of checkout and\n export.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Mon, 20 Jul 1998 16:45:24 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "cvs and empty directories"
},
{
"msg_contents": "On Mon, 20 Jul 1998, Bruce Momjian wrote:\n\n> I have found cvs creating many directories that I know were removed from\n> the source tree. I now use the -P option to cvs update/checkout to\n> prevent this from happening.\n> \n> -P Prune (remove) directories that are empty after\n> being updated, on checkout, or update. Normally,\n> an empty directory (one that is void of revision-\n> controlled files) is left alone. Specifying -P\n> will cause these directories to be silently removed\n> from your checked-out sources. This does not\n> remove the directory from the repository, only from\n> your checked out copy. Note that this option is\n> implied by the -r or -D options of checkout and\n> export.\n\n\tThe general recommendation that I've received concerning this is\nto run:\n\n\tcvs -q update -APd\n\n\t-q puts it in a quiet mode, so only changes are reported\n\t-A removes any sticky tags to give you CURRENT sources\n\t-P Prunes (as above)\n\t-d creates any new directories required\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 20 Jul 1998 21:22:38 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cvs and empty directories"
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> \tThe general recommendation that I've received concerning this is\n> to run:\n> \tcvs -q update -APd\n> \t-q puts it in a quiet mode, so only changes are reported\n> \t-A removes any sticky tags to give you CURRENT sources\n> \t-P Prunes (as above)\n> \t-d creates any new directories required\n\nUnless you have a fast connection to hub.org, another good switch is\n\"-z3\" to enable use of gzip compression on the cvs server connection.\n(Marc presumably doesn't need this, but I sure do.)\n\n-z is a \"generic\" switch that applies to all cvs ops not just update,\nso it goes on the left side of the update keyword:\n\n\tcvs -q -z3 update -APd\n\nBTW, you can use a ~/.cvsrc file to set default switches and not\nhave to remember to supply them. I use\n\ncvs -z3\nupdate -d -P\n\nso I can just type \"cvs update\". Any other cvs command that I issue\nwill also automatically get -z3, which makes cvs a lot more usable\nover a modem link.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 21 Jul 1998 11:14:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cvs and empty directories "
}
] |
[
{
"msg_contents": "> Vadim (or anyone else), can you comment on the use of\n> exec_tlist_length() vs. ExecTargetListLength(). This was changed in\n> Mariposa by removing the first one.\n> \n> The first is called only in the planner.c, and computes the length as:\n> \n> len = 0;\n> foreach(tl, targetlist)\n> {\n> curTle = lfirst(tl);\n> \n> if (curTle->resdom != NULL)\n> len++;\n> }\n> return len;\n> \n> while ExecTargetListLength() uses:\n> \n> len = 0;\n> foreach(tl, targetlist)\n> {\n> curTle = lfirst(tl);\n> \n> if (curTle->resdom != NULL)\n> len++;\n> else\n> len += curTle->fjoin->fj_nNodes;\n> }\n> return len;\n> \n> The second counts resdom as one, or add fj_nNodes. Which is correct, or\n> are they used for different purposes. Seems like the second is more\n> correct.\n\nI have done some more research and found fj_nNodes is not used\n(referenced in the FIXED_SETS code), so I have removed the first\nfunction and make them all reference the second.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Mon, 20 Jul 1998 16:52:17 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: exec_tlist_length"
}
] |
[
{
"msg_contents": "Hi,\n\nI grabbed the latest from CVS and everything compiles fine and works\nwell.\n\nRegression only fails int8, float8, geometry and union (union all clauses)\n\nI'll be having another play later with my test DB.\n\nKeith.\n\n\nBruce Momjian <[email protected]>\n\n> \n> Patch applied.\n> \n> \n> > I haven't seen any followups to this, but I finally got around to\n> > compiling the system again myself, and David's fix is not quite right.\n\n\n",
"msg_date": "Mon, 20 Jul 1998 22:21:29 +0100 (BST)",
"msg_from": "Keith Parks <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] s_lock.h problem on S/Linux"
}
] |
[
{
"msg_contents": "As people analyze parts of the source tree, I would appreciate a README\nwriteup of the file names in the directory and their use, or the purpose\nof the directory. You can see examples of some of them already in the\nsources.\n\nI have done quite a bit of developer documentation, and that has\ncertainly helped people contribute to the project. I need others to\ninclude their knowledge in the Developers FAQ, the backend flowchart,\nor README files so we can continue to accumulate knowledge of the source\ncode that will help new developers get up-to-speed.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 21 Jul 1998 01:06:31 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "README files"
}
] |
[
{
"msg_contents": "> browse: <http://www.msnbc.com/news/181503.asp>. Thanks\n> to Greg Smith <[email protected]> for forwarding.\n\n After shying away from the Linux platform for several months,\n Informix Corp. will do an about face at its international users\n conference in Seattle this week. Archrival Oracle Corp. is\n expected to put its stamp on approval on Linux this week as\n well, by announcing plans to do a Linux port of its Oracle\n database, according to sources.\n\nOoh. We're getting some serious company. Wonder if they'll be able to\ncatch up with Postgres :)\n\n - Tom\n",
"msg_date": "Tue, 21 Jul 1998 05:21:43 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "[Fwd: SGVLLUG Oracle and Informix on Linux]"
},
{
"msg_contents": "> > browse: <http://www.msnbc.com/news/181503.asp>. Thanks\n> > to Greg Smith <[email protected]> for forwarding.\n> \n> After shying away from the Linux platform for several months,\n> Informix Corp. will do an about face at its international users\n> conference in Seattle this week. Archrival Oracle Corp. is\n> expected to put its stamp on approval on Linux this week as\n> well, by announcing plans to do a Linux port of its Oracle\n> database, according to sources.\n> \n> Ooh. We're getting some serious company. Wonder if they'll be able to\n> catch up with Postgres :)\n\nIngres II is going to release on Linux too. So now we have Informix,\nOracle, and Ingres to compete with. Yikes.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 21 Jul 1998 01:36:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [Fwd: SGVLLUG Oracle and Informix on Linux]"
},
{
"msg_contents": "On Tue, 21 Jul 1998, Bruce Momjian wrote:\n\n> > > browse: <http://www.msnbc.com/news/181503.asp>. Thanks\n> > > to Greg Smith <[email protected]> for forwarding.\n> > \n> > After shying away from the Linux platform for several months,\n> > Informix Corp. will do an about face at its international users\n> > conference in Seattle this week. Archrival Oracle Corp. is\n> > expected to put its stamp on approval on Linux this week as\n> > well, by announcing plans to do a Linux port of its Oracle\n> > database, according to sources.\n> > \n> > Ooh. We're getting some serious company. Wonder if they'll be able to\n> > catch up with Postgres :)\n> \n> Ingres II is going to release on Linux too. So now we have Informix,\n> Oracle, and Ingres to compete with. Yikes.\n\n\tCompete with? They are all releasing free versions for Linux, vs\nthe 10's of thousands of dollars they cost for the other operating\nsystems? :)\n\n\n",
"msg_date": "Tue, 21 Jul 1998 08:06:51 -0400 (EDT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [Fwd: SGVLLUG Oracle and Informix on Linux]"
},
{
"msg_contents": "> On Tue, 21 Jul 1998, Bruce Momjian wrote:\n> \n> > > > browse: <http://www.msnbc.com/news/181503.asp>. Thanks\n> > > > to Greg Smith <[email protected]> for forwarding.\n> > > \n> > > After shying away from the Linux platform for several months,\n> > > Informix Corp. will do an about face at its international users\n> > > conference in Seattle this week. Archrival Oracle Corp. is\n> > > expected to put its stamp on approval on Linux this week as\n> > > well, by announcing plans to do a Linux port of its Oracle\n> > > database, according to sources.\n> > > \n> > > Ooh. We're getting some serious company. Wonder if they'll be able to\n> > > catch up with Postgres :)\n> > \n> > Ingres II is going to release on Linux too. So now we have Informix,\n> > Oracle, and Ingres to compete with. Yikes.\n> \n> \tCompete with? They are all releasing free versions for Linux, vs\n> the 10's of thousands of dollars they cost for the other operating\n> systems? :)\n\n[Informix, Oracle, and Ingres will be releasing versions of their\ndatabase engines under Linux in the future.]\n\nOK, let's discuss this. How does this affect us? With all three\nreleasing around the same time, they really dilute themselves. I can't\nimagine most people trying more than one of the commercial alternatives.\n\nCertain people will be tempted by a commercial SQL server, while others\nwill prefer us because of:\n\n\tfeatures\n\tinstalled base\n\topen source\n\tsupport\n\tprice(some are free)\n\nIs there anything we need to do to prevent loss of user base?\n\nAlso, I was reading a thread on comp.databases that was discussing free\ndatabase alternatives, and no one had mentioned PostgreSQL. We need\npeople to spread the word about PostgreSQL in all the forums they\nfrequent. Just point them to www.postgresql.org, and they can look at\nit themselves. If they have heard of it, but don't use it, please tell\nus why so we can clearly address those issues. We need people to get\nmore involved in promoting us.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 21 Jul 1998 10:58:52 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [Fwd: SGVLLUG Oracle and Informix on Linux]"
},
{
"msg_contents": "> OK, let's discuss this. How does this affect us? With all three\n> releasing around the same time, they really dilute themselves. I can't\n> imagine most people trying more than one of the commercial alternatives.\n\nI offer myself up as a case study...\n\nI will likely use Oracle (or one of the other two) for some things, and\nPostgreSQL for other things. Where expense is the key issue for a\ncustomer, PostgreSQL. Where cost is less of a factor, Oracle.\n\nI say this with these (mostly uninformed) assumptions in mind. Oracle's\nODBC driver is probably more complete. Oracle is better documented. Oracle\nhas a lot of related tools. Oracle offers training.\n\n> Certain people will be tempted by a commercial SQL server, while others\n> will prefer us because of:\n> \n> \tfeatures\n\nAs many posts I see to this list are \"how do I do this\" - \"not\nimplemented, wait for a later version\", I'm not sure why you would make\nthis claim. Again, I'm not a person who spends a great deal of time on\ndatabases and I do consider myself uninformed.\n\n> \tinstalled base\n\nPostgreSQL coming preinstalled with RedHat Linux 5.1 was the sole reason I\nselected it. It was just too convenient.\n\n> \topen source\n\nWhile I can appreciate this, it is not a requirement. Without a background\nin database related knowledge, I would probably do more harm than good in\nthe short term, and no time for a long term investment in changes.\n\n> \tsupport\n\nThe mailing lists are nice. I appreciate them very much. There's probably\na mailing list for Oracle. What more is there for support?\n\n> \tprice(some are free)\n\nThis is the significant advantage of PostgreSQL to me.\n\n\nBruce Tong | Got me an office; I'm there late at night.\nSystems Programmer | Just send me e-mail, maybe I'll write.\nElectronic Vision / FITNE | \[email protected] | -- Joe Walsh for the 21st Century\n\n",
"msg_date": "Tue, 21 Jul 1998 11:51:23 -0400 (EDT)",
"msg_from": "Bruce Tong <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re: [HACKERS] [Fwd: SGVLLUG Oracle and Informix on\n\tLinux]"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n\n| OK, let's discuss this. How does this affect us? [...]\n| Certain people will be tempted by a commercial SQL server, while others\n| will prefer us because of:\n| \n| \tfeatures\n\nSorry, but I just don't buy this at the moment, for several reasons.\n\nDon't get me wrong. I like PostgreSQL, and think it could *eventually* kick\nbutt, but (as always, IMHO) it's Not Ready for Prime Time yet, not by a long\nshot. Let's look at some of the most problematic issues at the moment:\n\n *\tNo foreign keys.\n\n\tThis is a real kicker for a lot of people. Foreign keys are a big data\n\tintegrity issue. Fortunately, you can get around these with triggers,\n\tbut:\n\n *\tNo SQL-based triggers.\n\n\tTriggers have to be written in C, and this is a big showstopper for a\n\tlot of people.\n\n *\tNo OUTER JOIN (left or right).\n\n\tYes, you can simulate some of these with various UNION operators, but\n\tit's definitely off the SQL mainstream.\n\n *\t32-bit OIDs.\n\n\tThis pretty much takes PostgreSQL out of the running for large database\n\tprojects.\n\n *\tHard-to-grok source code.\n\n\tOpen source is great, but PostgreSQL source code still has great swaths\n\tof uncommented stretches of code, and that makes it much more difficult\n\tto do things like add esoteric types, or even extend the functionality\n\tof existing types. I recognize that most of this is because it's still\n\tan amalgam of Postgres with the new stuff, but for PostgreSQL source to\n\tbe a \"selling point\" of the software, it has to make the job of adding\n\ttypes and functionality *much* easier rather than merely possible.\n\nThere are a wide array of other issues, too; the simplistic security, view\nlimitations, administrational problems (eventually, for example, vacuum should\nbe unnecessary), analysis issues, replication issues, cross-server database\nissues, index limitations, the lack of a good front end designer, the lack of a\ngood report designer, locking issues, and so on.\n\nAs I said, I like PostgreSQL. It could eventually be a serious competitor to\nOracle. I'd love to see it do so. But this news of commercial competitors\nwill certainly eat away at a good portion of PostgreSQL's commercial customers,\nand I can't see PostgreSQL reversing that trend unless 6.5 is a major leap\nforward.\n\n\t\t\t\t\t\t---Ken McGlothlen\n\t\t\t\t\t\t [email protected]\n",
"msg_date": "Tue, 21 Jul 1998 10:32:47 -0700 (PDT)",
"msg_from": "Ken McGlothlen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re: [HACKERS] [Fwd: SGVLLUG Oracle and Informix on\n\tLinux]"
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> \n> | OK, let's discuss this. How does this affect us? [...]\n> | Certain people will be tempted by a commercial SQL server, while others\n> | will prefer us because of:\n> | \n> | \tfeatures\n> \n> Sorry, but I just don't buy this at the moment, for several reasons.\n> \n> Don't get me wrong. I like PostgreSQL, and think it could *eventually* kick\n> butt, but (as always, IMHO) it's Not Ready for Prime Time yet, not by a long\n> shot. Let's look at some of the most problematic issues at the moment:\n> \n> *\tNo foreign keys.\n> \n> \tThis is a real kicker for a lot of people. Foreign keys are a big data\n> \tintegrity issue. Fortunately, you can get around these with triggers,\n> \tbut:\n> \n> *\tNo SQL-based triggers.\n> \n> \tTriggers have to be written in C, and this is a big showstopper for a\n> \tlot of people.\n> \n> *\tNo OUTER JOIN (left or right).\n> \n> \tYes, you can simulate some of these with various UNION operators, but\n> \tit's definitely off the SQL mainstream.\n> \n> *\t32-bit OIDs.\n> \n> \tThis pretty much takes PostgreSQL out of the running for large database\n> \tprojects.\n> \n> *\tHard-to-grok source code.\n> \n> \tOpen source is great, but PostgreSQL source code still has great swaths\n> \tof uncommented stretches of code, and that makes it much more difficult\n> \tto do things like add esoteric types, or even extend the functionality\n> \tof existing types. I recognize that most of this is because it's still\n> \tan amalgam of Postgres with the new stuff, but for PostgreSQL source to\n> \tbe a \"selling point\" of the software, it has to make the job of adding\n> \ttypes and functionality *much* easier rather than merely possible.\n> \n> There are a wide array of other issues, too; the simplistic security, view\n> limitations, administrational problems (eventually, for example, vacuum should\n> be unnecessary), analysis issues, replication issues, cross-server database\n> issues, index limitations, the lack of a good front end designer, the lack of a\n> good report designer, locking issues, and so on.\n> \n> As I said, I like PostgreSQL. It could eventually be a serious competitor to\n> Oracle. I'd love to see it do so. But this news of commercial competitors\n> will certainly eat away at a good portion of PostgreSQL's commercial customers,\n> and I can't see PostgreSQL reversing that trend unless 6.5 is a major leap\n> forward.\n\nYou bring up some very good points here.\n\nConsider what we are doing. Commercial database vendors have teams of\nfull-time programmers, adding features to their databases, while we have\na volunteer group of part-time developers.\n\nMany of the missing items you mention were only added to commercial\ndatabases several years ago. Our database only just added subselects,\nwhich they had years ago. Hard to imagine how we can keep up with\ncommercial systems. Fortunately, we have many features they don't have,\nwhich we inherited from Berkeley.\n\nActually, a database server sits on the software complexity scale just\nbelow compilers and OS kernels. This is not easy stuff. \n\nAs far as our source code, I think it is very clean. I have made it a\npersonal project of mine to make it clear, so other people can\nunderstand it and hence contribute. I know our code is cleaner than\nMySQL, and I would guess it is cleaner than many of the commercial SQL\nengines. Our www site has a new \"How PostgreSQL Processes a Query\" paper\nin the documentation section, that explains the basics of how the backend\nworks.\n\nSo where does that leave us. We are open source, and those running\nLinux, FreeBSD, etc. already have chosen open software, so we have an\nadvantage there.\n\nWe clearly are the most advanced \"open source\" database around. We now\nhave \"closed source\" competition. How do we meet that challenge?\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 21 Jul 1998 15:30:05 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re: [HACKERS] [Fwd: SGVLLUG Oracle and Informix on\n\tLinux]"
},
{
"msg_contents": "That is my case:\n\nWe have a Sun Ultra Sparc acting as a server for ~ 90 Pc running M$ Dos or\nWindows, almost all playing with a CAD program. I and a few other people\ntake care of the whole thing.\n\nWe need a SQL server but it is very hard for us to have approved a budget\nof thousands of dollars to buy, train and maintain a program like Informix\nor Oracle to run in our server when we have to buy computers and programs\nthat run CAD to allow our engineers to work.\n\nSo PostgreSQL really saved my life. It runs very well at Sun, I have a very\ngood support from all of you and I do not need all the stuff Oracle or\nInformix offers.\n\nI am now making a program that controls all our project files (more than\n50.000) that are accessed by people that works where.(It was based in DBF\nfiles). And the files and data will be accessible inside our office or\noutside through browsers (CGI etc ...).\n\nI will port a big calc program that will store all data into PostgreSQL.\n\nI see PostgreSQL not only as a program for PC running Linux but also as a\nvery good alternative for all unix boxes.\n\nRoberto\n>\n>OK, let's discuss this. How does this affect us? With all three\n>releasing around the same time, they really dilute themselves. I can't\n>imagine most people trying more than one of the commercial alternatives.\n>\n\n------------------------------------------------------------------\nEng. Roberto João Lopes Garcia E-mail: [email protected]\nF. 55 11 848 9906 FAX 55 11 848 9955\n\nMHA Engenharia Ltda\nE-mail: [email protected] WWW: http://www.mha.com.br\n\nAv Maia Coelho Aguiar, 215 Bloco D 2 Andar\nCentro Empresarial de Sao Paulo\nSao Paulo - BRASIL - 05805 000\n-------------------------------------------------------------------\n\n",
"msg_date": "Tue, 21 Jul 1998 17:57:49 -0200",
"msg_from": "Roberto Joao Lopes Garcia <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re: [HACKERS] [Fwd: SGVLLUG Oracle and Informix\n\ton Linux]"
},
{
"msg_contents": "[email protected] (Bruce Momjian) writes:\n\n| Consider what we are doing. Commercial database vendors have teams of\n| full-time programmers, adding features to their databases, while we have a\n| volunteer group of part-time developers.\n\nOh! I'd never *dream* of maligning the coders working on PostgreSQL. For a\nvolunteer grass-roots effort, PostgreSQL is a paragon of virtue---one of the\nreasons I like it. And writing complex database packages of this sort isn't\nexactly chimp-stuff, either---I think any of us would vouch for that.\n\nUltimately, the crux of the matter is this: who are we *targeting* as our\ncompetition? If we're looking at the mSQL and mySQL camp, clearly PostgreSQL\nstomps them both, from both the SQL support side and the data-security side.\n(And yes, I'd agree that the code is *ever* so much neater than MySQL.)\n\nBut if we're trying to position ourselves as a viable alternative to the big\ncommercial ones, such as Oracle and Informix and Sybase and MS SQL Server, we\nneed to work on a lot of issues. Open source is perceived in the business\ncommunity as a big risk, and not a benefit. Even today, someone said to me,\n\"Oh, that's all we need, some Linux guru spending three or four hours on\ncompiling a new kernel rather than attending to his actual duties.\" (Yes, I'll\nbe the first to admit that it was a stupid statement, but as a consultant, I\ncan't just say, \"What a stupid statement.\" It takes time to win over people\nlike this; you have to throw a product at them that makes them go, \"Geez, that\nwas cool, and it saved us a lot of time and money.\")\n\n| Fortunately, we have many features they don't have, which we inherited from\n| Berkeley.\n\nYes. But at the moment, they have a bunch of *fundamental* features that we\ndon't have. 
That's what worries me as far as general acceptance of PostgreSQL\nby the business community.\n\n| I have made it a personal project of mine to make it clear, so other people\n| can understand it and hence contribute.\n\nA lot more could be done. More comments. Breaking out individual datatypes\ninto their own modules (ready-made templates for new types that require\nimplementation in C!). But to your (and others') credit, it's gotten quite a\nbit cleaner just in the last year.\n\n| We clearly are the most advanced \"open source\" database around. We now\n| have \"closed source\" competition. How do we meet that challenge?\n\nIf we can clear up some of the glaring lackings in PostgreSQL by year-end, I\nthink it'll've been met pretty well.\n\n",
"msg_date": "Tue, 21 Jul 1998 14:03:00 -0700 (PDT)",
"msg_from": "Ken McGlothlen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re: [HACKERS] [Fwd: SGVLLUG Oracle and Informix on\n\tLinux]"
},
{
"msg_contents": "> \n> Oracle now comes along and says that it is going to have a\n> Linux-binary distribution available. So? How much is that binary going\n> to cost? And what sort of licensing is provided?\n-- \n\nWhat version of Linux? What Platform ? Full featured? \n\nDon't kid yourselves about Oracle. Take it from someone who participates\non a Linux Mailing list also: There are countless versions of Linux out\nthere, running on every platform ever invented. Oracle would have to\nrelease source code ( ha ha) to be a true linux port. I run LinuxPPC on\na power mac, and if they port to this then I will eat a huge plate of\ncrow. \n\n\n\n-----------------------------------------------------------------\n|John Dzilvelis |\n-----------------------------------------------------------------\n",
"msg_date": "Wed, 22 Jul 1998 09:34:40 +0000",
"msg_from": "JohnDz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re: [HACKERS] [Fwd: SGVLLUG Oracle and Informix on\n\tLinux]"
},
{
"msg_contents": "On Tue, 21 Jul 1998, Bruce Tong wrote:\n\n> I say this with these (mostly uninformed) assumptions in mind. Oracle's\n> ODBC driver is probably more complete. Oracle is better documented. Oracle\n> has a lot of related tools. Oracle offers training.\n\n\tWhat does Oracle's ODBC driver offer that ours currently doesn't?\n\tHave you looked at recent documentation? It has changed\n\t\tdramatically over the past couple of months...\n\tWhat do you mean by \"related tools\"?\n\tTraining in...administration? We run it at my \"real job\", and\n\t\tOracle *has* to offer training for administration...it's a \n\t\tnightmare.\n\n> > \tfeatures\n> \n> As many posts I see to this list are \"how do I do this\" - \"not\n> implemented, wait for a later version\", I'm not sure why you would make\n> this claim. Again, I'm not a person who spends a great deal of time on\n> databases and I do consider myself uninformed.\n\n\tfeatures != ANSI SQL compliance, right? Again, what are we\nmissing that Oracle currently has...?\n\n> > \tsupport\n> \n> The mailing lists are nice. I appreciate them very much. There's probably\n> a mailing list for Oracle. What more is there for support?\n\n\tMy experience with paid support vs mailing lists tends to have me\nmuch preferring mailing lists. At least on a mailing list, you have a\ngood chance of finding someone that has already hit that same problem...\n\n",
"msg_date": "Wed, 22 Jul 1998 08:22:37 -0400 (EDT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re: [HACKERS] [Fwd: SGVLLUG Oracle and Informix on\n\tLinux]"
},
{
"msg_contents": "On Tue, 21 Jul 1998, Ken McGlothlen wrote:\n\n> There are a wide array of other issues, too; the simplistic security,\n> view limitations, administrational problems (eventually, for example,\n> vacuum should be unnecessary), analysis issues, replication issues,\n> cross-server database issues, index limitations, the lack of a good\n> front end designer, the lack of a good report designer, locking issues,\n> and so on. \n\n\tAlot of good points here, and some not so good...last I checked,\nvacuum was still required for Oracle, no? Its been awhile since I've\nlooked at it from a DBA perspective, so this may no longer be the case...\n\n\tAs for 'front end and report designers'...there are several of\nthem out there currently, most, from what I've seen, *look* good:\n\n\tMPSQL: http://troubador.com/~keidav/images/screenshots/sot.jpg\n\tMPMGR: http://troubador.com/~keidav/mpmgr.html\n\t\t- if nobody has checked out the screenshots on this, \n\t\t check it out\n\tEARPII: http://www.oswego.edu/~ddougher/EARP2\n\tPGAccess: http://www.flex.ro/pgaccess\n\t\t- does Forms, Reports and Scripts\n\tPGAdmin: http://www.vale-housing.co.uk/it/software\n\t\t- no screenshots, unfortunately :(\n\tGtkSQL: http://www.mygale.org/~bbrox/GtkSQL\n\tKPGsql: http://home.primus.baynet.de/mgeisler/kpgsql\n\t\t- KDE frontend\n\n\tIf there are features within those that you feel are missing, talk\nto the authors, offer to help...\n\n\tWhat I'd like to see, though, is a detailed version of your list\nabove. For instance, what locking issues? Low-level locking that Vadim\nis working on for v6.4? What analysis issues? If we could get the list\nabove with explanations of each, then Bruce can add them to the TODO list.\nWithout explanations, some, if not all, will sit there forever since\nnobody will understand *what* is being asked :)\n\n\tSome of them might be small, no brainer additions that nobody\nthought about...*shrug*\n\n\n",
"msg_date": "Wed, 22 Jul 1998 08:47:27 -0400 (EDT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re: [HACKERS] [Fwd: SGVLLUG Oracle and Informix on\n\tLinux]"
},
{
"msg_contents": "On Tue, 21 Jul 1998, Bruce Momjian wrote:\n\n> We clearly are the most advanced \"open source\" database around. We now\n> have \"closed source\" competition. How do we meet that challenge?\n\n\tYou want an honest answer? We don't. Or, at least, we don't\nthink of it as meeting a challenge.\n\n\tWe've spent the past, what, 2 years now, building PostgreSQL up to\nsomething that we (the developers) are proud to work with and support, and\nare confident in both using, and promoting for use, in real, production\nenvironments.\n\n\tOracle now comes along and says that it is going to have a\nLinux-binary distribution available. So? How much is that binary going\nto cost? And what sort of licensing is provided?\n\n\tHow many ppl are going to flock to Oracle because all of a sudden\nthey have a Linux port of it? I just checked their list of 'supported\nplatforms', and here at the University, we run almost a half a dozen of\nthem (Win95, WinNT, Solaris x86, Sparc/Solaris, Netware)...its not as if I\ndon't have a machine that I can pay the same price for Oracle and run it\non them...\n\n\tContinue our trend...continue listening to the ppl asking for\nvarious \"reasonable\" features and working towards providing them. I\nsupport free/open software because, IMHO, the software is generally better\nwritten, and more featured, because those that are developing it are doing\nso because they *enjoy* what they are doing, they have a passion for\nit...not because some large company is paying them to do it.\n\n\tIMHO, the most important thing that is happening right now is\nVadim's work at getting LLL in place for v6.4. To me, that is as\nimportant, if not more so, in a 'multi-user, concurrent' system as\ntransactions are, as on a multi-user system, it would be a performance\nincrease due to less ppl having to wait to make changes...\n\n\tI would like to see Ken's list of missing items expanded with\nexplanations and added to the TODO list, as appropriate, since I think he\nbrought up alot of good points, but I think that \"panicking\" because\nOracle has announced an upcoming release of a Linux binary is\ncounter-productive...\n\n\n\n\n",
"msg_date": "Wed, 22 Jul 1998 09:08:50 -0400 (EDT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re: [HACKERS] [Fwd: SGVLLUG Oracle and Informix on\n\tLinux]"
},
{
"msg_contents": "My comments are driven by perceptions. I admit they're uninformed. The\ntopic is advertising PostgreSQL, so my perceptions are relevant. Educate\nme and the masses about your product. I'm here because I think PostgreSQL\nis a useful tool.\n\n> > I say this with these (mostly uninformed) assumptions in mind. Oracle's\n> > ODBC driver is probably more complete. Oracle is better documented. Oracle\n> > has a lot of related tools. Oracle offers training.\n> \n> What does Oracle's ODBC driver offer that ours currently doesn't?\n\nI just tried it for the first time last week. It failed to perform a \nsimple query. I need to double check my work yet. The Oracle ODBC driver\nhas _probably_ been around for a while and has _probably_ been better\ntested perhaps simply by raw numbers of users.\n\n> Have you looked at recent documentation? It has changed\n> dramatically over the past couple of months...\n\nI like to think I check your docs regularly, but I'm sure there's stuff I\nmiss. From my experience documentation is examples, HOWTO's, web sites, \nand man pages which are all good approaches. The trouble is there is no\nplace which coordinates this. Searches tend to be a brute force effort for\nme because I do not yet understand how the material is organized. I'm sure\nif you've been around PostgreSQL for a couple of years you know the sorts\nof things to expect to find in the man pages. To me, I never would have\nthought to search the man pages for GRANT and REVOKE, or any SQL for that\nmatter.\n\nIn fact, documentation is probably the only place I can help your\ndevelopment effort at this time since I cannot see the big picture. Hence,\nthe journal I'm keeping could be turned into a tutorial, which I suppose\nis actually my goal.\n\n> What do you mean by \"related tools\"?\n\nGood question. What is Oracle Power Objects? What is Oracle/2000? I see\nthese things advertised. What do they do, and is an equivalent available\nfor PostgreSQL assuming it is a relevant product?\n\n> Training in...administration? We run it at my \"real job\", and\n> Oracle *has* to offer training for administration...its a \n> nightmare.\n\nAdministration, yes.\n\n> > > \tfeatures\n> > \n> > As many posts I see to this list are \"how do I do this\" - \"not\n> > implemented, wait for a later version\", I'm not sure why you would make\n> > this claim. Again, I'm not a person who spends a great deal of time on\n> > databases and I do consider myself uninformed.\n> \n> features != ANSI SQL compliance, right?\n\nI suppose ANSI SQL is the heart of it.\n\n> Again, what are we missing that Oracle currently has...?\n\nIf you offer the same features, then list those features in a comparison\non your web site. Take a \"See... we do everything Oracle does.\"\n\n> > > \tsupport\n> > \n> > The mailing lists are nice. I appreciate them very much. There's probably\n> > a mailing list for Oracle. What more is there for support?\n> \n> \tMy experience with paid support vs mailing lists tends to have me\n> much preferring mailing lists. At least on a mailing list, you have a\n> good chance of finding someone that has already hit that same problem.\n\nThat's my experience too. Notice I didn't mention paid support. My point\nhere is if there's a list for Oracle, then you are the same in this\ncategory.\n\n\nBruce Tong | Got me an office; I'm there late at night.\nSystems Programmer | Just send me e-mail, maybe I'll write.\nElectronic Vision / FITNE | \[email protected] | -- Joe Walsh for the 21st Century\n\n",
"msg_date": "Wed, 22 Jul 1998 09:56:30 -0400 (EDT)",
"msg_from": "Bruce Tong <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re: [HACKERS] [Fwd: SGVLLUG Oracle and Informix on\n\tLinux]"
},
{
"msg_contents": "On Wed, 22 Jul 1998, Bruce Tong wrote:\n\n> My comments are driven by perceptions. I admit they're uninformed. The\n> topic is advertising PostgreSQL, so my perceptions are relevant. Educate\n> me and the masses about your product. I'm here because I think PostgreSQL\n> is a useful tool.\n\n\tPerceptions from the 'admittedly uninformed' help...:)\n\n> > > I say this with these (mostly uninformed) assumptions in mind. Oracle's\n> > > ODBC driver is probably more complete. Oracle is better documented. Oracle\n> > > has a lot of related tools. Oracle offers training.\n> > \n> > What does Oracle's ODBC driver offer that ours currently doesn't?\n> \n> I just tried it for the first time last week. It failed to perform a \n> simple query. I need to double check my work yet. The Oracle ODBC driver\n> has _probably_ been around for a while and has _probably_ been better\n> tested perhaps simply by raw numbers of users.\n\n\tHave you mentioned this on [email protected]? David\nand Byron are both very vocal over there, and are quick to pop up to help\nthose using the ODBC drivers, as they are the ones that are developing it.\n\n> > Have you looked at recent documentation? It has changed\n> > dramatically over the past couple of months...\n> \n> I like to think I check your docs regularly, but I'm sure there's stuff I\n> miss. From my experience documentation is examples, HOWTO's, web sites, \n> and man pages which are all good approaches. The trouble is there is no\n> place which coordinates this. Searches tend to be a brute force effort for\n> me because I do not yet understand how the material is organized. I'm sure\n> if you've been around PostgreSQL for a couple of years you know the sorts\n> of things to expect to find in the man pages. 
To me, I never would have\n> thought to search the man pages for GRANT and REVOKE, or any SQL for that\n> matter.\n> \n> In fact, documentation is probably the only place I can help your\n> development effort at this time since I cannot see the big picture. Hence,\n> the journal I'm keeping could be turned into a tutorial, which I suppose\n> is actually my goal.\n\n\tAny comments, opinions or suggested changes are welcome...are you\non the pgsql-docs mailing list? \n\n\tAs for your perception of the documentation, have you checked out:\n\n\t\thttp://www.postgresql.org/docs\n\n\trecently? I've recently done a major cleanup of it so that the\nlinks there are presented a little more clearly, but there are 5\nguide/manuals listed right at the top that you might find slightly more\ninformative than those docs you list above...\n\n> > What do you mean by \"related tools\"?\n> \n> Good question. What is Oracle Power Objects? What is Oracle/2000? I see\n> these things advertised. What do they do, and is an equivalent available\n> for PostgreSQL assuming it is a relevant product?\n\n\tI don't know, can't help you there...I don't use Oracle, so\nsomeone with experience in that area will have to pop up and help :)\n\n> > Training in...administration? We run it at my \"real job\", and\n> > Oracle *has* to offer training for administration...its a \n> > nightmare.\n> \n> Administration, yes.\n\n\tSo far, my experience with PostgreSQL has been that\n'administrative functions' tend to be few, but there is an Administrator's\nGuide that documents, I think, most of what you need to know. \n\n\tMy opinion, though, tends to be that I learn more from a book,\nthan from other ppl, except for clarification of what I've read...\n\n> > Again, what are we missing that Oracle currently has...?\n> \n> If you offer the same features, then list those features in a comparison\n> on your web site. Take a \"See... 
we do everything Oracle does.\n\n\tHasn't been updated in awhile, but see:\n\n\thttp://www.postgresql.org/comp-comparison.shtml\n\n> That's my experience too. Notice I didn't mention paid support. My point\n> here is if there's a list for Oracle, then you are the same in this\n> category.\n\n\tThat depends...we are only the same if you get similar support\nthrough the Oracle list as you do here...\n\n\n",
"msg_date": "Wed, 22 Jul 1998 10:08:26 -0400 (EDT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re: [HACKERS] [Fwd: SGVLLUG Oracle and Informix on\n\tLinux]"
},
{
"msg_contents": "> \tMPSQL: http://troubador.com/~keidav/images/screenshots/sot.jpg\n> \tMPMGR: http://troubador.com/~keidav/mpmgr.html\n> \t\t- if nobody has checked out the screenshots on this, \n> \t\t check it out\n\nThis one is looking *sooo* cool. Anybody knows of a good toolkit the \nauthor can switch to? (he asks for suggestions on the page above). I think\nup till now it was motif based? Is lesstif already up to this kind of \nwork? Is it easier to switch from motif to gtk than to switch to qt?\n\n> \tEARPII: http://www.oswego.edu/~ddougher/EARP2\n> \tPGAccess: http://www.flex.ro/pgaccess\n> \t\t- does Forms, Reports and Scripts\n> \tPGAdmin: http://www.vale-housing.co.uk/it/software\n> \t\t- no screenshots, unfortunately :(\n> \tGtkSQL: http://www.mygale.org/~bbrox/GtkSQL\n> \tKPGsql: http://home.primus.baynet.de/mgeisler/kpgsql\n> \t\t- KDE frontend\n\nMaarten\n_____________________________________________________________________________\n| TU Delft, The Netherlands, Faculty of Information Technology and Systems |\n| Department of Electrical Engineering |\n| Computer Architecture and Digital Technique section |\n| [email protected] |\n-----------------------------------------------------------------------------\n\n",
"msg_date": "Wed, 22 Jul 1998 16:14:40 +0200 (MET DST)",
"msg_from": "Maarten Boekhold <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re: [HACKERS] [Fwd: SGVLLUG Oracle and Informix on\n\tLinux]"
},
{
"msg_contents": "\n> > My experience with paid support vs mailings lists tends to have me\n> > much preferring mailing lists. At least on a mailing list, you have a\n> > good chance of finding someone that has already hit that same problem.\n>\n>\n\nActually, I tend to end up supporting the product for which I am trying to be\nsupported...not that that is bad ;-)\n\nActually, most of my problems are answered before they happen, because I am\nconstantly monitoring the list.\n\nAs far as documentation goes, I think that for the most part what is there is\ngood. Sometimes (and I realize I need to be more specific) it seems the very\nthing you are looking for you can't find; in the end that generally has been an\nissue of inexperience with SQL. It seems to me, though, that there needs to be\nsome sort of documentation that takes a beginner right through the whole system\nstep by step and never leaving out the gory details, explaining things piece by\npiece, until at the end of this the user has become an \"expert\". Again, I need to\nbe more specific, and as I mull over this I might be able to be that, but now, I\ndon't see documentation that is really designed to take someone who doesn't know\nsquat about SQL and get them to the point where they are \"experts\". Perhaps that\nis not PostgreSQL's problem, but it would be nice.\n\nOf course, if you write it, it doesn't mean they will read it ;-)\n\n...james\n\n\n",
"msg_date": "Wed, 22 Jul 1998 10:39:58 -0400",
"msg_from": "James Olin Oden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re: [HACKERS] [Fwd: SGVLLUG Oracle and Informix on\n\tLinux]"
},
{
"msg_contents": "> > > What does Oracle's ODBC driver offer that ours currently doesn't?\n> > \n> > I just tried it for the first time last week. It failed to perform a \n> > simple query. I need to double check my work yet. The Oracle ODBC driver\n> > has _probably_ been around for a while and has _probably_ been better\n> > tested perhaps simply by raw numbers of users.\n> \n> \tHave you mentioned this on [email protected]? David\n> and Byron are both very vocal over there, and are quick to pop up to help\n> those using the ODBC drivers, as they are the ones that are developing it.\n\nNope. I wanted to check my work first. It's my first attempt at using the\nODBC driver and MS-Access has changed (for the worse interface-wise) a lot\nsince v1.1. It may even have something to do with the way I've declared\nthe tables on the PostgreSQL side.\n\n[ Documentation ]\n\n> Any comments, opinions or suggested changes are welcome...are you\n> on the pgsql-docs mailing list?\n\nNo, but I will be shortly.\n\n> As for your perception of the documentation, have you [recently] checked\n> out:\n> \n> http://www.postgresql.org/docs\n\nIt's been a few weeks. I'll look again.\n\n> \tMy opinion, though, tends to be that I learn more from a book,\n> than from other ppl, except for clarification of what I've read...\n\nI too learn a lot from books. But on new subjects, a short class covering\nthe idea behind the technology really helps. A little theory goes a long\nway. I can figure out the \"How\" if I know the \"Why.\"\n\n\nBruce Tong | Got me an office; I'm there late at night.\nSystems Programmer | Just send me e-mail, maybe I'll write.\nElectronic Vision / FITNE | \[email protected] | -- Joe Walsh for the 21st Century\n\n\n",
"msg_date": "Wed, 22 Jul 1998 10:54:35 -0400 (EDT)",
"msg_from": "Bruce Tong <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re: [HACKERS] [Fwd: SGVLLUG Oracle and Informix on\n\tLinux]"
},
{
"msg_contents": "> Sometimes (and I realize I need to be more specific) it seems the very\n> thing you are looking for you can't find; in the end that generally has\n> been an issue of inexperience with SQL.\n\nExactly. I'm learning SQL and PostgreSQL at the same time and it is\nsometimes difficult for me to correctly assess what belongs with each. My\nrecent GRANT/REVOKE question was like this. I didn't think for a minute\nthat would be handled by SQL since databases were created and destroyed by\nPostgreSQL utilities.\n\n> It seems to me, though, that there needs to be some sort of \n> documentation that takes a beginner right through the whole system step\n> by step and never leaving out the gory details, explaining things piece\n> by piece, until at the end of this the user has become an \"expert\".\n\nYes! I'm the lone PostgreSQL user here. In fact, I'm the only person\nplaying with a database. I'm the (completely unqualified) \"expert\" in the\nbuilding.\n\n> Again, I need to be more specific, and as I mull over this I might be\n> able to be that, but now, I don't see documentation that is really \n> designed to take someone who doesn't know squat about SQL and get them\n> to the point where they are \"experts\". Perhaps that is not PostgreSQL's\n> problem, but it would be nice.\n> \n> Of course, if you write it, it doesn't mean they will read it ;-)\n\nTutorials get read. References get read if they're organized well enough\nwhere the answer is found within a minute or two. Somebody who doesn't\nknow the proper term still has to be able to find the answer. That's\ntricky, but those are the references which all people admire. Cross\nreferencing is essential. One of my favorite parts of the man pages is the\n\"See Also\" section. 
This is because I usually get in the ball park on the\nfirst try, but not exactly to the right page.\n\n\nBruce Tong | Got me an office; I'm there late at night.\nSystems Programmer | Just send me e-mail, maybe I'll write.\nElectronic Vision / FITNE | \[email protected] | -- Joe Walsh for the 21st Century\n\n",
"msg_date": "Wed, 22 Jul 1998 11:11:18 -0400 (EDT)",
"msg_from": "Bruce Tong <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re: [HACKERS] [Fwd: SGVLLUG Oracle and Informix on\n\tLinux]"
},
{
"msg_contents": "The Hermit Hacker wrote:\n> \n> On Wed, 22 Jul 1998, Bruce Tong wrote:\n> \n> > My comments are driven by perceptions. I admit they're uninformed. The\n> > topic is advertising PostgreSQL, so my perceptions are relevant. Educate\n> > me and the masses about your product. I'm here because I think PostgreSQL\n> > is a useful tool.\n> \n> Perceptions from the 'admittedly uninformed' help...:)\n> \n> > > > I say this with these (mostly uninformed) assumptions in mind. Oracle's\n> > > > ODBC driver is probably more complete. Oracle is better documented. Oracle\n> > > > has a lot of related tools. Oracle offers training.\n> > >\n> > > What does Oracle's ODBC driver offer that ours currently doesn't?\n> >\n> > I just tried it for the first time last week. It failed to perform a\n> > simple query. I need to double check my work yet. The Oracle ODBC driver\n> > has _probably_ been around for a while and has _probably_ been better\n> > tested perhaps simply by raw numbers of users.\n> \n\nI bet he has the old PostODBC driver -OR- there is a configuration\nissue. \nIf the version of the driver he has begins with dot (like .21 or .30),\nit's ancient.\nThe latest version of the odbc driver at\nhttp://www.insightdist.com/psqlodbc is 6.30.0247.\n\nAs long as we are mentioning the PostODBC, there is still a link under\nthe \n\"INFORMATION CENTRAL\"-->\"HOW TO\"-->\"INTERFACE DRIVERS FOR\nPostgreSQL\"-->ODBC Drivers for PostgreSQL.\n\nThis link takes you to \"sunsite.unc.edu\" which has outdated information\non it.\nThe ancient ODBC Drivers listed on this site are :\n\nstud1.tuwien.ac.at/~e9025461\nwww.MageNet.com/postodbc/DOC\n\nIs there any way to correct this information and put the insight link on\nthere? \n\nFor that matter, is there any way to point people who go to MageNet to\nthe right address, like a forward page? 
This is important because Web\nsearch engines (yahoo, alta-vista) will still send you to MageNet if you\nsearch on odbc and postgres.\n\nI wonder how many people go try the MageNet odbc driver and just give up\nand try another dbms without even sending any mail to the lists?\n\nPlease, I would really appreciate a response.\n\nByron\n",
"msg_date": "Wed, 22 Jul 1998 11:12:25 -0400",
"msg_from": "Byron Nikolaidis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re: [HACKERS] [Fwd: SGVLLUG Oracle and Informix on\n\tLinux]"
},
{
"msg_contents": "On Wed, 22 Jul 1998, James Olin Oden wrote:\n> As far as documentation goes, I think that for the most part what is there is\n> good. Sometimes (and I realize I need to be more specific) it seems the very\n> thing you are looking for you can't find; in the end that generally has been an\n> issue of inexperience with SQL.\n\nIt seems to me that the documentation assumes some knowledge of SQL.\nI don't know if this was intended or not, but if a new user DOESN'T\nknow anything about SQL, they are not going to learn it from the\nPostgreSQL manuals. Here are some basic examples:\n\nThe small section in the User's Manual on the SELECT command is\nextremely short and neither explains nor gives examples for the many\nbasic things you can do with SELECT. So, for instance, from the\ndocumentation, a new user will learn that he can select whatever\nfields he wants from a table and tell it to select only those\nrecords (tuples) which meet an exact criteria (suchandsuch <\n'soandso' AND blahblah = 'blah'). But let's say that he wants to\nselect NOT all records that contain only 'blah' in the blahblah\nfield, but rather, all records that have 'blah' ANYWHERE WITHIN the\nblahblah field? Nowhere in the PostgreSQL documentation (that I\ncould find) will he be told that he can do \"blahblah LIKE '%blah%'\". \n\nSo now let's say he doesn't want it to be case sensitive. Nowhere\nthat I could find do the manuals tell him that he can do \"blahblah\n~* 'blah'\". In fact, I didn't know that ~* even existed until\nsomeone on the list suggested I do a \"\\do\" in psql to get a list of\nall the operators. Do you see how it seems like that information is\nhidden down in an obscure help command in one program rather than\nbeing right there in the User's Manual? What the User's Manual needs\nis a nice long detailed description WITH A LOT OF EXAMPLES of the\nSELECT command. 
Instead it seems to just mention it in passing.\n\nNow the man pages suffer the same problem that the entire man page\nsystem suffers: it pretends to be an online representation of a\nprinted set of manuals, but it is missing one major feature of\nprinted manuals: A TABLE OF CONTENTS! Some of the man page info IS\nin the HTML docs, but I think EVERYTHING in the man pages should be\nin the HTML manuals, (possibly better organized than man pages\nallow).\n\nMost of the sections in the manuals are simply too brief. Consider\nthe section in the Tutorial on Redirecting SELECT Queries. It\nexplains the idea as quickly as possible, gives ONE example, and is\ndone. This doesn't help new users much.\n\nI think you see my point. If I knew more about PostgreSQL and SQL in\ngeneral, I'd offer to write some, but I'm just in the learning\nprocess now.\n\n Cheers.\n --Dan D.\n\n-----------------------------------------------------------------------\n Daniel G. Delaney The Louisville Times Chorus\n [email protected] www.LouisvilleTimes.org\n www.Dionysia.org/~dionysos/ Dionysia Design\n ICQ Number: 8171285 www.Dionysia.com/design/\n-----------------------------------------------------------------------\n I doubt, therefore I might be.\n\n",
"msg_date": "Wed, 22 Jul 1998 11:35:54 -0400 (EDT)",
"msg_from": "Dan Delaney <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re: [HACKERS] [Fwd: SGVLLUG Oracle and Informix on\n\tLinux]"
},
{
"msg_contents": "> On Wed, 22 Jul 1998, James Olin Oden wrote:\n> > As far as documentation goes, I think that for the most part what is there is\n> > good. Sometimes (and I realize I need to be more specific) it seems the very\n> > thing you are looking for you can't find; in the end that generally has been an\n> > issue of inexperience with SQL.\n> \n> It seems to me that the documentation assumes some knowledge of SQL.\n> I don't know if this was intended or not, but if a new user DOESN'T\n> know anything about SQL, they are not going to learn it from the\n> PostgreSQL manuals. Here are some basic examples:\n> \n> The small section in the User's Manual on the SELECT command is\n> extremely short and neither explains nor gives examples for the many\n> basic things you can do with SELECT. So, for instance, from the\n> documentation, a new user will learn that he can select whatever\n> fields he wants from a table and tell it to select only those\n> records (tuples) which meet an exact criteria (suchandsuch <\n> 'soandso' AND blahblah = 'blah'). But let's say that he wants to\n> select NOT all records that contain only 'blah' in the blahblah\n> field, but rather, all records that have 'blah' ANYWHERE WITHIN the\n> blahblah field? No where in the PostgreSQL documentation (that I\n> could find) will he be told that he can do \"blahblah LIKE '%blah%'\". \n> \n> So now let's say he doesn't want it to be case sensative. Nowhere\n> that I could find do the manuals tell him that he can do \"blahblah\n> ~* 'blah'\". In fact, I didn't know that ~* even existed until\n> someone on the list suggested I do a \"\\do\" in psql to get a list of\n> all the operators. Do you see how it seems like that information is\n> hidden down in an obscure help command in one program rather than\n> being right there in the User's Manual? What the User's Manual needs\n> is a nice long detailed description WITH A LOT OF EXAMPLES of the\n> SELECT command. 
Instead it seems to just mention it in passing.\n> \n> Now the man pages suffer the same problem that the entire man page\n> system suffers: it pretends to be an online representation of a\n> printed set of manuals, but it is missing one major feature of\n> printed manuals: A TABLE OF CONTENTS! Some of the man page info IS\n> in the HTML docs, but I think EVERYTHING in the man pages should be\n> in the HTML manuals, (possibly better organized than man pages\n> allow).\n> \n> Most of the sections in the manuals are simply too brief. Consider\n> the section in the Tutorial on Redirecting SELECT Queries. It\n> explains the idea as quickly as possible, gives ONE example, and is\n> done. This doesn't help new users much.\n> \n> I think you see my point. If I knew more about PostgreSQL and SQL in\n> general, I'd offer to write some, but I'm just in the learning\n> process now.\n> \n\nGood points. The only comment I have is that the FAQ does now point to\nseveral SQL tutorials on the web, and the psql \\d commands are mentioned\nas ways to find information about the system. We don't list them in the\nmanual because they are always changing, because we are a\nuser-extensible system. Every release has new types, so we just tell\npeople to use the \\d commands. I just improved them for 6.4 so the\noutput is clearer.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Wed, 22 Jul 1998 11:58:51 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re: [HACKERS] [Fwd: SGVLLUG Oracle and Informix on\n\tLinux]"
},
{
"msg_contents": "On Wed, 22 Jul 1998, The Hermit Hacker wrote:\n\n> \tOracle now comes along and says that it is going to have a\n> Linux-binary distribution available. So? How much is that binary going\n> to cost? And what sort of licensing is provided?\n\nI think PostgreSQL will continue on as much as before, just as Linux is \ncontinuing to put some competition to NT, because of its low cost and \nflexibility. Certainly many people will flock to Oracle, perhaps by \ncorporate pressure, perhaps for the support or interoperability with \nother Oracle servers. But it's not going to kill off PostgreSQL.\n\nI'm happy that Oracle is being ported to Linux. I'll probably never use \nOracle on Linux, but I think it will help get Linux wider recognition in \nthe enterprise environment.\n\nAnd, how many 'supported platforms' of Oracle also support PostgreSQL? \nPostgreSQL isn't a Linux only server.\n\n> \tContinue our trend...continuing listening to the ppl asking for\n> various \"reasonable\" features and working towards providing them. I\n> support free/open software because, IMHO, the software is generally better\n> written, and more featured, because those that are developing it are doing\n> so because they *enjoy* what they are doing, they have a passion for\n> it...not because some large company is paying them to do it.\n\nA good point.\n\nBrett W. McCoy \n http://www.lan2wan.com/~bmccoy\n-----------------------------------------------------------------------\n\"The Number of UNIX installations has grown to 10, with more expected.\"\n -- The UNIX Programmer's Manual, 2nd Edition, June, 1972\n\n",
"msg_date": "Wed, 22 Jul 1998 12:00:28 -0400 (EDT)",
"msg_from": "\"Brett W. McCoy\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re: [HACKERS] [Fwd: SGVLLUG Oracle and Informix on\n\tLinux]"
},
{
"msg_contents": "> > > > What does Oracle's ODBC driver offer that ours currently doesn't?\n> > >\n> > > I just tried it for the first time last week. It failed to perform a\n> > > simple query. I need to double check my work yet. The Oracle ODBC driver\n> > > has _probably_ been around for a while and has _probably_ been better\n> > > tested perhaps simply by raw numbers of users.\n> \n> I bet he has the old PostODBC driver -OR- there is a configuration\n> issue. If the version of the driver he has begins with dot (like .21\n> or .30), its ancient.\n\n> The latest version of the odbc driver at\n> http://www.insightdist.com/psqlodbc is 6.30.0247.\n\nWhile I was initially fooled by the old driver months ago, I have\nv06-30-0247 according to the installer, postdrv.exe. I got this from the\nweb site you quoted.\n\nI'm fairly certain I can recreate the circumstances, and I'll try to do so\ntoday. I assume the \"Interfaces\" list is the place for this discussion.\n\n\nBruce Tong | Got me an office; I'm there late at night.\nSystems Programmer | Just send me e-mail, maybe I'll write.\nElectronic Vision / FITNE | \[email protected] | -- Joe Walsh for the 21st Century\n\n\n",
"msg_date": "Wed, 22 Jul 1998 12:23:54 -0400 (EDT)",
"msg_from": "Bruce Tong <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re: [HACKERS] [Fwd: SGVLLUG Oracle and Informix on\n\tLinux]"
},
{
"msg_contents": "[email protected] (The Hermit Hacker) writes:\n\n| Alot of good points here, and some not so good...last I checked, vacuum was\n| still required for Oracle, no?\n\nDoes Oracle even have a vacuum? There's the COALESCE command, but it's hardly\n*necessary*.\n\n| As for 'front end and report designers'...there are several of them out there\n| currently, most, from what I've seen, *look* good.\n\nA lot of them \"look good\" at first glance. The problem seems to be that the\nimplementations tend to be spotty and incomplete amongst the packages I've\nlooked at. None of them are robust or complete enough for most commercial use.\n\n| If there are features within those that you feel are missing, talk to the\n| authors, offer to help...\n\nI'm only speaking from one viewpoint: is the product something I can recommend\nfor commercial use to my customers in the same breath as Oracle or Informix?\nWould *I* use it, personally? Of course; I like it, and don't mind getting my\nhands dirty. But most companies would balk. They aren't balking at Linux or\nFreeBSD, nor are they balking at Apache, so it's not just an avoidance of\nopen-source software. They *would* balk at the lack of features, in spite of\nPostgreSQL's cool stuff, and they'd also balk at the lack of facilities, and\nthey'll *really* balk on the stability issues.\n\n| What I'd like to see, though, is a detailed version of your list above. For\n| instance, what locking issues? Low-level locking that Vadim is working on\n| for v6.4?\n\nI'm not clear on the details of what Vadim is working on, but if it's page- or\nrow-level locking, that'd be it. However, it's hard to responsibly recommend\nsomething that hasn't been released yet. (Hasn't stopped Microsoft, but I try\nto be a bit more ethical than they are. :)\n\n| What analysis issues? If we could get the list above with explanations of\n| each, then Bruce can add them to the TODO list. 
Without explanations, some,\n| if not all, will sit there forever since nobody will understand *what* is\n| being asked :)\n\nConsider my wrist slapped. :)\n\nOne thing I think that would psychologically help is to quit comparing\nPostgreSQL with mSQL and MySQL. The m*twins are cute, toy databases, and I\nsuspect that the general perception is that PostgreSQL is already more serious\nthan either one of those. So enough with those comparisons. Let's start\nthinking about comparing PostgreSQL with its *real* competition: Oracle,\nSybase, SQL Server, Informix, and others.\n\n(Horrors! you say. \"They're commercial products, how can we compete?\" Apache\nstill has more than 50% of the web market, Linux and FreeBSD are serious\ncompetitors to Solaris and HPs. So we don't have millions of dollars for\nmarketing. So we don't have hundreds of developers to throw at a project. We\nhave something *else* they don't have: a bunch of middle-management\nbusiness-as-usual MBA-drones.)\n\nSo. Let's talk features. (Hey! www.postgresql.org is reporting \"Document\ncontains no data.\" How am I supposed to pull up the TODO list like this?)\n\nWell, I'm gonna be guessing here, so please pardon me.\n\nReliability: You don't need me to point out that a lot of work needs to be\ndone here. These issues are tough ones to counter. Why doesn't pg_dump\nactually preserve everything? (It's getting better, I know, but it's not there\nright now.) Why do you have to vacuum the database every night? Questions\nlike that are tough to answer to people's satisfaction, and that's without even\ngoing into things like memory leaks.\n\nCrucial basics: Views---they desperately need fixing up. Foreign keys,\nconstraints, and SQL-language triggers are critical as well. I think HAVING,\nOUTER and INTERSECTS are being worked on. Temporary tables---are those being\nworked on? 
Yes, I know, most of these are on the TODO list already, but their\ncurrent state of nonbeing is keenly felt, and hinders the cause quite a bit.\n\nThe draws: These are the things that should be distinguishing PostgreSQL from\nthe rest of the pack. The source code is a big draw, but it's still hard to\ngrok. A concerted effort should be going on to document the code itself.\nBreaking out built-in types into their own easy-to-locate files would also be\ngood, too; I had to work to find out how the box functions were defined, where\nit would have been better to have a built-in-types directory with a file in\nthere named box.c, for example, with the data representation and the function\nsource all neatly bundled---then it would be *easy* to use that as a template\nto come up with a different type. (Believe me, if datetime had had such a\nfile, coming up with the equivalent of strftime() for that would have been a\nwhole lot easier. As it is, I'm still trying to figure out how it's been\nimplemented with what time I have these days.) There's a lot of clarification\nthat could be done here as far as making it easy to add user-contributed stuff,\nwhich ultimately means that we can support more types---and that's a big draw.\n(Imagine a type called `earthpoint' consisting of latitude and longitude, and\narrange to have a bunch of the point operators work properly; you might have a\nnorthof function, and a westof function, and a distance function. Then you\nmight add `earthregion' which parallels the polygon type. So much for having\nto sell this product to cartographers. I'd love to create it, but right now, I\nwouldn't have a *clue* where to put it, or how to start. 
I might have the time\nto read the source tree once I reduce my project load to just two or three, but\nthat's not going to happen anytime soon.)\n\nWithout a lot of the crucial basics and reliability issues addressed,\nPostgreSQL is always going to be a big risk compared to Oracle et al, and\nbusinesses (especially IS managers) *hate* risk. Once those are taken care of,\nthe other features help sell the product, and we can start worry about things\nlike image and branding and a nice, polished corporate look and Kerberos\nsupport and other frippery like that. :)\n\n(Which reminds me. Is anyone interested in a rework of the PostgreSQL Program\nFlow diagram? My first rework is at\n\n\thttp://www.serv.net/~mcglk/postgresql.gif (30973 bytes)\n\thttp://www.serv.net/~mcglk/postgresql.jpg (41422 bytes)\n\n([Take your pick.] It's a little unclear, IMHO, so I came up with a second\ndraft at\n\n\thttp://www.serv.net/~mcglk/postgresql1.gif (56856 bytes)\n\thttp://www.serv.net/~mcglk/postgresql1.jpg (43292 bytes)\n\n(Use as you like, if you like.)\n\n\t\t\t\t\t\t\t---Ken\n",
"msg_date": "Wed, 22 Jul 1998 15:34:16 -0700 (PDT)",
"msg_from": "Ken McGlothlen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re: [HACKERS] [Fwd: SGVLLUG Oracle and Informix on\n\tLinux]"
},
{
"msg_contents": "\n\n\n>> Oracle now comes along and says that it is going to have a\n>> Linux-binary distribution available. So? How much is that binary\n>> going to cost? And what sort of licensing is provided?\n\nJohnDz> What version of Linux? What Platform ? Full featured?\n\nI was asking myself the same question.\n\n\n\nJohnDz> Don't kid yourselves about Oracle...\nJohnDz> There are countless versions of Linux out there, running on\nJohnDz> every platform ever invented.\n\nWithout starting a Linux-advocacy debate, I take exception to the\nabove statement :) I am under the impression that many of those Linux\nports are still rather experimental, and that there are still zillions\n(!) of architectures that /don't/ yet have a Linux port. Please\neducate me if I am wrong!\n\n\\begin{rant}\n\nOne of the reasons I work with NetBSD is that it has /stable/ ports to\nmore than a dozen different architectures (CPUs) and even more\nplatforms (types of machines)! I use two of them: i386 and MIPS/PMAX,\nwith a third in the running (waiting for some hardware): Mac68K.\n\nI realise that there are people running Linux on Intel-based palmtops,\nbut that sort of thing is also happening with both NetBSD and FreeBSD.\nFair enough, Linux 2.x might have had much of the i386-specific guts\nof Linux 1.x ripped out of it and replaced with more portable innards,\nbut NetBSD was built with that portability and code cleanliness from\nthe ground up! 
Having looked at the source code for large chunks of\nLinux and that for equivalent chunks of NetBSD, I know without a\nshadow of a doubt which way I choose to favour!\n\nEnough said, I really don't want to start a useless advocacy debate, I\nacknowledge that the Linux phenomenon is fabulous---every bit as\nfabulous as NetBSD's implementation---yet I feel that the rest of the\nfree UNIX community is neglected when Linux gets all the spotlight!\n\n\\end{rant}\n\nPostgreSQL runs quite nicely on NetBSD, thank you very much, though I\nhave not yet the time nor the requirement to stress it very much---I\n/did/ have a go with the embedded-SQL C preprocessor, and I conclude\nfrom that experience that in comparison against the embedded-SQL\npreprocessor for the InterBase product, ecpg produces very elegant C,\nand is, in many ways, a far nicer tool. On the other hand, the\nInterBase tool implements a few more facilities, and as the DB of\nchoice for the project on which I am working those facilities had\nprecedence over my developing against PostgreSQL :(\n\n\n\nJohnDz> Oracle would have to release source code ( ha ha) to be a true\nJohnDz> linux port. I run LinuxPPC on a power mac, and if they port to\nJohnDz> this then I will eat a huge plate of crow.\n\nHear, hear!\n\n--Kevin.\n",
"msg_date": "Thu, 23 Jul 1998 12:55:49 +1000 (EST)",
"msg_from": "Kevin Cousins <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re: [HACKERS] [Fwd: SGVLLUG Oracle and Informix on\n\tLinux]"
},
{
"msg_contents": "> Exactly. I'm learning SQL and PostgreSQL at the same time and it is\n> sometimes difficult for me to correctly assess what belongs with each. My\n> recent GRANT/REVOKE question was like this. I didn't think for a minute\n> that would be handled by SQL since databases were created and destroyed by\n> PostgreSQL utilities.\n\nIn fact, they are handled by SQL: CREATE DATABASE and DROP DATABASE. The\ncreatedb and destroydb tools just call these SQL statements....\n\nMaarten\n\n_____________________________________________________________________________\n| TU Delft, The Netherlands, Faculty of Information Technology and Systems |\n| Department of Electrical Engineering |\n| Computer Architecture and Digital Technique section |\n| [email protected] |\n-----------------------------------------------------------------------------\n\n",
"msg_date": "Thu, 23 Jul 1998 11:01:55 +0200 (MET DST)",
"msg_from": "Maarten Boekhold <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re: [HACKERS] [Fwd: SGVLLUG Oracle and Informix on\n\tLinux]"
},
{
"msg_contents": "On Wed, 22 Jul 1998, Ken McGlothlen wrote:\n\n> Does Oracle even have a vacuum? There's the COALESCE command, but it's hardly\n> *necessary*.\n\n\tI don't know, but I'll check at work tomorrow about this...and\nreply accordingly...\n\n> | As for 'front end and report designers'...there are several of them out there\n> | currently, most, from what I've seen, *look* good.\n> \n> A lot of them \"look good\" at first glance. The problem seems to be that\n> the implementations tend to be spotty and incomplete amongst the\n> packages I've looked at. None of them are robust or complete enough for\n> most commercial use. \n\n\tAnd you've, of course, discussed these failings with the authors\nof the software itself? Or did you do like most and just drop the\nsoftware as being incomplete?\n\n\tThe only person so far that I've had experience with, as far as\n'front-ends' are concerned, is Teo (PgAccess), who has been very\nresponsive to users' requests for changes and improvements. I imagine the\nrest are similar in addressing requests, or, hell, make the improvement\nyourself and ask them to add it into their source tree for future\nreleases.\n\n\tIt's an \"open software\" model...no one person is responsible in\nmaking it do what *you* want, except yourself. \n\n\tIt's like a few weeks ago, I started playing with Xtrophy's ICQ\nclient. It was missing features that I wanted, so I worked through the\ncode and added them in myself...submitted patches to the authors, which\nthey've included in the new release.\n\n> | If there are features within those that you feel are missing, talk to the\n> | authors, offer to help...\n> \n> I'm only speaking from one viewpoint: is the product something I can\n> recommend for commercial use to my customers in the same breath as\n> Oracle or Informix? Would *I* use it, personally? Of course; I like\n> it, and don't mind getting my hands dirty. But most companies would\n> balk. 
They aren't balking at Linux or FreeBSD, nor are they balking at\n> Apache, so it's not just an avoidance of open-source software. They\n> *would* balk at the lack of features, in spite of PostgreSQL's cool\n> stuff, and they'd also balk at the lack of facilities, and they'll\n> *really* balk on the stability issues. \n\n\tFeatures are continuously being added and improved...how many\nyears has Oracle been working on it, and how much money have they sunk\ninto it? We've been going, what, 2 years now?\n\n\tLack of facilities? Front-end interfaces? They are out there, as\nI listed before...they might be missing features you feel are required,\nand I don't dispute that...but if everyone just writes them off, then the\nauthors have no reason, or desire, to maintain them. Give the authors\nfeedback, offer them patches so that they don't have to work at adding\nstuff you want, but they don't need and feel is a priority yet...\n\n> I'm not clear on the details of what Vadim is working on, but if it's\n> page- or row-level locking, that'd be it. However, it's hard to\n> responsibly recommend something that hasn't been released yet. (Hasn't\n> stopped Microsoft, but I try to be a bit more ethical than they are. :) \n\n\tAs do we...how many ppl out there have, to date, been severely\nhampered by lack of 'row-level locking'? IMHO, row-level locking will\ngive us a speed improvement as ppl won't be as queued on their requests,\nbut I *think* that that is the major thing it will provide...\n\n> One thing I think that would psychologically help is to quit comparing\n> PostgreSQL with mSQL and MySQL. The m*twins are cute, toy databases,\n> and I suspect that the general perception is that PostgreSQL is already\n> more serious than either one of those. So enough with those\n> comparisons. Let's start thinking about comparing PostgreSQL with its\n> *real* competition: Oracle, Sybase, SQL Server, Informix, and others. 
\n\n\tI don't quite agree here...I think MySQL/mSQL are required in any\ncomparison, to show what we do have that they don't. They label\nthemselves an RDBMS, so I personally think that *not* including them would\nbe frowned upon by those looking at the comparison as being a slight.\n\n\tAs for the comparisons, they haven't been updated since\nv6.2.1...I've asked once before, but is anyone actually interested in\nworking on updating and revising that?\n\n> (Horrors! you say. \"They're commercial products, how can we compete?\" \n> Apache still has more than 50% of the web market, Linux and FreeBSD are\n> serious competitors to Solaris and HPs. So we don't have millions of\n> dollars for marketing. So we don't have hundreds of developers to throw\n> at a project. We have something *else* they don't have: a bunch of\n> middle-management business-as-usual MBA-drones.) \n\n\tActually, IMHO, we have something that 'those commercial products'\ndon't have...a passion and a love for what we do, else we wouldn't be\ndoing it. Therefore, our code *tends* to be cleaner and more stable, as a\nresult...\n\n> Reliability: You don't need me to point out that a lot of work needs to\n> be done here. These issues are tough ones to counter. Why doesn't\n> pg_dump actually preserve everything? (It's getting better, I know, but\n> it's not there right now.) \n\n\tWhat currently isn't being preserved?\n\n> Why do you have to vacuum the database every\n> night? \n\n\tStatistics and database cleanups. Last I heard, there is work\nbeing done on removing the locks imposed by vacuum for doing the\nstatistics, and there is talk about doing work such that 'dead space'\nwhere deleted data resided is reused instead of sitting idle until the\nnext vacuum...\n\n> Questions like that are tough to answer to people's\n> satisfaction, and that's without even going into things like memory\n> leaks. \n\n\tWhat memory leaks? 
:) Actually, a lot of work seems to go into\neach of these aspects prior to each release, so this should be getting *a lot*\nbetter...\n\n> Crucial basics: Views---they desperately need fixing up. Foreign keys,\n> constraints, and SQL-language triggers are critical as well. I think\n> HAVING, OUTER and INTERSECTS are being worked on. Temporary\n> tables---are those being worked on? Yes, I know, most of these are on\n> the TODO list already, but their current state of nonbeing is keenly\n> felt, and hinders the cause quite a bit. \n\n\tI keep meaning to work on this, but I'm going to look into getting\nPTS/Keystone installed on the server so that our TODO list can be slightly\nmore dynamic, where someone can claim and comment progress on the various\nareas...that might help some...\n\n> The draws: These are the things that should be distinguishing\n> PostgreSQL from the rest of the pack. The source code is a big draw,\n> but it's still hard to grok. A concerted effort should be going on to\n> document the code itself. \n\n\tBruce has been working on this as he goes along...I don't know if\nanyone else is helping him with it though...\n\n> Breaking out built-in types into their own\n> easy-to-locate files would also be good, too; I had to work to find out\n> how the box functions were defined, where it would have been better to\n> have a built-in-types directory with a file in there named box.c, for\n> example, with the data representation and the function source all neatly\n> bundled---then it would be *easy* to use that as a template to come up\n> with a different type. \n\n\tHave you looked into what it would take to do such?\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 23 Jul 1998 22:53:17 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re: [HACKERS] [Fwd: SGVLLUG Oracle and Informix on\n\tLinux]"
},
{
"msg_contents": "On Thu, 23 Jul 1998, Maarten Boekhold wrote:\n\n> > Exactly. I'm learning SQL and PostgreSQL at the same time and it is\n> > sometimes difficult for me to correctly assess what belongs with each. My\n> > recent GRANT/REVOKE question was like this. I didn't think for a minute\n> > that would be handled by SQL since databases were created and destroyed by\n> > PostgreSQL utilities.\n> \n> In fact, they are handled by SQL: CREATE DATABASE and DROP DATABASE. The\n> createdb and destroydb tools just call these SQL statements....\n\n\tHere's an odd thought:\n\n\tLet's remove the \"I don't want to think\" utilities like\n{create,destroy}{db,user} and force DBA's to actually use the *proper*\nfunctions. \n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 23 Jul 1998 22:56:19 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re: [HACKERS] [Fwd: SGVLLUG Oracle and Informix on\n\tLinux]"
},
{
"msg_contents": "[email protected] (The Hermit Hacker) writes:\n\n| > A lot of them \"look good\" at first glance. The problem seems to be that\n| > the implementations tend to be spotty and incomplete amongst the\n| > packages I've looked at. None of them are robust or complete enough for\n| > most commercial use. \n| \n| And you've, of course, discussed these failings with the authors of the\n| software itself? Or did you do like most and just drop the software as being\n| incomplete?\n\nUh . . . I'm not slighting the authors of the software, nor am I even slighting\nthe software itself. All I'm saying is that, as a consultant, I can't yet\nrecommend any for commercial use, and that hinders the adoption of PostgreSQL\nby commercial entities. That's all. I didn't say *anything* about whether *I*\nuse them or not. Nor did I say that the authors were unresponsive, or anything\nof the sort.\n\n| We've been going, what, 2 years now?\n\nHey, I freely confess that I'm feeling impatient. :)\n\n| [...] if everyone just writes them off, then the author's have no reason, or\n| desire, to maintain them.\n\nWhich is exactly what worries me. Businesses hire me, often looking to me to\nsave them money and/or time, and provide process improvement (whether that be\nnew applications, more reliability, whatever). Often, a free Unix variant will\nserve the purpose they're looking for---file server, print server, mail server,\nweb server, all stable services. But when the question of databases comes up,\nand they want something as stable and full-featured, I do something that\nfrustrates me: I tell the truth. \"Outer joins?\" \"No.\" \"Replication?\" \"No.\"\nAnd so on.\n\nAnd that's why I get impatient. 
PgSQL is *so* *close* to being something I can\nsay, \"Look, most of the stuff you *require* in Oracle, you can have for free,\nand look at some of these other features!\" But not yet.\n\n| They label themselves an RDBMS, so I personally think that *not* including\n| them would be frowned upon by those looking at the comparison as being a\n| slight.\n\nAh. That's a good point, and one I hadn't considered.\n\n| Have you looked into what it would take to do such? [types in separate files]\n\nA little. Scares the heck outta me. :)\n\n\t\t\t\t\t\t\t---Ken\n",
"msg_date": "Thu, 23 Jul 1998 20:53:37 -0700 (PDT)",
"msg_from": "Ken McGlothlen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re: [HACKERS] [Fwd: SGVLLUG Oracle and Informix on\n\tLinux]"
},
{
"msg_contents": "\n\nThe Hermit Hacker wrote:\n\n> On Wed, 22 Jul 1998, Ken McGlothlen wrote:\n>\n> > Does Oracle even have a vacuum? There's the COELESCE command, but it's hardly\n> > *necessary*.\n>\n\nNope.. Oracle has a background process which re-allocates free space..It does get\nfragmented, and the only real way to unfrag is to export (dump) and import.No Vacuum,\nat least on 7.3.2\n\n--\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\nMichael - System Administrator Working in Cheap Canadian Dollars\nUnix Administration - WebSite Hosting - Network Services - Programming\nWizard Internet Services - TechnoWizard Computers - Wizard Tower TechnoServices\n------------------------------------------------------------------------------\n(604) 589-0037 Beautiful British Columbia, Canada\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n\n\n",
"msg_date": "Thu, 23 Jul 1998 21:22:17 -0700",
"msg_from": "The Web Administrator <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re: [HACKERS] [Fwd: SGVLLUG Oracle and Informix on\n\tLinux]"
},
{
"msg_contents": "On Thu, 23 Jul 1998, The Hermit Hacker wrote:\n\n> On Thu, 23 Jul 1998, Maarten Boekhold wrote:\n> \n> > > Exactly. I'm learning SQL and PostgreSQL at the same time and it is\n> > > sometimes difficult for me to correctly assess what belongs with each. My\n> > > recent GRANT/REVOKE question was like this. I didn't think for a minute\n> > > that would be handled by SQL since databases were created and destroyed by\n> > > PostgreSQL utilities.\n> > \n> > In fact, they are handled by SQL: CREATE DATABASE and DROP DATABASE. The\n> > createdb and destroydb tools just call these SQL statements....\n> \n> \tHere's an odd thought:\n> \n> \tLet's remove the \"I don't want to think\" utilities like\n> {create,destroy}{db,user} and force DBA's to actually use the *proper*\n> functions. \n\nI'm all in favour.....\n\nMaarten\n\n_____________________________________________________________________________\n| TU Delft, The Netherlands, Faculty of Information Technology and Systems |\n| Department of Electrical Engineering |\n| Computer Architecture and Digital Technique section |\n| [email protected] |\n-----------------------------------------------------------------------------\n\n",
"msg_date": "Fri, 24 Jul 1998 09:46:08 +0200 (MET DST)",
"msg_from": "Maarten Boekhold <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re: [HACKERS] [Fwd: SGVLLUG Oracle and Informix on\n\tLinux]"
},
{
"msg_contents": "On Thu, 23 Jul 1998, The Web Administrator wrote:\n\n> Nope.. Oracle has a background process which re-allocates free space..It does get\n> fragmented, and the only real way to unfrag is to export (dump) and import.No Vacuum,\n> at least on 7.3.2\n\n\tSo, essentially, our VACUUM command provides functionality that\nOracle *doesn't* have, right?\n\nMarc G. Fournier [email protected]\nSystems Administrator, Acadia University\n\n \"These are my opinions, which are not necessarily shared by my employer\"\n\n",
"msg_date": "Fri, 24 Jul 1998 08:28:38 -0300 (ADT)",
"msg_from": "Marc Fournier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re: [HACKERS] [Fwd: SGVLLUG Oracle and Informix on\n\tLinux]"
}
] |
[
{
"msg_contents": "Backends fetch 1024 XIDs now and place them in shmem.\nThere is space in VariableCache struct for OIDs as well\nbut I didn't change GetNewObjectId() due to the \nCheckMaxObjectId() stuff... Bruce ?\n\nAll other LLL stuff will be #ifdef-ed...\n\nVadim\n",
"msg_date": "Tue, 21 Jul 1998 14:30:43 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": true,
"msg_subject": "next XID is in shmem now..."
},
{
"msg_contents": "> Backends fetch 1024 XIDs now and place them in shmem.\n> There is space in VariableCache struct for OIDs as well\n> but I didn't change GetNewObjectId() due to the \n> CheckMaxObjectId() stuff... Bruce ?\n\nWhat can I do to help? Is the problem that a backend can set the next\noid by specifying an oid greater than the current one?\n\n> \n> All other LLL stuff will be #ifdef-ed...\n\nAs far as I am concerned, you don't need to use #ifdef.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 21 Jul 1998 10:27:13 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] next XID is in shmem now..."
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > Backends fetch 1024 XIDs now and place them in shmem.\n> > There is space in VariableCache struct for OIDs as well\n> > but I didn't change GetNewObjectId() due to the\n> > CheckMaxObjectId() stuff... Bruce ?\n> \n> What can I do to help? Is the problem that a backend can set the next\n> oid by specifying an oid greater than the current one?\n\nNo problem - I just haven't had time to think about this, sorry.\n\n> \n> >\n> > All other LLL stuff will be #ifdef-ed...\n> \n> As far as I am concerned, you don't need to use #ifdef.\n\nI'm not sure how ready/robust this will be in 6.4.\nThis is a long-term project...\n\nVadim\n",
"msg_date": "Wed, 22 Jul 1998 00:37:42 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] next XID is in shmem now..."
},
{
"msg_contents": "> Bruce Momjian wrote:\n> > \n> > > Backends fetch 1024 XIDs now and place them in shmem.\n> > > There is space in VariableCache struct for OIDs as well\n> > > but I didn't change GetNewObjectId() due to the\n> > > CheckMaxObjectId() stuff... Bruce ?\n> > \n> > What can I do to help? Is the problem that a backend can set the next\n> > oid by specifying an oid greater than the current one?\n> \n> No problem - I just haven't had time to think about this, sorry.\n> \n> > \n> > >\n> > > All other LLL stuff will be #ifdef-ed...\n> > \n> > As far as I am concerned, you don't need to use #ifdef.\n> \n> I'm not sure how ready/robust this will be in 6.4.\n> This is a long-term project...\n\nAny chance on getting the 30-second pg_log syncing, so we can improve\nthe default pgsql performance, and not do fsync on every transaction by\ndefault?\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 21 Jul 1998 13:46:10 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] next XID is in shmem now..."
}
] |
[
{
"msg_contents": "\"Thomas G. Lockhart\" <[email protected]> wrote\n\n> > > Because we just create a unique index on a PRIMARY specification, I\n> > > think any unique index on a field shows it as primary.\n> > Hmm. Any chance we can somehow flag it as well? Perhaps a new bool\n> > field in pg_index the next time we do a dump & reload release? I\n> > assume we will need it eventually anyway.\n> \n> I'm not sure I understand all the issues, but if we can avoid\n> distinctions between different indices that would be A Good Thing. Since\n> multiple unique indices are allowed, what would be the extra\n> functionality of having one designated \"primary\"? Is it an arbitrary\n> SQL92-ism which fits with older databases, or something which enables\n> new and interesting stuff?\n\nCurrently the 'primary key' is distinguished by being named \n<table name>_pkey (at least this is what the warning says ;),\nI think this should be quite enough for most purposes.\n\nBTW, are there any operational differences (like not being able to \ndrop the index) in SQL92 that set primary key apart from other \nunique indexes ?\n\nFor example, can a foreign key constraint reference any key in the \nforeign table ?\n\nHannu\n",
"msg_date": "Tue, 21 Jul 1998 13:15:58 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Finding primary keys in a table"
}
] |
[
{
"msg_contents": "Bruce Momjian <[email protected]> wrote:\n> \n> Doing complex stuff like indexing with contrib stuff is tricky, and one\n> reason we want to move stuff out of there as it becomes popular. It is\n> just too hard for someone not experienced with the code to implement. \n> Add to this the fact that the oid at the time of contrib installation\n> will change every time you install it, so it is even harder/impossible\n> to automate.\n\nWe should develop (or at least prominently promote _and_ document) some\nkind of file/package format or tool (maybe like illustra datablades), \nthat would standardize the layout of contrib types.\n\nAlso, the need to manually get oids is a real show-stopper. \nA short-time solution would be to develop functions that return these\noids,\nlike get_proc_oid_for(proc_name,arg1_type,arg2_type,...).\n\nThe real solution would of course be extending the (Postgre)SQL language \nto find the OIDs automatically, like Oracle currently does for its\nCOMMENT\nstatement, an equivalent of which could be used in PostgreSQL to insert \nvalues in pg_description on the fly.\n\nHannu\n",
"msg_date": "Tue, 21 Jul 1998 18:11:54 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Complexity of contrib types"
},
{
"msg_contents": "> Bruce Momjian <[email protected]> wrote:\n> > \n> > Doing complex stuff like indexing with contrib stuff is tricky, and one\n> > reason we want to move stuff out of there as it becomes popular. It is\n> > just too hard for someone not experienced with the code to implement. \n> > Add to this the fact that the oid at the time of contrib installation\n> > will change every time you install it, so it is even harder/impossible\n> > to automate.\n> \n> We should develop (or at least prominently promote _and_ document) some\n> kind of file/package format or tool (maybe like illustra datablades), \n> that would standardize the layout of contrib types.\n> \n> Also, the need to manually get oids is a real show-stopper. \n> A short-time solution would be to develop functions that return these\n> oids,\n> like get_proc_oid_for(proc_name,arg1_type,arg2_type,...).\n\nCan't they SELECT from pg_proc?\n\n> \n> The real solution would of course be extending the (Postgre)SQL language \n> to find the OIDs automatically, like Oracle currently does for its\n> COMMENT\n> statement, an equivalent of which could be used in PostgreSQL to insert \n> values in pg_description on the fly.\n\nWe return oid's as part of an INSERT. Is that what you meant?\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 21 Jul 1998 11:12:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Complexity of contrib types"
},
{
"msg_contents": "Hannu Krosing <[email protected]> writes:\n> We should develop (or at least prominently promote _and_ document) some\n> kind of file/package format or tool (maybe like illustra datablades), \n> that would standardize the layout of contrib types.\n> Also, the need to manually get oids is a real show-stopper. \n\nYes. I've been thinking off and on about some homegrown data types\n(not general-purpose enough to be worthwhile even as contrib material).\nBut the admin overhead seems like a real pain. If we want to promote\nPostgres' type system as a major benefit, we ought to work harder at\nmaking it easy to add locally-defined types.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 21 Jul 1998 11:23:17 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Complexity of contrib types "
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > Bruce Momjian <[email protected]> wrote:\n> > >\n> > > Doing complex stuff like indexing with contrib stuff is tricky, and one\n> > > reason we want to move stuff out of there as it becomes popular. It is\n> > > just too hard for someone not experienced with the code to implement.\n> > > Add to this the fact that the oid at the time of contrib installation\n> > > will change every time you install it, so it is even harder/impossible\n> > > to automate.\n> >\n> > We should develop (or at least prominently promote _and_ document) some\n> > kind of file/package format or tool (maybe like illustra datablades),\n> > that would standardize the layout of contrib types.\n> >\n> > Also, the need to manually get oids is a real show-stopper.\n> > A short-time solution would be to develop functions that return these\n> > oids,\n> > like get_proc_oid_for(proc_name,arg1_type,arg2_type,...).\n> \n> Can't they SELECT from pg_proc?\n\nMaking it a function would probably make the type-addition script\neasier.\n \n> >\n> > The real solution would of course be extending the (Postgre)SQL language\n> > to find the OIDs automatically, like Oracle currently does for its\n> > COMMENT\n> > statement, an equivalent of which could be used in PostgreSQL to insert\n> > values in pg_description on the fly.\n> \n> We return oid's as part of an INSERT. Is that what you meant?\n\nIt is very hard (probably impossible) to use them from a psql script.\n\nIf I remember the syntax right (haven't used Oracle for >=2 years), \nI could do:\n\nCOMMENT \"this is a nice table\" ON TABLE nice_table;\nCOMMENT \"this is an unnecessary field from a nice table\"\n ON FIELD nice_table.unnecessary_field;\n\nOf course, to fully support it we would need a much improved foreign \nkey support, so that we could set the ON DELETE CASCADE for the \ncommented on tables, and do it so that the foreign key can reference \n_any_ table ;). 
\n\nIf we could manage that, we could really call PostgreSQL an OO database.\n\nHannu\n",
"msg_date": "Tue, 21 Jul 1998 18:31:00 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Complexity of contrib types"
},
{
"msg_contents": "> > Can't they SELECT from pg_proc?\n> \n> Making it a function would probably make the type-addition script\n> easier.\n\nI guess, but much less flexible.\n\n> \n> > >\n> > > The real solution would of course be extending the (Postgre)SQL language\n> > > to find the OIDs automatically, like Oracle currently does for its\n> > > COMMENT\n> > > statement, an equivalent of which could be used in PostgreSQL to insert\n> > > values in pg_description on the fly.\n> > \n> > We return oid's as part of an INSERT. Is that what you meant?\n> \n> It is very hard (probably impossible) to use them from a psql script.\n> \n> If I remember the syntax right (have'nt used Oracle for >=2 years), \n> I could do:\n> \n> COMMENT \"this is a nice table\" ON TABLE nice_table;\n> COMMENT \"this is an unnecessary field from a nice table\"\n> ON FIELD nice_table.unnecessary_field;\n> \n> Of course, to fully support it we would need a much improved foreign \n> key support, so that we could set the ON DELETE CASCADE for the \n> commented on tables, and do it so that the foreign key can references \n> _any_ table ;). \n> \n> If we could manage that, we could really call PostgreSQL an OO database.\n\nWe could create a function that returned the previously inserted oid,\nand use that in the next query.\n\n\tinsert into test values (4);\n\tupdate test3 set val = lastoid();\n\nJust remember the lastoid inserted in the backend code. Seems easy. Do\nyou want it added to the TODO list.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 21 Jul 1998 11:47:40 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Complexity of contrib types"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> \n> We could create a function that returned the previously inserted oid,\n> and use that in the next query.\n> \n> insert into test values (4);\n> update test3 set val = lastoid();\n> \n> Just remember the lastoid inserted in the backend code. Seems easy. Do\n> you want it added to the TODO list.\n\nYes. It could be used in several places.\n\nBut I'm currently not aware about the future of oid's 'at the large'. \n\nI have understood that we are on a way of getting rid of OIDs for \nnon-system tables (and having to re-implement them using triggers \nand sequences where/when needed)?\n\nHannu\n",
"msg_date": "Tue, 21 Jul 1998 19:02:40 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Complexity of contrib types"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> > \n> > \n> > We could create a function that returned the previously inserted oid,\n> > and use that in the next query.\n> > \n> > insert into test values (4);\n> > update test3 set val = lastoid();\n> > \n> > Just remember the lastoid inserted in the backend code. Seems easy. Do\n> > you want it added to the TODO list.\n> \n> Yes. It could be used in several places.\n> \n> But I'm currently not aware about the future of oid's 'at the large'. \n> \n> I have understood that we are on a way of getting rid of OIDs for \n> non-system tables (and having to re-implement them using triggers \n> and sequences where/when needed)?\n\nOIDs are in SQL-92(?), so we will have to keep them. I believe we may\nsomeday make them optional on user tables.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 21 Jul 1998 12:36:08 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Complexity of contrib types"
}
] |
[
{
"msg_contents": "> Thomas - get a load of this...I argued with these guys for 2\n> years. And now when PostgreSQL makes it irrelevant, they port.\n\nWell, it will be interesting to see how they do. They can't beat us on\nprice, and don't have a particularly open interface, but it might be fun\nto try to interoperate with them...\n\n> \"In the last couple of weeks, there's been this huge groundswell for\n> Linux,\" the representative said.\n\nTwo weeks??!@??\n\n> A version of Oracle8 for Linux on the Intel platform is planned for\n> shipment by March 1999. Support for other hardware platforms is likely\n> to follow.\n\nIf they hold true to form, they will announce now and take three years\nto actually port the full features from their current product. Oracle,\nat least as of 4 years ago, was the _worst_ company I've seen for\nvaporware and misleading product information. They did a demo at work\n(JPL) for their new product set running on three separate platforms.\nPretty neat, eh? Except that it turned out that _none_ of the products\nrunning on any one of the platforms was available on the other platforms\nin the demo. Forms on a Windows machine, db on a Sun, something else on\na Mac, and no current product available for all three. Pretty slimy\nsalesmanship :(\n\n - Tom\n",
"msg_date": "Tue, 21 Jul 1998 16:33:47 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Linux Oracle! (fwd)"
},
{
"msg_contents": "On Tue, Jul 21, 1998 at 04:33:47PM +0000, Thomas G. Lockhart wrote:\n> Well, it will be interesting to see how they do. They can't beat us on\n> price, and don't have a particularly open interface, but it might be fun\n> to try to interoperate with them...\n\nSo we have to get an ODBC like database connection going. I love it. IMO we\nshould already start thinking about a solution for this.\n\nMichael\n-- \nDr. Michael Meskes\t\[email protected], [email protected]\nGo SF49ers! Go Rhein Fire!\tUse Debian GNU/Linux! \n",
"msg_date": "Mon, 27 Jul 1998 16:55:27 +0200",
"msg_from": "\"Dr. Michael Meskes\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Linux Oracle! (fwd)"
}
] |
[
{
"msg_contents": "\nDoes anyone here use the editor FTE? The author's site's been down for\nover a week (one of those university machines) and I've been unsuccessful\nfinding a mirror. Does someone have a copy I can either ftp or you can\nmail?\n\nThanks in advance,\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Searchable Campground Listings http://www.camping-usa.com\n \"There is no outfit less entitled to lecture me about bloat\n than the federal government\" -- Tony Snow\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 21 Jul 1998 12:41:32 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": true,
"msg_subject": "Hey Linux People (OT)"
},
{
"msg_contents": "On Tue, Jul 21, 1998 at 12:41:32PM -0400, Vince Vielhaber wrote:\n> \n> Does anyone here use the editor FTE? The author's site's been down for\n> over a week (one of those university machines) and I've been unsuccessful\n> finding a mirror. Does someone have a copy I can either ftp or you can\n> mail?\n\nIt�s an official Debian package. So you should be able to get the source\nfrom all Debian ftp mirrors.\n\nMichael\n-- \nDr. Michael Meskes\t\[email protected], [email protected]\nGo SF49ers! Go Rhein Fire!\tUse Debian GNU/Linux! \n",
"msg_date": "Mon, 27 Jul 1998 16:37:40 +0200",
"msg_from": "\"Dr. Michael Meskes\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Hey Linux People (OT)"
}
] |
[
{
"msg_contents": "\"Thomas G. Lockhart\" <[email protected]> wrote:\n> \n> > Thomas - get a load of this...I argued with these guys for 2\n> > years. And now when PostgreSQL makes it irrelevant, they port.\n> \n> Well, it will be interesting to see how they do. They can't beat us on\n> price, and don't have a particularly open interface, but it might be fun\n> to try to interoperate with them...\n> \n> > \"In the last couple of weeks, there's been this huge groundswell for\n> > Linux,\" the representative said.\n> \n> Two weeks??!@??\n> \n\nIt seemed a little funny for me too. But when I started to think about \nit, there really has been some kind of media awareness explosion about \nlinux \"in last couple of weeks\". It seems that there has been at least \none article per day in at least one larger newspaper for a few weeks \nnow ;)\n\nWonder what will they find next: PostgreSQL, Python, ... ? ;)\n\n> If they hold true to form, they will announce now and take three years\n> to actually port the full features from their current product.\n\nWell, they could go Open Source and have it up and running much faster\n:-p\n\nThey could make huge profits following the revenue generation model \noutlined in http://www.denounce.com/deepblue.html.\n\n----------------\nHannu Krosing\n",
"msg_date": "Tue, 21 Jul 1998 22:17:21 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Oracle on Linux"
}
] |
[
{
"msg_contents": "http://www.benchmarkresources.com\n\n** NEW ** We are proud to announce the complete text of Jim Gray's\n\"Benchmark Handbook, Second Edition)\" published by Morgan Kaufmann\nPublishers, Inc. is now on-line. The site includes both HTML and PDF\nformats of the chapters. This practical guide provides the tools to\nevaluate different systems, different software products on a single\nmachine, and different machines (or new releases) within a single\nproduct family--all within the context of modern system applications\n\nHosted by BenchmarkResources.com, a website developed to provide users\nwith information about various benchmarks, white papers on\nbenchmarking, etc. as well as providing links to related sites. This\nsite covers all facits of computer performance, both workstation and\nserver, hardware and software.\n\nhttp://www.benchmarkresources.com\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 21 Jul 1998 19:35:18 GMT",
"msg_from": "[email protected] (Brian Butler)",
"msg_from_op": true,
"msg_subject": "Jim Gray's Benchmark Handbook Online"
}
] |
[
{
"msg_contents": "I'm working on a docs roadmap to help coordinate the transition to SGML\nsources for some of the docs. I did an inventory of the source tree and\nfound around 400 files which have something to do with documentation!\nLots to keep track of, and we might want to think about how to\nconsolidate some more.\n\nAnyway, I'm out of town through the weekend but will work on the roadmap\nnext week.\n\n - Tom\n",
"msg_date": "Wed, 22 Jul 1998 16:01:17 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Documentation Roadmap"
},
{
"msg_contents": "> I'm working on a docs roadmap to help coordinate the transition to SGML\n> sources for some of the docs. I did an inventory of the source tree and\n> found around 400 files which have something to do with documentation!\n> Lots to keep track of, and we might want to think about how to\n> consolidate some more.\n> \n> Anyway, I'm out of town through the weekend but will work on the roadmap\n> next week.\n\nThat is why I suggested we allow the migration to take place as we keep\nthe current non-sgml stuff up-to-date, then merge changes, rather than\ntrying to track each file.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Wed, 22 Jul 1998 12:01:56 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Documentation Roadmap"
}
] |
[
{
"msg_contents": "I have added this to the developers FAQ. It is an on-line SQL\nperformance book.\n\n---------------------------------------------------------------------------\n",
"msg_date": "Wed, 22 Jul 1998 12:57:07 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "performance book"
}
] |
[
{
"msg_contents": "How do you unsubscribe from this list?\n\nEric Thompson\nJ. Eric Thompson\[email protected]\n(260)781-6991\n",
"msg_date": "Wed, 22 Jul 1998 21:19:03 +0000",
"msg_from": "\"J. Eric Thompson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Unsubscribing"
}
] |
[
{
"msg_contents": "Hi.\nI just noticed something interesting. I don't know if my idea is better or\nif it wasn't implemented because it violates some SQL rule...\n\nsearchengine=> create table test ( test1 int4, test2 int4);\nCREATE\nsearchengine=> create index test_itest1 on test (test1);\nCREATE\n<insert a pile of data so it looks like so>\nsearchengine=> select * from test;\ntest1|test2\n-----+-----\n 1| 3\n 1| 5\n 1| 9\n 2| 1\n 2| 3\n 2| 6\n 2| 9\n 3| 9\n 4| 5\n(9 rows)\n\nNow here is the plan I expect for a single test1 value\nsearchengine=> explain select * from test where test1=1;\nIndex Scan on test (cost=0.00 size=0 width=8)\n\nBut look:\nsearchengine=> explain select * from test where test1=1 or test1=2;\nSeq Scan on test (cost=0.00 size=0 width=8)\n\nugh! Sequential. This may be OK for a small database, but in my\napplication I have many rows:\nsearchengine=> explain select * from word_detail where word_id=23423 or\nword_id=68548;\n\nSeq Scan on word_detail (cost=205938.73 size=510342 width=10)\n\nThat costs a _LOT_.\n\nWouldn't it be better to do n sequential scans where n is the number of\nor'd together values? Using IN doesn't help out either...\n\nsearchengine=> explain select * from test where test1 IN (5,9);\nSeq Scan on test (cost=0.00 size=0 width=8)\n\nSometimes I wish I had the power to tell the DBMS how I wanted a query\ndone...\n\n-Mike\n\n",
"msg_date": "Wed, 22 Jul 1998 20:17:33 -0300 (ADT)",
"msg_from": "Michael Richards <[email protected]>",
"msg_from_op": true,
"msg_subject": "Efficiency again..."
},
{
"msg_contents": "> Now here is the plan I expect for a single test1 value\n> searchengine=> explain select * from test where test1=1;\n> Index Scan on test (cost=0.00 size=0 width=8)\n> \n> But look:\n> searchengine=> explain select * from test where test1=1 or test1=2;\n> Seq Scan on test (cost=0.00 size=0 width=8)\n> \n> ugh! Sequential. This may be OK for a small database, but in my\n> application I have many rows:\n> searchengine=> explain select * from word_detail where word_id=23423 or\n> word_id=68548;\n> \n> Seq Scan on word_detail (cost=205938.73 size=510342 width=10)\n> \n> That costs a _LOT_.\n> \n> Wouldn't it be better to do n sequential scans where n is the number of\n> or'd together values? Using IN doesn't help out either...\n> \n> searchengine=> explain select * from test where test1 IN (5,9);\n> Seq Scan on test (cost=0.00 size=0 width=8)\n> \n> Sometimes I wish I had the power to tell the DBMS how I wanted a query\n> done...\n\nYep, it is on our TODO list, and we have someone trying some fix for\n6.4. It has to do the conjunctive normal form(cnf).\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Wed, 22 Jul 1998 20:40:58 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Efficiency again..."
}
] |
[
{
"msg_contents": "\nselect max(ccorderseq) from ccorder;\nmax\n---\n603\n(1 row)\n\nicv=> select * From the_view where ccorderseq = 603;\n\n[ I get the row back ]\n\nselect * From the_view where ccorderseq = (select max(ccorderseq) from ccorder);\n\nI don't get the row back. nothing has changed, the max value is still\nthe same. when I write out the view as a select, the subquery works\nand I get the row.\n\nif it makes much of a difference the view is doing a \"glob\" query (of\na parent and all children, i.e. select from table*)\n\n",
"msg_date": "Wed, 22 Jul 1998 19:05:45 -0700 (PDT)",
"msg_from": "Brett McCormick <[email protected]>",
"msg_from_op": true,
"msg_subject": "subselects & views"
},
{
"msg_contents": "I think this is fixed in 6.4. Beta is September 1.\n\n> \n> select max(ccorderseq) from ccorder;\n> max\n> ---\n> 603\n> (1 row)\n> \n> icv=> select * From the_view where ccorderseq = 603;\n> \n> [ I get the row back ]\n> \n> select * From the_view where ccorderseq = (select max(ccorderseq) from ccorder);\n> \n> I don't get the row back. nothing has changed, the max value is still\n> the same. when I write out the view as a select, the subquery works\n> and I get the row.\n> \n> if it makes much of a difference the view is doing a \"glob\" query (of\n> a parent and all children, i.e. select from table*)\n> \n> \n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sat, 22 Aug 1998 00:09:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] subselects & views"
}
] |
[
{
"msg_contents": "I need in help:\n\nlet's say you have serialized transaction in one session...\nNow, some other user drops a table that was in data base when \nserialized transaction began but does it before this transaction\nread table (first time)...\n\n1. Will RDBMS allow to drop table? (And so abort\n serialized transaction if it tries read dropped \n table)\n2. Or DROP TABLE will be blocked waiting when\n serialized transaction commits/aborts ?\n\nCould someone test this in Oracle/Informix/Sybase ?\n\nVadim\n",
"msg_date": "Thu, 23 Jul 1998 11:49:26 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": true,
"msg_subject": "LLL: serialized and schema objects..."
},
{
"msg_contents": "Vadim Mikheev wrote:\n> \n> I need in help:\n> \n> let's say you have serialized transaction in one session...\n> Now, some other user drops a table that was in data base when\n> serialized transaction began but does it before this transaction\n> read table (first time)...\n> \n> 1. Will RDBMS allow to drop table? (And so abort\n> serialized transaction if it tries read dropped\n> table)\n> 2. Or DROP TABLE will be blocked waiting when\n> serialized transaction commits/aborts ?\n> \n> Could someone test this in Oracle/Informix/Sybase ?\n> \n> Vadim\n\nOracle 8 will drop the table without waiting. The serializable session\nwill not see the table either before or after reading the table, i.e.\npoint 1.\n\nDavid\n",
"msg_date": "Thu, 23 Jul 1998 08:26:41 +0200",
"msg_from": "David Maclean <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] LLL: serialized and schema objects..."
},
{
"msg_contents": "David Maclean wrote:\n> >\n> > let's say you have serialized transaction in one session...\n> > Now, some other user drops a table that was in data base when\n> > serialized transaction began but does it before this transaction\n> > read table (first time)...\n> >\n> > 1. Will RDBMS allow to drop table? (And so abort\n> > serialized transaction if it tries read dropped\n> > table)\n> > 2. Or DROP TABLE will be blocked waiting when\n> > serialized transaction commits/aborts ?\n> >\n> > Could someone test this in Oracle/Informix/Sybase ?\n> >\n> > Vadim\n> \n> Oracle 8 will drop the table without waiting. The serializable session\n> will not see the table either before or after reading the table, i.e.\n> point 1.\n\nThanks, David!\nJust for clarification: Oracle allows to drop table even if\ntable was already read by some currently active transaction ?!!!\nHmm, this means that schema objects are not subject\nof transaction isolation..\n\nCould someone comments what standards say???\n\nAnd one more question: will serialized transaction see\njust dropped table when queriyng system catalog ???\nShould we return a row for just dropped table A when\nrun query below in serialized transaction:\n\nselect * from pg_class where relname = 'A';\n\n???\nAre system tables subject of multi-versioning ???\n\nVadim\n",
"msg_date": "Thu, 23 Jul 1998 14:58:30 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] LLL: serialized and schema objects..."
}
] |
[
{
"msg_contents": "I have douwnloaded InterBase 4.0 for Linux from\nhttp://www.interbase.com/download/linux/ and am amazed by its I18N\nsupport. It doesn't have NATIONAL CHARACTER yet. It do support\nCHRACTER SET syntax for CREATE DATABASE/CREATE TABLE/ALTER TABLE etc.,\nhowever. Also, it has COLLATE syntax in WHERE/ORDER BY/GROUP BY. If\nyou were interested in how it does it, you could get PDF manulas from\nsame URL(Lang_Ref.pdf in IB_4.0_docs.tar.gz).\n\nTalking about performance, InterBase is a little bit faster than\nPostgreSQL 6.3.2 when using indexes, but is slower if no index exists.\n--\nTatsuo Ishii\[email protected]\n",
"msg_date": "Thu, 23 Jul 1998 14:05:46 +0900",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "I18N support in InterBase"
},
{
"msg_contents": "> I have douwnloaded InterBase 4.0 for Linux from\n> http://www.interbase.com/download/linux/ and am amazed by its I18N\n> support. It doesn't have NATIONAL CHARACTER yet. It do support\n> CHRACTER SET syntax for CREATE DATABASE/CREATE TABLE/ALTER TABLE etc.,\n> however. Also, it has COLLATE syntax in WHERE/ORDER BY/GROUP BY. If\n> you were interested in how it does it, you could get PDF manulas from\n> same URL(Lang_Ref.pdf in IB_4.0_docs.tar.gz).\n> \n> Talking about performance, InterBase is a little bit faster than\n> PostgreSQL 6.3.2 when using indexes, but is slower if no index exists.\n ^^^^^^\n\nInteresting.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Thu, 23 Jul 1998 02:39:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] I18N support in InterBase"
}
] |
[
{
"msg_contents": "I've successfully ported PostgreSQL to HPUX 9.0.* but there is a strange\nbehaviour with the datetime data type.\n\nIf do do this sequence :\n\n$ createdb mydb\n[OK]\n\n$ psqk mydb\n\nmydb==> create table foo (ffoo datetime);\n[OK]\n\nmydb==> insert into foo values ('01/01/1998');\n[OK]\n\nmydb==> select ffoo from foo;\n\nThe rusult is a totally wrong date with year 2140.\n\nI've tried to set datestyle but the results don't change.\n\nHas anybody a hint for me ?\n\n----\nDavide Libenzi at :\nMaticad s.r.l.\nVia Della Giustizia n.9 Fano (PS) 61032 Italy\nTel.: +39-721-808308 (ra) Fax: +39-721-808309\nEmail: <[email protected]>\nWWW: <http://www.maticad.it>\n\n\n",
"msg_date": "Thu, 23 Jul 1998 14:42:59 +0100",
"msg_from": "[email protected] (Davide Libenzi)",
"msg_from_op": true,
"msg_subject": "datetime ?#!!??@"
},
{
"msg_contents": "[email protected] (Davide Libenzi) writes:\n> I've successfully ported PostgreSQL to HPUX 9.0.* but there is a strange\n> behaviour with the datetime data type.\n> mydb==> create table foo (ffoo datetime);\n> mydb==> insert into foo values ('01/01/1998');\n> mydb==> select ffoo from foo;\n> The rusult is a totally wrong date with year 2140.\n\nIt works fine for me on HPUX 9.03:\n\nplay=> create table foo (ffoo datetime);\nCREATE\nplay=> insert into foo values ('01/01/1998');\nINSERT 105801 1\nplay=> select ffoo from foo;\nffoo\n----------------------------\nThu Jan 01 00:00:00 1998 EST\n(1 row)\n\nHmm, there are a bunch of uses of rint() in adt/dt.c. I'll bet\nyour problem is that you're using the broken version of rint()\nthat's in HP's older releases of /lib/pa1.1/libm.a. Have you\ninstalled patch PHSS_4630?\n\nYou may care to consult my message \"Porting notes and patches for HP-UX\n9.* and 10.*\" in the pgsql-patches archives for 21 Apr 1998. This\nstuff has been taken care of in the current development sources,\nbut if you are trying to use the 6.3.2 release you need to apply\nthe fixes yourself.\n\nBTW, hackers, I intend to submit additional text for the INSTALL\ndirections document that warns people to get PHSS_4630 if they're\nstill on HPUX 9 ... if we can confirm that the primary symptom is\nsilly datetime results, that'll be a good thing to note in INSTALL.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 23 Jul 1998 10:15:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] datetime ?#!!??@ "
},
{
"msg_contents": "> You may care to consult my message \"Porting notes and patches for HP-UX\n> 9.* and 10.*\" in the pgsql-patches archives for 21 Apr 1998. This\n> stuff has been taken care of in the current development sources,\n> but if you are trying to use the 6.3.2 release you need to apply\n> the fixes yourself.\n> \n> BTW, hackers, I intend to submit additional text for the INSTALL\n> directions document that warns people to get PHSS_4630 if they're\n> still on HPUX 9 ... if we can confirm that the primary symptom is\n> silly datetime results, that'll be a good thing to note in INSTALL.\n\nWe need an HPUX-specific FAQ. Period. We have needed it for a long\ntime, between HPUX 9.* and 10.*, and those stars are significant.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Thu, 23 Jul 1998 10:35:42 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re: [HACKERS] datetime ?#!!??@"
}
] |
[
{
"msg_contents": "Davide Libenzi wrote:\n> \n> After a lot of changes I've compiled,linked and tested (regression) my\n> PostgreSQL installation no HPUX 9.*.\n> \n> I've also built and installed the ODBC driver and I get Ms Access error\n> which the PostgresSQL server log in \"palloc failure : memory exausted\".\n> \n> Is this a server bug or ODBC driver bug ?\n> \n\nI am assuming you have a fairly new odbc driver (6.30.0248 is the\nlatest) and not the old postodbc. BTW, on our website\n(www.insightdist.com/psqlodbc) we have the DLL and a full install EXE\nfor win32 so you wouldn't have to build it yourself from the source code\nif you didn't want to.\n\nThe palloc failure usually occurs because Access uses the multiple OR\nquery (select ... where a=1 OR a=2 OR a=3...) to access the recordset.\nThe backend does not handle this very well and it is already well known\non the TODO list.\n\nThere are several possibilities to get past this:\n1. Use a non-updateable table (by setting the driver readonly option, or\nby not specifying any unique identifiers).\n2. For a query, use a snapshot recordset in the query properties.\n3. Show the OID column in the drivers advanced datasource options and\nuse that alone to index on. You should create an index on it too. This\nis still slow, but at least shouldn't crash.\n\nOther possibilities:\n\nIn house, Dave made a patch to postgres which rewrites the multiple OR\nquery into a UNION query, which works great and its fast! We may make\nthis patch available evntually on our website.\n\nByron\n",
"msg_date": "Thu, 23 Jul 1998 13:30:42 -0400",
"msg_from": "Byron Nikolaidis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] ODBC and palloc ..."
},
{
"msg_contents": "After a lot of changes I've compiled,linked and tested (regression) my\nPostgreSQL installation no HPUX 9.*.\n\nI've also built and installed the ODBC driver and I get Ms Access error\nwhich the PostgresSQL server log in \"palloc failure : memory exausted\".\n\nIs this a server bug or ODBC driver bug ?\n\n----\nDavide Libenzi at :\nMaticad s.r.l.\nVia Della Giustizia n.9 Fano (PS) 61032 Italy\nTel.: +39-721-808308 (ra) Fax: +39-721-808309\nEmail: <[email protected]>\nWWW: <http://www.maticad.it>\n\n\n",
"msg_date": "Thu, 23 Jul 1998 19:00:09 +0100",
"msg_from": "[email protected] (Davide Libenzi)",
"msg_from_op": false,
"msg_subject": "ODBC and palloc ..."
},
{
"msg_contents": "> Davide Libenzi wrote:\n> > \n> > After a lot of changes I've compiled,linked and tested (regression) my\n> > PostgreSQL installation no HPUX 9.*.\n> > \n> > I've also built and installed the ODBC driver and I get Ms Access error\n> > which the PostgresSQL server log in \"palloc failure : memory exausted\".\n> > \n> > Is this a server bug or ODBC driver bug ?\n> > \n> \n> I am assuming you have a fairly new odbc driver (6.30.0248 is the\n> latest) and not the old postodbc. BTW, on our website\n> (www.insightdist.com/psqlodbc) we have the DLL and a full install EXE\n> for win32 so you wouldn't have to build it yourself from the source code\n> if you didn't want to.\n> \n> The palloc failure usually occurs because Access uses the multiple OR\n> query (select ... where a=1 OR a=2 OR a=3...) to access the recordset.\n> The backend does not handle this very well and it is already well known\n> on the TODO list.\n> \n> There are several possibilities to get past this:\n> 1. Use a non-updateable table (by setting the driver readonly option, or\n> by not specifying any unique identifiers).\n> 2. For a query, use a snapshot recordset in the query properties.\n> 3. Show the OID column in the drivers advanced datasource options and\n> use that alone to index on. You should create an index on it too. This\n> is still slow, but at least shouldn't crash.\n> \n> Other possibilities:\n> \n> In house, Dave made a patch to postgres which rewrites the multiple OR\n> query into a UNION query, which works great and its fast! We may make\n> this patch available evntually on our website.\n\nI am thinking about this right now, and will post an analysis within the\nnext day.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Thu, 23 Jul 1998 15:45:04 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ODBC and palloc ..."
}
] |
[
{
"msg_contents": "Pathc PHSS_4630 applied !\n\nEverything OK\n\n----\nDavide Libenzi at :\nMaticad s.r.l.\nVia Della Giustizia n.9 Fano (PS) 61032 Italy\nTel.: +39-721-808308 (ra) Fax: +39-721-808309\nEmail: <[email protected]>\nWWW: <http://www.maticad.it>\n\n-----Original Message-----\nFrom: Davide Libenzi <[email protected]>\nTo: [email protected] <[email protected]>;\[email protected] <[email protected]>\nDate: Thursday, July 23, 1998 1:55 PM\nSubject: [GENERAL] datetime ?#!!??@\n\n\n>I've successfully ported PostgreSQL to HPUX 9.0.* but there is a strange\n>behaviour with the datetime data type.\n>\n>If do do this sequence :\n>\n>$ createdb mydb\n>[OK]\n>\n>$ psqk mydb\n>\n>mydb==> create table foo (ffoo datetime);\n>[OK]\n>\n>mydb==> insert into foo values ('01/01/1998');\n>[OK]\n>\n>mydb==> select ffoo from foo;\n>\n>The rusult is a totally wrong date with year 2140.\n>\n>I've tried to set datestyle but the results don't change.\n>\n>Has anybody a hint for me ?\n>\n>----\n>Davide Libenzi at :\n>Maticad s.r.l.\n>Via Della Giustizia n.9 Fano (PS) 61032 Italy\n>Tel.: +39-721-808308 (ra) Fax: +39-721-808309\n>Email: <[email protected]>\n>WWW: <http://www.maticad.it>\n>\n>\n>\n\n",
"msg_date": "Thu, 23 Jul 1998 18:53:10 +0100",
"msg_from": "[email protected] (Davide Libenzi)",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] datetime ?#!!??@"
}
] |
[
{
"msg_contents": "Davide Libenzi wrote:\n> \n> I think this is not my case.\n> See attachment log for details.\n> \n\n>From looking at the log that is *exactly* your case. I pulled the\noffending query out and cleaned it up a bit.\n\nYou have a two-part key (padre & figlio) and you can see the multiple\nOR's between them. The MS Jet db engine typically uses a rowset size of\n10 (so you see 10 keys below) and a keyset size of a couple of hundred\nor so. In other words, it first read in 200 keys (the \"keyset\") and\nthen uses these keys to access a \"rowset\" of size 10 out of the entire\n\"resultset\" (how ever many records you have total). This is called a\nMixed (Keyset/Dynamic) cursor or a \"Dynaset\". Like I said in my last\nemail, if you change the datasource to be read-only, then re-link your\ntable in Access, it will not use this style of retrieval and you should\nget some results. OR, you can try the other options I mentioned.\n\nSELECT \"padre\",\"figlio\",\"qta\" FROM \"distinta\" \nWHERE \"padre\" = 'PPPA' AND \"figlio\" = 'AAA' \nOR \"padre\" = 'KKKL' AND \"figlio\" = 'LLLA'\nOR \"padre\" = 'AAAAA' AND \"figlio\" = 'ASDWDWD'\nOR \"padre\" = 'AAAAA' AND \"figlio\" = 'ASDWDWD'\nOR \"padre\" = 'AAAAA' AND \"figlio\" = 'ASDWDWD'\nOR \"padre\" = 'AAAAA' AND \"figlio\" = 'ASDWDWD'\nOR \"padre\" = 'AAAAA' AND \"figlio\" = 'ASDWDWD'\nOR \"padre\" = 'AAAAA' AND \"figlio\" = 'ASDWDWD' \nOR \"padre\" = 'AAAAA' AND \"figlio\" = 'ASDWDWD'\nOR \"padre\" = 'AAAAA' AND \"figlio\" = 'ASDWDWD'\n\n\n\nThe only problem with this style of retrieving records is that the\nPostgres backend can not handle it. It results in exponential memory\nusage as it tries to optimize it. You could type in the above query by\nhand to the monitor and see the same result.\n\nThen for fun try rewriting the query to use UNIONS instead of OR's and\nyou will see how fast it is (assuming you have an index). 
See below.\n\nSELECT \"padre\",\"figlio\",\"qta\" FROM \"distinta\" \nWHERE \"padre\" = 'PPPA' AND \"figlio\" = 'AAA' \nUNION\nSELECT \"padre\",\"figlio\",\"qta\" FROM \"distinta\" \nWHERE \"padre\" = 'KKKL' AND \"figlio\" = 'LLLA'\nUNION\nSELECT \"padre\",\"figlio\",\"qta\" FROM \"distinta\" \nWHERE \"padre\" = 'AAAAA' AND \"figlio\" = 'ASDWDWD'\nUNION\nSELECT \"padre\",\"figlio\",\"qta\" FROM \"distinta\" \nWHERE \"padre\" = 'AAAAA' AND \"figlio\" = 'ASDWDWD'\n....\n\n\n\nByron\n",
"msg_date": "Thu, 23 Jul 1998 14:50:03 -0400",
"msg_from": "Byron Nikolaidis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] ODBC and palloc ..."
},
{
"msg_contents": "I think this is not my case.\nSee attachment log for details.\n\nHi\n\n----\nDavide Libenzi at :\nMaticad s.r.l.\nVia Della Giustizia n.9 Fano (PS) 61032 Italy\nTel.: +39-721-808308 (ra) Fax: +39-721-808309\nEmail: <[email protected]>\nWWW: <http://www.maticad.it>\n\n-----Original Message-----\nFrom: Byron Nikolaidis <[email protected]>\nTo: Davide Libenzi <[email protected]>\nCc: [email protected] <[email protected]>;\[email protected] <[email protected]>; David\nHartwig <[email protected]>\nDate: Thursday, July 23, 1998 6:59 PM\nSubject: Re: [HACKERS] ODBC and palloc ...\n\n\n>Davide Libenzi wrote:\n>>\n>> After a lot of changes I've compiled,linked and tested (regression) my\n>> PostgreSQL installation no HPUX 9.*.\n>>\n>> I've also built and installed the ODBC driver and I get Ms Access error\n>> which the PostgresSQL server log in \"palloc failure : memory exausted\".\n>>\n>> Is this a server bug or ODBC driver bug ?\n>>\n>\n>I am assuming you have a fairly new odbc driver (6.30.0248 is the\n>latest) and not the old postodbc. BTW, on our website\n>(www.insightdist.com/psqlodbc) we have the DLL and a full install EXE\n>for win32 so you wouldn't have to build it yourself from the source code\n>if you didn't want to.\n>\n>The palloc failure usually occurs because Access uses the multiple OR\n>query (select ... where a=1 OR a=2 OR a=3...) to access the recordset.\n>The backend does not handle this very well and it is already well known\n>on the TODO list.\n>\n>There are several possibilities to get past this:\n>1. Use a non-updateable table (by setting the driver readonly option, or\n>by not specifying any unique identifiers).\n>2. For a query, use a snapshot recordset in the query properties.\n>3. Show the OID column in the drivers advanced datasource options and\n>use that alone to index on. You should create an index on it too. 
This\n>is still slow, but at least shouldn't crash.\n>\n>Other possibilities:\n>\n>In house, Dave made a patch to postgres which rewrites the multiple OR\n>query into a UNION query, which works great and its fast! We may make\n>this patch available evntually on our website.\n>\n>Byron\n>",
"msg_date": "Thu, 23 Jul 1998 20:09:21 +0100",
"msg_from": "[email protected] (Davide Libenzi)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ODBC and palloc ..."
}
] |
[
{
"msg_contents": "\nIs something broken in the \"between\" command? I have 6.3 running on the\nproduction machine and the following sql statement works fine:\n\nselect city from camps3 where lat between 43.833298 and 44.233298;\n\nOn Jul 20, I cvsup'd the current version and have it running and the same\ncall results in:\n\nPQresultStatus: 7\nPQerrorMessage: pqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally before or\nwhile processing the request.\n\n\nWas something broke? Is it now fixed?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Searchable Campground Listings http://www.camping-usa.com\n \"There is no outfit less entitled to lecture me about bloat\n than the federal government\" -- Tony Snow\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 23 Jul 1998 15:02:04 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": true,
"msg_subject": "Between broken?"
}
] |
[
{
"msg_contents": "How (can?) you create pointers in a database?\nBy this I mean can you put a filename, ftp address, URL, etc that the\ndatabase would nterperet.\n\n-Greg\n\n",
"msg_date": "Thu, 23 Jul 1998 16:29:22 -0400",
"msg_from": "Gregory Holston <[email protected]>",
"msg_from_op": true,
"msg_subject": "pointers"
}
] |
[
{
"msg_contents": "\nDunno why, but I rebooted and the between clause now works. Go figure..\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Searchable Campground Listings http://www.camping-usa.com\n \"There is no outfit less entitled to lecture me about bloat\n than the federal government\" -- Tony Snow\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 23 Jul 1998 20:44:47 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": true,
"msg_subject": "Between -- update"
}
] |
[
{
"msg_contents": "At 10:56 PM 7/23/98, The Hermit Hacker wrote:\n>On Thu, 23 Jul 1998, Maarten Boekhold wrote:\n\n>> In fact, they are handled by SQL: CREATE DATABASE and DROP DATABASE. The\n>> createdb and destroydb tools just call these SQL statements....\n>\n> Here's an odd thought:\n>\n> Let's remove the \"I don't want to think\" utilities like\n>{create,destroy}{db,user} and force DBA's to actually use the *proper*\n>functions.\n\n:-)\n\nActually...\n\nWhile the man pages indicate that these invoke psql, and that a postmaster\nmust be running, and somebody really smart could infer that that means that\nthere is SQL to do the action, it would be much, much better if the man\npages explicitly stated that it was merely a shortcut to using the sql.\n\n--\n--\n-- \"TANSTAAFL\" Rich [email protected]\n\n\n",
"msg_date": "Thu, 23 Jul 1998 21:29:23 -0500",
"msg_from": "[email protected] (Richard Lynch)",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] Re: [HACKERS] [Fwd: SGVLLUG Oracle and Informix on\n\tLinux]"
},
{
"msg_contents": "> >> In fact, they are handled by SQL: CREATE DATABASE and DROP DATABASE. The\n> >> createdb and destroydb tools just call these SQL statements....\n\n> > Let's remove the \"I don't want to think\" utilities like\n> >{create,destroy}{db,user} and force DBA's to actually use the *proper*\n> >functions.\n\n> While the man pages indicate that these invoke psql, and that a postmaster\n> must be running, and somebody really smart could infer that that means that\n> there is SQL to do the action, it would be much, much better if the man\n> pages explicitly stated that it was merely a shortcut to using the sql.\n\nI think only doing it the SQL way would be fine. Documentation would, of\ncourse, have to cover it. I want, no need, to know what functionality\nbelongs to SQL and what belongs to PostgreSQL. I've certainly not got any\nqualms about dropping into psql to do things. I like psql.\n\n\nBruce Tong | Got me an office; I'm there late at night.\nSystems Programmer | Just send me e-mail, maybe I'll write.\nElectronic Vision / FITNE | \[email protected] | -- Joe Walsh for the 21st Century\n\n\n",
"msg_date": "Fri, 24 Jul 1998 08:42:13 -0400 (EDT)",
"msg_from": "Bruce Tong <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re: [HACKERS] [Fwd: SGVLLUG Oracle and Informix on\n\tLinux]"
},
{
"msg_contents": "> > >> In fact, they are handled by SQL: CREATE DATABASE and DROP DATABASE. The\n> > >> createdb and destroydb tools just call these SQL statements....\n> \n> > > Let's remove the \"I don't want to think\" utilities like\n> > >{create,destroy}{db,user} and force DBA's to actually use the *proper*\n> > >functions.\n> \n> > While the man pages indicate that these invoke psql, and that a postmaster\n> > must be running, and somebody really smart could infer that that means that\n> > there is SQL to do the action, it would be much, much better if the man\n> > pages explicitly stated that it was merely a shortcut to using the sql.\n> \n> I think only doing it the SQL way would be fine. Documentation would, of\n> course, have to cover it. I want, no need, to know what functionality\n> belongs to SQL and what belongs to PostgreSQL. I've certainly not got any\n> qualms about dropping into psql to do things. I like psql.\n\nThey have to connect to template1 to do the work. Currently, they don't\nneed to know template1 even exists, so it seems like an added burden. I\nwill add a mention to the createdb, destroydb man pages. createuser\ndoes psql too.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Fri, 24 Jul 1998 12:05:05 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [DOCS] Re: [GENERAL] Re: [HACKERS] [Fwd: SGVLLUG Oracle and\n\tInformix on Linux]"
},
{
"msg_contents": "> They have to connect to template1 to do the work. Currently, they don't\n> need to know template1 even exists, so it seems like an added burden. I\n> will add a mention to the createdb, destroydb man pages. createuser\n> does psql too.\n\nAnd as a result, I didn't know what template1 was for until now, and I\nfear there's more to it than just this. Up until this point, I assumed\n\"template1\" was an example database which could be copied, or something.\nAt least that's what a template is to me.\n\nOkay, I've suspected there was more to \"template1\" for a little while now,\nbut I'd not gotten around to looking into it more. Still, my first\nimpression was it was a sample database. ;) Maybe a name like \"master\"\nwould be clearer, or maybe that means something else to someone.\n\n\nBruce Tong | Got me an office; I'm there late at night.\nSystems Programmer | Just send me e-mail, maybe I'll write.\nElectronic Vision / FITNE | \[email protected] | -- Joe Walsh for the 21st Century\n\n",
"msg_date": "Fri, 24 Jul 1998 12:29:42 -0400 (EDT)",
"msg_from": "Bruce Tong <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [DOCS] Re: [GENERAL] Re: [HACKERS] [Fwd: SGVLLUG Oracle and\n\tInformix on Linux]"
},
{
"msg_contents": "> Okay, I've suspected there was more to \"template1\" for a little while now,\n> but I'd not gotten around to looking into it more. Still, my first\n> impression was it was a sample database. ;) Maybe a name like \"master\"\n> would be clearer, or maybe that means something else to someone.\n> \n\nYes, master would be a better name than template1.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Fri, 24 Jul 1998 12:32:45 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [DOCS] Re: [GENERAL] Re: [HACKERS] [Fwd: SGVLLUG Oracle and\n\tInformix on Linux]y"
},
{
"msg_contents": "On Fri, 24 Jul 1998, Bruce Tong wrote:\n\n> > They have to connect to template1 to do the work. Currently, they don't\n> > need to know template1 even exists, so it seems like an added burden. I\n> > will add a mention to the createdb, destroydb man pages. createuser\n> > does psql too.\n> \n> And as a result, I didn't know what template1 was for until now, and I\n> fear there's more to it than just this. Up until this point, I assumed\n> \"template1\" was an example database which could be copied, or something.\n> At least that's what a template is to me.\n\n\tIn a sense, that is exactly what it is. When you do a 'createdb',\nit uses template1 as the \"template\" for the new database, and then buildds\nfrom there...\n\n\n",
"msg_date": "Fri, 24 Jul 1998 12:34:27 -0400 (EDT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [DOCS] Re: [GENERAL] Re: [HACKERS] [Fwd: SGVLLUG Oracle and\n\tInformix on Linux]"
},
{
"msg_contents": "\n> I think only doing it the SQL way would be fine. Documentation would, of\n> course, have to cover it\n\nThat last sentence says it all...\"Documentations would, of course, have to cofver\nit.\" The reason I used createdb to generate the my database, is that is what the\nman page said to do. Unfortunately it is hard to search in the man pages for\nsomething like: How do I create a database?. I think I came accross the\ncreatedb man page via the postgres man page's see also section...james\n\n",
"msg_date": "Fri, 24 Jul 1998 14:27:43 -0400",
"msg_from": "James Olin Oden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re: [HACKERS] [Fwd: SGVLLUG Oracle and Informix on\n\tLinux]"
},
{
"msg_contents": "> \n> > Okay, I've suspected there was more to \"template1\" for a little while now,\n> > but I'd not gotten around to looking into it more. Still, my first\n> > impression was it was a sample database. ;) Maybe a name like \"master\"\n> > would be clearer, or maybe that means something else to someone.\n> > \n> \n> Yes, master would be a better name than template1.\n\n\nI disagree. template1 is not only the \"master\" database, it is also used as\nas the template when creating a new database. That is, if you want to create\nall your databases with certain characteristics (say installed functions or\ntypes or something) you can set up template1 the way you want, dump it to\nthe template dump file and then any new db will be created with your\ncustomization. So template1 really is a discriptive term.\n\n-dg\n\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n - If simplicity worked, the world would be overrun with insects. -\n",
"msg_date": "Fri, 24 Jul 1998 12:09:12 -0700 (PDT)",
"msg_from": "[email protected] (David Gould)",
"msg_from_op": false,
"msg_subject": "Re: [DOCS] Re: [GENERAL] Re: [HACKERS] [Fwd: SGVLLUG Oracle and\n\tInformix on Linux]y"
},
{
"msg_contents": "> > \n> > > Okay, I've suspected there was more to \"template1\" for a little while now,\n> > > but I'd not gotten around to looking into it more. Still, my first\n> > > impression was it was a sample database. ;) Maybe a name like \"master\"\n> > > would be clearer, or maybe that means something else to someone.\n> > > \n> > \n> > Yes, master would be a better name than template1.\n> \n> \n> I disagree. template1 is not only the \"master\" database, it is also used as\n> as the template when creating a new database. That is, if you want to create\n> all your databases with certain characteristics (say installed functions or\n> types or something) you can set up template1 the way you want, dump it to\n> the template dump file and then any new db will be created with your\n> customization. So template1 really is a discriptive term.\n\nGood point.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Fri, 24 Jul 1998 15:13:47 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [DOCS] Re: [GENERAL] Re: [HACKERS] [Fwd: SGVLLUG Oracle and\n\tInformix on Linux]y"
},
{
"msg_contents": "> > > Okay, I've suspected there was more to \"template1\" for a little while now,\n> > > but I'd not gotten around to looking into it more. Still, my first\n> > > impression was it was a sample database. ;) Maybe a name like \"master\"\n> > > would be clearer, or maybe that means something else to someone.\n> > \n> > Yes, master would be a better name than template1.\n> \n> I disagree. template1 is not only the \"master\" database, it is also used as\n> as the template when creating a new database. That is, if you want to create\n> all your databases with certain characteristics (say installed functions or\n> types or something) you can set up template1 the way you want, dump it to\n> the template dump file and then any new db will be created with your\n> customization. So template1 really is a discriptive term.\n\nReally? Neat. Okay, its descriptive. Did I miss this in the docs or forget\nit? I think this should be mentioned at an appropriate place in the\nbeginning of the docs. Such as when discussing how to create a database.\n\n\nBruce Tong | Got me an office; I'm there late at night.\nSystems Programmer | Just send me e-mail, maybe I'll write.\nElectronic Vision / FITNE | \[email protected] | -- Joe Walsh for the 21st Century\n\n\n",
"msg_date": "Fri, 24 Jul 1998 16:22:08 -0400 (EDT)",
"msg_from": "Bruce Tong <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [DOCS] Re: [GENERAL] Re: [HACKERS] [Fwd: SGVLLUG Oracle and\n\tInformix on Linux]y"
},
{
"msg_contents": "> > > Let's remove the \"I don't want to think\" utilities like\n> > >{create,destroy}{db,user} and force DBA's to actually use the *proper*\n> > >functions.\n\nIMHO (actually make that IMVeryHO) This is probably a bad idea... We\nshould just update the man pages to detail the SQL code that can be used\ninstead of the command. It doesn't hurt anything/anyone to leave the\nprograms as they are, and can even be helpful to people writing scripts to\nautomate management of their servers.\n\nChris\n\n\n",
"msg_date": "Mon, 27 Jul 1998 17:06:25 -0400 (EDT)",
"msg_from": "Chris Johnson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re: [HACKERS] [Fwd: SGVLLUG Oracle and Informix on\n\tLinux]"
},
{
"msg_contents": "> > > > Let's remove the \"I don't want to think\" utilities like\n> > > >{create,destroy}{db,user} and force DBA's to actually use the *proper*\n> > > >functions.\n> \n> IMHO (actually make that IMVeryHO) This is probably a bad idea... We\n> should just update the man pages to detail the SQL code that can be used\n> instead of the command. It doesn't hurt anything/anyone to leave the\n> programs as they are...\n\nTrue.\n\n> ... and can even be helpful to people writing scripts to\n> automate management of their servers.\n\nThis is already possible with psql. Most of my psql work is done via a\nmakefile, infact. I tend to put my SQL into a file such as create.sql and\ndestroy.sql then my project makefile can handle the rest just by having\npsql read those files when needed. Sure, I do that for development, but it\nwould work for maintenance.\n\n\nBruce Tong | Got me an office; I'm there late at night.\nSystems Programmer | Just send me e-mail, maybe I'll write.\nElectronic Vision / FITNE | \[email protected] | -- Joe Walsh for the 21st Century\n\n\n",
"msg_date": "Mon, 27 Jul 1998 17:30:16 -0400 (EDT)",
"msg_from": "Bruce Tong <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re: [HACKERS] [Fwd: SGVLLUG Oracle and Informix on\n\tLinux]"
},
{
"msg_contents": "> > > > Let's remove the \"I don't want to think\" utilities like\n> > > >{create,destroy}{db,user} and force DBA's to actually use the *proper*\n> > > >functions.\n> \n> IMHO (actually make that IMVeryHO) This is probably a bad idea... We\n> should just update the man pages to detail the SQL code that can be used\n> instead of the command. It doesn't hurt anything/anyone to leave the\n> programs as they are, and can even be helpful to people writing scripts to\n> automate management of their servers.\n\nman pages already updated.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Tue, 28 Jul 1998 03:11:18 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re: [HACKERS] [Fwd: SGVLLUG Oracle and Informix on\n\tLinux]"
}
] |
[
{
"msg_contents": "Could anybody tell me how many tables/colums are allowed?\n\n1. maximum number of tables in a database\n\nI guess only the limitiation is OID for each table. So it's up to 2^31 \nin theory?\n\n2. maximum number of indexes in a database\n\nditto.\n\n3. maximum number of tuples in a table\n\nditto.\n\n4. maximum number of columns in a table\n\nsince max tuple size is ~8K, max number of columns would be:\n\n8K/(least size data type (int2?)) = 4000\n\nor any other limitaion?\n\n5. maximum number of indexes in a table\n\nat most 4 above. or any other limitation?\n--\nTatsuo Ishii\[email protected]\n",
"msg_date": "Fri, 24 Jul 1998 11:56:20 +0900",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "limitaion of PostgreSQL"
}
] |
[
{
"msg_contents": "\n\tHey, I never saw a follow-up on any of the BLOB questions, so I thought\nI'd throw them in here again:\n\n\tIs there a way to actually *delete* a BLOB?\n\n\tIs anyone working on BLOBs?\n\n\tLast time I tried to use it, it seemed to store each BLOB in a file by\nitself (two files?) in the middle of my data directory, which seems like a very\nbad thing IMO. I also couldn't find any documented way of removing them, and\nsimply deleting the file(s) caused my vacuum to fail. I'd really like to use\nBLOBs instead of nasty MIME encoding of large images, but I'd definitely need\nto be able to delete, and it'd be nice if I wouldn't fill up that one directory\nwith them. If nobody is working on BLOBs, it might be fun to find a way to\nimplement another storage mechanism, possibly a single file (group of files) to\nstore the BLOB, or a directory hierarchy.\n\n\tTIA.\n\n--\nSA, software.net My girlfriend asked me which one I like better.\npub 1024/3CAE01D5 1994/11/03 Dustin Sallings <[email protected]>\n| Key fingerprint = 87 02 57 08 02 D0 DA D6 C8 0F 3E 65 51 98 D8 BE \nL_______________________ I hope the answer won't upset her. ____________\n\n",
"msg_date": "Thu, 23 Jul 1998 21:55:54 -0700",
"msg_from": "Dustin Sallings <[email protected]>",
"msg_from_op": true,
"msg_subject": "BLOBs"
},
{
"msg_contents": "[moved this to the hackers list]\n\nOn Thu, 23 Jul 1998, Dustin Sallings wrote:\n\n> \n> \tHey, I never saw a follow-up on any of the BLOB questions, so I thought\n> I'd throw them in here again:\n> \n> \tIs there a way to actually *delete* a BLOB?\n\nthe lo_unlink() function will delete the blob given it's oid.\n\n> \tIs anyone working on BLOBs?\n\nI'm working on the blob orphaning problem that exists for JDBC & ODBC.\n\n> \tLast time I tried to use it, it seemed to store each BLOB in a file by\n> itself (two files?) in the middle of my data directory, which seems like a very\n> bad thing IMO.\n\nThe current scheme actually creates a table and index pair for each blob,\nwhich is what you are seeing.\n\nThere was talk of having another storage manager which stores all of them\nin a single file, but nothing happened with it.\n\n> I also couldn't find any documented way of removing them, and\n> simply deleting the file(s) caused my vacuum to fail. I'd really like to use\n> BLOBs instead of nasty MIME encoding of large images, but I'd definitely need\n> to be able to delete, and it'd be nice if I wouldn't fill up that one directory\n> with them. If nobody is working on BLOBs, it might be fun to find a way to\n> implement another storage mechanism, possibly a single file (group of files) to\n> store the BLOB, or a directory hierarchy.\n\nTake a look at the ImageViewer example in the src/interfaces/jdbc/examples\ndirectory. Ok, it's in java, but it does show how to store, retrieve and\ndelete images from a database.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.retep.org.uk\n************ Someday I may rebuild this signature completely ;-) ************\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n",
"msg_date": "Sat, 25 Jul 1998 12:38:41 +0100 (GMT)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] BLOBs"
}
] |
[
{
"msg_contents": " Just thought I'd chip in my two cents on all of this. Pardon me if\nI'm not totally coherent... I've only had a few hours of sleep in the\nlast few days.\n It seems to me that the best solution is a unified CIDR type (using\nthe proper terminology) which can represent hosts, networks, and/or\nnetmasks in a variety of address families, notably IPv4 and IPv6.\nPerhaps I don't fully understand what capabilities we have with\natttypmod, but could we use it to \"subtype\" the CIDR type? Then\nvarious operations, including input/output, queries, and indexing,\ncould act accordingly.\n\n-Brandon :)\n",
"msg_date": "Fri, 24 Jul 1998 00:38:03 -0500 (CDT)",
"msg_from": "Brandon Ibach <[email protected]>",
"msg_from_op": true,
"msg_subject": "CIDR type"
}
] |
[
{
"msg_contents": "I made an index on the OID field of a table, but I find that the system\nis still pretty picky about whether it will use the index or not.\n\ntgl=> explain select * from history where oid = 34311;\nNOTICE: QUERY PLAN:\nSeq Scan on history (cost=25.66 size=1 width=100)\ntgl=> explain update history set simstatus = '-' where oid = 34311;\nNOTICE: QUERY PLAN:\nSeq Scan on history (cost=25.66 size=1 width=94)\n\nOh dear, why isn't it using the index? By chance I tried this:\n\ntgl=> explain select * from history where oid = 34311::oid;\nNOTICE: QUERY PLAN:\nIndex Scan using history_oid_index on history (cost=21.92 size=179 width=100)\ntgl=> explain update history set simstatus = '-' where oid = 34311::oid;\nNOTICE: QUERY PLAN:\nIndex Scan using history_oid_index on marketorderhistory\n(cost=2.05 size=1 width=94)\n\nMuch better. But why do I need to cast the constant to OID explicitly\nto get a reasonable query plan? The system obviously knows that it has to\ncast the thing to OID at some point... I'd have expected that to happen\nbefore the query optimizer runs.\n\nThis is with recent development sources.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 24 Jul 1998 15:00:11 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query plan affected by lack of explicit cast?"
}
] |
[
{
"msg_contents": "At 4:26 PM 7/24/98, Bruce Tong wrote:\n\n>Since I'm learning SQL in my spare time, I tend to use these feature in MS\n>Access and PgAccess to point me in the right direction or sometimes\n>confirm, or deny my assertions.\n\nAbsolutely. For new (to me) SQL, I point and click at what I think MS\nwould want me to point and click at, and then look at the SQL and rip out\nthe crap I know they got wrong, and then run it to see what they got wrong\nin the new stuff, and then mess around with bits and pieces of what they\ncame up with until it's right.\n\n>I like psql, but its not the kind of tool which suggests other\n>alternatives. It just says \"this part is bogus.\" That's fine, but when I\n>fail to get it right after a dozen attempts, its nice to let something\n>else take a stab at it.\n\nI especially hate that it doesn't even say \"this part is bogus\" sometimes.\nIt says \"error near 'from'\". *WHICH* from. I could easily have 3 or 4 in\neven the simplest query. And god forbid it says \"error near ','\". How\nuseless is that?!\n\nOkay, enough bitching, I'll be positive:\npsql would be infinitely nicer if it dumped out the query and indicated the\nEXACT location of the problem.\n\n--\n--\n-- \"TANSTAAFL\" Rich [email protected]\n\n\n",
"msg_date": "Fri, 24 Jul 1998 17:52:43 -0500",
"msg_from": "[email protected] (Richard Lynch)",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] Re: [INTERFACES] ODBC Driver -- Access Order By problem\n\tsolved!!!"
},
{
"msg_contents": "> I especially hate that it doesn't even say \"this part is bogus\" sometimes.\n> It says \"error near 'from'\". *WHICH* from. I could easily have 3 or 4 in\n> even the simplest query. And god forbid it says \"error near ','\". How\n> useless is that?!\n> \n> Okay, enough bitching, I'll be positive:\n> psql would be infinitely nicer if it dumped out the query and indicated the\n> EXACT location of the problem.\n> \n\nNew 6.4 psql help will show:\n\t\n\ttest=> \\h select\n\tCommand: select\n\tDescription: retrieve tuples\n\tSyntax:\n\tSELECT [DISTINCT [ON attrN]] expr1 [AS attr1], ...exprN\n\t [INTO [TABLE] class_name]\n\t [FROM from_list]\n\t [WHERE qual]\n\t [GROUP BY group_list]\n\t [HAVING having_clause]\n\t [ORDER BY attr1 [ASC|DESC] [USING op1], ...attrN ]\n\t [UNION [ALL] SELECT ...];\n\t\n\nRemoved <> around user-supplied values, and uppercase the reserved words\nto make things clear. I don't think there is a need to do this on the\nmanual pages because we have bolding. Comments?\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Fri, 24 Jul 1998 20:18:43 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re: [INTERFACES] ODBC Driver -- Access Order By problem\n\tsolved!!!"
},
{
"msg_contents": "\nNow it is:\n\n> New 6.4 psql help will show:\n> \t\n> \ttest=> \\h select\n> \tCommand: select\n> \tDescription: retrieve tuples\n> \tSyntax:\n> \t\tSELECT [DISTINCT [ON attrN]] expr1 [AS attr1], ...exprN\n> \t [INTO [TABLE] class_name]\n> \t [FROM from_list]\n> \t [WHERE qual]\n> \t [GROUP BY group_list]\n> \t [HAVING having_clause]\n> \t [ORDER BY attr1 [ASC|DESC] [USING op1], ...attrN ]\n> \t [UNION [ALL] SELECT ...];\n> \t\n> \n> Removed <> around user-supplied values, and uppercase the reserved words\n> to make things clear. I don't think there is a need to do this on the\n> manual pages because we have bolding. Comments?\n> \n> \n> -- \n> Bruce Momjian | 830 Blythe Avenue\n> [email protected] | Drexel Hill, Pennsylvania 19026\n> + If your life is a hard drive, | (610) 353-9879(w)\n> + Christ can be your backup. | (610) 853-3000(h)\n> \n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Fri, 24 Jul 1998 21:00:42 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] Re: [INTERFACES] ODBC Driver -- Access\n\tOrder By problem solved!!!"
}
] |
[
{
"msg_contents": "> >> >Removed <> around user-supplied values, and uppercase the reserved words\n> >> >to make things clear. I don't think there is a need to do this on the\n> >> >manual pages because we have bolding. Comments?\n> >>\n> >> You the man! :-)\n> >>\n> >\n> >That is funny. Comments on the man page/uppercase issue?\n> \n> 100% better. That's what I was trying to say.\n\nShould I change the manual pages to keywork uppercase too, or just psql\nhelp?\n\n> \n> The only thing I still don't get is why distinct seems to be a one or all,\n> but nothing in between, sort of deal, at least if that syntax is\n> correct...?\n\nOne or all. Not sure why that is a requirement.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Fri, 24 Jul 1998 23:54:36 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] Re: [INTERFACES] ODBC Driver -- Access Order By problem\n\tsolved!!!"
}
] |
[
{
"msg_contents": "> >Should I change the manual pages to keywork uppercase too, or just psql\n> >help?\n> \n> Oh sorry. The bold etc on the web is great. ALL CAPS is only for\n> unformatted text, imho...\n\nI agree. That is my personal opinion too.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sat, 25 Jul 1998 00:05:02 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] Re: [INTERFACES] ODBC Driver -- Access Order By problem\n\tsolved!!!"
}
] |
[
{
"msg_contents": "Hi,\nit's easy to fix the typo in backend/parser/gram.y on line 2031 (:->;).\nNext problem is pg_wchar.h. This header file moved from /include/regex\nto /include/mb. A 'make depend' doesn't help very much, so I copied it\nback to the regex subdirectory.\n\n-Egon\n",
"msg_date": "Sat, 25 Jul 1998 14:04:24 +0200",
"msg_from": "Egon Schmid <[email protected]>",
"msg_from_op": true,
"msg_subject": "Compile of recent CVS fails"
},
{
"msg_contents": "At 2:04 PM 98.7.25 +0200, Egon Schmid wrote:\n>Next problem is pg_wchar.h. This header file moved from /include/regex\n>to /include/mb. A 'make depend' doesn't help very much, so I copied it\n>back to the regex subdirectory.\n\nSorry. this is due to patches I recently sent. Some of them had been \nrejected because of the changes since I grabbed the snapshot.\nI have already sent addtional patches, and the problem should be\nsolved soon.\n--\nTatsuo Ishii\[email protected]\n\n",
"msg_date": "Sat, 25 Jul 1998 23:30:57 +0900",
"msg_from": "[email protected] (Tatsuo Ishii)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Compile of recent CVS fails"
}
] |
[
{
"msg_contents": "Hi,\n\ncompilation of the current snapshot \n\n 4137013 Jul 25 07:02 postgresql.snapshot.tar.gz\n\ndies on linux with the following error:\n\nmake[2]: Entering directory `/usr/local/postgresql-6.4beta/src/backend/parser'\n/usr/bin/bison -y -d gram.y\nconflicts: 1 shift/reduce\nmv y.tab.c gram.c\nmv y.tab.h parse.h\ngcc -I../../include -I../../backend -O2 -Wall -Wmissing-prototypes -I.. -Wno-error -c analyze.c -o analyze.o\ngcc -I../../include -I../../backend -O2 -Wall -Wmissing-prototypes -I.. -Wno-error -c gram.c -o gram.o\nbison.simple: In function `yyparse':\nbison.simple:327: warning: implicit declaration of function `yyerror'\nbison.simple:387: warning: implicit declaration of function `yylex'\ngram.y:2030: warning: assignment makes integer from pointer without a cast\ngram.y:2031: warning: assignment makes integer from pointer without a cast\ngram.y:2031: parse error before `:'\ngram.y:2032: warning: assignment makes integer from pointer without a cast\nmake[2]: *** [gram.o] Error 1\nmake[2]: Leaving directory `/usr/local/postgresql-6.4beta/src/backend/parser'\nmake[1]: *** [parser.dir] Error 2\nmake[1]: Leaving directory `/usr/local/postgresql-6.4beta/src/backend'\nmake: *** [all] Error 2\n\n\nEdmund\n-- \nEdmund Mergl mailto:[email protected]\nIm Haldenhau 9 http://www.bawue.de/~mergl\n70565 Stuttgart fon: +49 711 747503\nGermany\n",
"msg_date": "Sat, 25 Jul 1998 21:20:28 +0200",
"msg_from": "Edmund Mergl <[email protected]>",
"msg_from_op": true,
"msg_subject": "current snapshot"
},
{
"msg_contents": "> Hi,\n> \n> compilation of the current snapshot \n> \n> 4137013 Jul 25 07:02 postgresql.snapshot.tar.gz\n> \n> dies on linux with the following error:\n> \n> make[2]: Entering directory `/usr/local/postgresql-6.4beta/src/backend/parser'\n> /usr/bin/bison -y -d gram.y\n> conflicts: 1 shift/reduce\n> mv y.tab.c gram.c\n> mv y.tab.h parse.h\n> gcc -I../../include -I../../backend -O2 -Wall -Wmissing-prototypes -I.. -Wno-error -c analyze.c -o analyze.o\n> gcc -I../../include -I../../backend -O2 -Wall -Wmissing-prototypes -I.. -Wno-error -c gram.c -o gram.o\n> bison.simple: In function `yyparse':\n> bison.simple:327: warning: implicit declaration of function `yyerror'\n> bison.simple:387: warning: implicit declaration of function `yylex'\n> gram.y:2030: warning: assignment makes integer from pointer without a cast\n> gram.y:2031: warning: assignment makes integer from pointer without a cast\n> gram.y:2031: parse error before `:'\n> gram.y:2032: warning: assignment makes integer from pointer without a cast\n> make[2]: *** [gram.o] Error 1\n> make[2]: Leaving directory `/usr/local/postgresql-6.4beta/src/backend/parser'\n> make[1]: *** [parser.dir] Error 2\n> make[1]: Leaving directory `/usr/local/postgresql-6.4beta/src/backend'\n> make: *** [all] Error 2\n> \n\nI am fixing this now.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sat, 25 Jul 1998 21:15:11 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] current snapshot"
}
] |
[
{
"msg_contents": "Hi,\n\nlooks like a typo:\n\n\n*** gram.y.orig Sat Jul 25 21:48:53 1998\n--- gram.y Sat Jul 25 21:49:05 1998\n***************\n*** 2028,2034 ****\n ;\n \n opt_trans: WORK { $$ = NULL; }\n! | TRANSACTION { $$ = NULL: }\n | /*EMPTY*/ { $$ = NULL; }\n ;\n \n--- 2028,2034 ----\n ;\n \n opt_trans: WORK { $$ = NULL; }\n! | TRANSACTION { $$ = NULL; }\n | /*EMPTY*/ { $$ = NULL; }\n ;\n \n\nEdmund\n-- \nEdmund Mergl mailto:[email protected]\nIm Haldenhau 9 http://www.bawue.de/~mergl\n70565 Stuttgart fon: +49 711 747503\nGermany\n",
"msg_date": "Sat, 25 Jul 1998 21:52:34 +0200",
"msg_from": "Edmund Mergl <[email protected]>",
"msg_from_op": true,
"msg_subject": "compilation of current snapshot dies"
},
{
"msg_contents": "Please do not apply this. I have a fix I am applying now.\n\n\n> Hi,\n> \n> looks like a typo:\n> \n> \n> *** gram.y.orig Sat Jul 25 21:48:53 1998\n> --- gram.y Sat Jul 25 21:49:05 1998\n> ***************\n> *** 2028,2034 ****\n> ;\n> \n> opt_trans: WORK { $$ = NULL; }\n> ! | TRANSACTION { $$ = NULL: }\n> | /*EMPTY*/ { $$ = NULL; }\n> ;\n> \n> --- 2028,2034 ----\n> ;\n> \n> opt_trans: WORK { $$ = NULL; }\n> ! | TRANSACTION { $$ = NULL; }\n> | /*EMPTY*/ { $$ = NULL; }\n> ;\n> \n> \n> Edmund\n> -- \n> Edmund Mergl mailto:[email protected]\n> Im Haldenhau 9 http://www.bawue.de/~mergl\n> 70565 Stuttgart fon: +49 711 747503\n> Germany\n> \n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sat, 25 Jul 1998 21:15:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] compilation of current snapshot dies"
}
] |
[
{
"msg_contents": "Hi,\n\nmake[2]: Entering directory `/usr/local/postgresql-6.4beta/src/bin/initdb'\nmake[2]: *** No rule to make target `initdb.sh', needed by `initdb'. Stop.\n\nindeed, initdb.sh is missing:\n\nsls:postgres> l /usr/local/pgsql/src/bin/initdb\ntotal 3\ndrwxr-xr-x 2 postgres users 1024 Jul 25 09:00 ./\ndrwxr-xr-x 18 postgres users 1024 Jul 25 09:01 ../\n-rw-r--r-- 1 postgres users 606 Jul 25 09:00 Makefile\n\nEdmund\n-- \nEdmund Mergl mailto:[email protected]\nIm Haldenhau 9 http://www.bawue.de/~mergl\n70565 Stuttgart fon: +49 711 747503\nGermany\n",
"msg_date": "Sat, 25 Jul 1998 22:55:28 +0200",
"msg_from": "Edmund Mergl <[email protected]>",
"msg_from_op": true,
"msg_subject": "current snapshot"
},
{
"msg_contents": "At 10:55 PM 98.7.25 +0200, Edmund Mergl wrote:\n>Hi,\n>\n>make[2]: Entering directory `/usr/local/postgresql-6.4beta/src/bin/initdb'\n>make[2]: *** No rule to make target `initdb.sh', needed by `initdb'. Stop.\n>\n>indeed, initdb.sh is missing:\n>\n>sls:postgres> l /usr/local/pgsql/src/bin/initdb\n>total 3\n>drwxr-xr-x 2 postgres users 1024 Jul 25 09:00 ./\n>drwxr-xr-x 18 postgres users 1024 Jul 25 09:01 ../\n>-rw-r--r-- 1 postgres users 606 Jul 25 09:00 Makefile\n\ninitdb.sh is new and is included in my patches. Marc, can you\nadd it to the source tree?\n--\nTatsuo Ishii\[email protected]\n\n",
"msg_date": "Sun, 26 Jul 1998 11:45:01 +0900",
"msg_from": "[email protected] (Tatsuo Ishii)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] current snapshot"
},
{
"msg_contents": "On Sun, 26 Jul 1998, Tatsuo Ishii wrote:\n\n> At 10:55 PM 98.7.25 +0200, Edmund Mergl wrote:\n> >Hi,\n> >\n> >make[2]: Entering directory `/usr/local/postgresql-6.4beta/src/bin/initdb'\n> >make[2]: *** No rule to make target `initdb.sh', needed by `initdb'. Stop.\n> >\n> >indeed, initdb.sh is missing:\n> >\n> >sls:postgres> l /usr/local/pgsql/src/bin/initdb\n> >total 3\n> >drwxr-xr-x 2 postgres users 1024 Jul 25 09:00 ./\n> >drwxr-xr-x 18 postgres users 1024 Jul 25 09:01 ../\n> >-rw-r--r-- 1 postgres users 606 Jul 25 09:00 Makefile\n> \n> initdb.sh is new and is included in my patches. Marc, can you\n> add it to the source tree?\n\n\tFixed...anything else I missed? :(\n\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sun, 26 Jul 1998 01:23:03 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] current snapshot"
},
{
"msg_contents": "> > initdb.sh is new and is included in my patches. Marc, can you\n> > add it to the source tree?\n> \n> \tFixed...anything else I missed? :(\n\nNow it compiles without any error messages!\n\n-Egon\n\n",
"msg_date": "Sun, 26 Jul 1998 07:00:27 +0200 (MET DST)",
"msg_from": "Egon Schmid <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] current snapshot"
},
{
"msg_contents": ">> initdb.sh is new and is included in my patches. Marc, can you\n>> add it to the source tree?\n>\n>\tFixed...anything else I missed? :(\n\nPlease add following files in include/catalog.\n\npg_attribute_mb.h\npg_class_mb.h\npg_database_mb.h\n\nAlso, if you have a chance, can you run autoconf and check in the new\nconfigure please?\n--\nTatsuo Ishii\[email protected]\n",
"msg_date": "Mon, 27 Jul 1998 10:35:09 +0900",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] current snapshot "
},
{
"msg_contents": "On Mon, 27 Jul 1998 [email protected] wrote:\n\n> >> initdb.sh is new and is included in my patches. Marc, can you\n> >> add it to the source tree?\n> >\n> >\tFixed...anything else I missed? :(\n> \n> Please add following files in include/catalog.\n> \n> pg_attribute_mb.h\n> pg_class_mb.h\n> pg_database_mb.h\n> \n> Also, if you have a chance, can you run autoconf and check in the new\n> configure please?\n\n\tFixed, and committing as I type...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 27 Jul 1998 00:21:31 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] current snapshot "
},
{
"msg_contents": ">> Also, if you have a chance, can you run autoconf and check in the new\n>> configure please?\n>\n>\tFixed, and committing as I type...\n\nThanks. But I have to request one more thing. bin/initdb/initdb.sh now \nseems \"doubled\" and I think half of it should be removed.\n\n[srapc451.sra.co.jp]t-ishii{141} diff -c initdb.sh~ initdb.sh\n*** initdb.sh~\tSun Jul 26 13:31:16 1998\n--- initdb.sh\tMon Jul 27 15:18:46 1998\n***************\n*** 435,875 ****\n \tpostgres $PGSQL_OPT template1 > /dev/null\n echo \"vacuum analyze\" | \\\n \tpostgres $PGSQL_OPT template1 > /dev/null\n- \n- #!/bin/sh\n- #-------------------------------------------------------------------------\n- #\n- # initdb.sh--\n- # Create (initialize) a Postgres database system. \n- # \n- # A database system is a collection of Postgres databases all managed\n- # by the same postmaster. \n- #\n- # To create the database system, we create the directory that contains\n- # all its data, create the files that hold the global classes, create\n- # a few other control files for it, and create one database: the\n- # template database.\n- #\n- # The template database is an ordinary Postgres database. Its data\n- # never changes, though. 
It exists to make it easy for Postgres to \n- # create other databases -- it just copies.\n- #\n- # Optionally, we can skip creating the database system and just create\n- # (or replace) the template database.\n- #\n- # To create all those classes, we run the postgres (backend) program and\n- # feed it data from bki files that are in the Postgres library directory.\n- #\n- # Copyright (c) 1994, Regents of the University of California\n- #\n- #\n- # IDENTIFICATION\n- # $Header: /usr/local/cvsroot/pgsql/src/bin/initdb/initdb.sh,v 1.44 1998/07/26 04:31:16 scrappy Exp $\n- #\n- #-------------------------------------------------------------------------\n- \n- # ----------------\n- # The _fUnKy_..._sTuFf_ gets set when the script is built (with make)\n- # from parameters set in the make file.\n- #\n- # ----------------\n- \n- CMDNAME=`basename $0`\n- \n- MB=__MB__\n- if [ -n \"$MB\" ];then\n- \tMBID=`pg_encoding $MB`\n- fi\n- \n- # Find the default PGLIB directory (the directory that contains miscellaneous \n- # files that are part of Postgres). The user-written program postconfig\n- # outputs variable settings like \"PGLIB=/usr/lib/whatever\". If it doesn't\n- # output a PGLIB value, then there is no default and the user must\n- # specify the pglib option. Postconfig may validly not exist, in which case\n- # our invocation of it silently fails.\n- \n- # The 2>/dev/null is to swallow the \"postconfig: not found\" message if there\n- # is no postconfig.\n- \n- postconfig_result=\"`sh -c postconfig 2>/dev/null`\"\n- if [ ! 
-z \"$postconfig_result\" ]; then\n- set -a # Make the following variable assignment exported to environment\n- eval \"$postconfig_result\"\n- set +a # back to normal\n- fi\n- \n- # Set defaults:\n- debug=0\n- noclean=0\n- template_only=0\n- POSTGRES_SUPERUSERNAME=$USER\n- \n- while [ \"$#\" -gt 0 ]\n- do\n- # ${ARG#--username=} is not reliable or available on all platforms\n- \n- case \"$1\" in\n- --debug|-d)\n- debug=1\n- echo \"Running with debug mode on.\"\n- ;;\n- --noclean|-n)\n- noclean=1\n- echo \"Running with noclean mode on. \"\n- \"Mistakes will not be cleaned up.\"\n- ;;\n- --template|-t)\n- template_only=1\n- echo \"updating template1 database only.\"\n- ;;\n- --username=*)\n- POSTGRES_SUPERUSERNAME=\"`echo $1 | sed 's/^--username=//'`\"\n- ;;\n- -u)\n- shift\n- POSTGRES_SUPERUSERNAME=\"$1\"\n- ;;\n- -u*)\n- POSTGRES_SUPERUSERNAME=\"`echo $1 | sed 's/^-u//'`\"\n- ;;\n- --pgdata=*)\n- PGDATA=\"`echo $1 | sed 's/^--pgdata=//'`\"\n- ;;\n- -r)\n- shift\n- PGDATA=\"$1\"\n- ;;\n- -r*)\n- PGDATA=\"`echo $1 | sed 's/^-r//'`\"\n- ;;\n- --pglib=*)\n- PGLIB=\"`echo $1 | sed 's/^--pglib=//'`\"\n- ;;\n- -l)\n- shift\n- PGLIB=\"$1\"\n- ;;\n- -l*)\n- PGLIB=\"`echo $1 | sed 's/^-l//'`\"\n- ;;\n- \n- --pgencoding=*)\n- \t\tif [ -z \"$MB\" ];then\n- \t\t\techo \"MB support seems to be disabled\"\n- \t\t\texit 100\n- \t\tfi\n- mb=\"`echo $1 | sed 's/^--pgencoding=//'`\"\n- \t\tMBID=`pg_encoding $mb`\n- \t\tif [ -z \"$MBID\" ];then\n- \t\t\techo \"$mb is not a valid encoding name\"\n- \t\t\texit 100\n- \t\tfi\n- ;;\n- -e)\n- \t\tif [ -z \"$MB\" ];then\n- \t\t\techo \"MB support seems to be disabled\"\n- \t\t\texit 100\n- \t\tfi\n- shift\n- \t\tMBID=`pg_encoding $1`\n- \t\tif [ -z \"$MBID\" ];then\n- \t\t\techo \"$1 is not a valid encoding name\"\n- \t\t\texit 100\n- \t\tfi\n- ;;\n- -e*)\n- \t\tif [ -z \"$MB\" ];then\n- \t\t\techo \"MB support seems to be disabled\"\n- \t\t\texit 100\n- \t\tfi\n- mb=\"`echo $1 | sed 's/^-e//'`\"\n- \t\tMBID=`pg_encoding $mb`\n- \t\tif [ -z 
\"$MBID\" ];then\n- \t\t\techo \"$mb is not a valid encoding name\"\n- \t\t\texit 100\n- \t\tfi\n- ;;\n- *)\n- echo \"Unrecognized option '$1'. Syntax is:\"\n- \t\tif [ -z \"$MB\" ];then\n- echo \"initdb [-t | --template] [-d | --debug]\" \\\n- \"[-n | --noclean]\" \\\n- \"[-u SUPERUSER | --username=SUPERUSER]\" \\\n- \"[-r DATADIR | --pgdata=DATADIR]\" \\\n- \"[-l LIBDIR | --pglib=LIBDIR]\"\n- \t\telse\n- echo \"initdb [-t | --template] [-d | --debug]\" \\\n- \"[-n | --noclean]\" \\\n- \"[-u SUPERUSER | --username=SUPERUSER]\" \\\n- \"[-r DATADIR | --pgdata=DATADIR]\" \\\n- \"[-l LIBDIR | --pglib=LIBDIR]\" \\\n- \"[-e ENCODING | --pgencoding=ENCODING]\"\n- \t\tfi\n- exit 100\n- esac\n- shift\n- done\n- \n- #-------------------------------------------------------------------------\n- # Make sure he told us where to find the Postgres files.\n- #-------------------------------------------------------------------------\n- if [ -z \"$PGLIB\" ]; then\n- echo \"$CMDNAME does not know where to find the files that make up \"\n- echo \"Postgres (the PGLIB directory). You must identify the PGLIB \"\n- echo \"directory either with a --pglib invocation option, or by \"\n- echo \"setting the PGLIB environment variable, or by having a program \"\n- echo \"called 'postconfig' in your search path that outputs an asignment \"\n- echo \"for PGLIB.\"\n- exit 20\n- fi\n- \n- #-------------------------------------------------------------------------\n- # Make sure he told us where to build the database system\n- #-------------------------------------------------------------------------\n- \n- if [ -z \"$PGDATA\" ]; then\n- echo \"$CMDNAME: You must identify the PGDATA directory, where the data\"\n- echo \"for this database system will reside. 
Do this with either a\"\n- echo \"--pgdata invocation option or a PGDATA environment variable.\"\n- echo\n- exit 20\n- fi\n- \n- TEMPLATE=$PGLIB/local1_template1.bki.source\n- GLOBAL=$PGLIB/global1.bki.source\n- TEMPLATE_DESCR=$PGLIB/local1_template1.description\n- GLOBAL_DESCR=$PGLIB/global1.description\n- PG_HBA_SAMPLE=$PGLIB/pg_hba.conf.sample\n- PG_GEQO_SAMPLE=$PGLIB/pg_geqo.sample\n- \n- \n- #-------------------------------------------------------------------------\n- # Find the input files\n- #-------------------------------------------------------------------------\n- \n- for PREREQ_FILE in $TEMPLATE $GLOBAL $PG_HBA_SAMPLE; do\n- if [ ! -f $PREREQ_FILE ]; then \n- echo \"$CMDNAME does not find the file '$PREREQ_FILE'.\"\n- echo \"This means you have identified an invalid PGLIB directory.\"\n- echo \"You specify a PGLIB directory with a --pglib invocation \"\n- echo \"option, a PGLIB environment variable, or a postconfig program.\"\n- exit 1\n- fi\n- done\n- \n- echo \"$CMDNAME: using $TEMPLATE as input to create the template database.\"\n- if [ $template_only -eq 0 ]; then\n- echo \"$CMDNAME: using $GLOBAL as input to create the global classes.\"\n- echo \"$CMDNAME: using $PG_HBA_SAMPLE as the host-based authentication\" \\\n- \"control file.\"\n- echo\n- fi \n- \n- #---------------------------------------------------------------------------\n- # Figure out who the Postgres superuser for the new database system will be.\n- #---------------------------------------------------------------------------\n- \n- if [ -z \"$POSTGRES_SUPERUSERNAME\" ]; then \n- echo \"Can't tell what username to use. You don't have the USER\"\n- echo \"environment variable set to your username and didn't specify the \"\n- echo \"--username option\"\n- exit 1\n- fi\n- \n- POSTGRES_SUPERUID=`pg_id $POSTGRES_SUPERUSERNAME`\n- \n- if [ $POSTGRES_SUPERUID = NOUSER ]; then\n- echo \"Valid username not given. 
You must specify the username for \"\n- echo \"the Postgres superuser for the database system you are \"\n- echo \"initializing, either with the --username option or by default \"\n- echo \"to the USER environment variable.\"\n- exit 10\n- fi\n- \n- if [ $POSTGRES_SUPERUID -ne `pg_id` -a `pg_id` -ne 0 ]; then \n- echo \"Only the unix superuser may initialize a database with a different\"\n- echo \"Postgres superuser. (You must be able to create files that belong\"\n- echo \"to the specified unix user).\"\n- exit 2\n- fi\n- \n- echo \"We are initializing the database system with username\" \\\n- \"$POSTGRES_SUPERUSERNAME (uid=$POSTGRES_SUPERUID).\" \n- echo \"This user will own all the files and must also own the server process.\"\n- echo\n- \n- # -----------------------------------------------------------------------\n- # Create the data directory if necessary\n- # -----------------------------------------------------------------------\n- \n- # umask must disallow access to group, other for files and dirs\n- umask 077\n- \n- if [ -f \"$PGDATA/PG_VERSION\" ]; then\n- if [ $template_only -eq 0 ]; then\n- echo \"$CMDNAME: error: File $PGDATA/PG_VERSION already exists.\"\n- echo \"This probably means initdb has already been run and the \"\n- echo \"database system already exists.\"\n- echo \n- echo \"If you want to create a new database system, either remove \"\n- echo \"the $PGDATA directory or run initdb with a --pgdata option \"\n- echo \"other than $PGDATA.\"\n- exit 1\n- fi\n- else\n- if [ ! -d $PGDATA ]; then\n- echo \"Creating Postgres database system directory $PGDATA\"\n- echo\n- mkdir $PGDATA\n- if [ $? -ne 0 ]; then exit 5; fi\n- fi\n- if [ ! -d $PGDATA/base ]; then\n- echo \"Creating Postgres database system directory $PGDATA/base\"\n- echo\n- mkdir $PGDATA/base\n- if [ $? 
-ne 0 ]; then exit 5; fi\n- fi\n- fi\n- \n- #----------------------------------------------------------------------------\n- # Create the template1 database\n- #----------------------------------------------------------------------------\n- \n- rm -rf $PGDATA/base/template1\n- mkdir $PGDATA/base/template1\n- \n- if [ \"$debug\" -eq 1 ]; then\n- BACKEND_TALK_ARG=\"-d\"\n- else\n- BACKEND_TALK_ARG=\"-Q\"\n- fi\n- \n- BACKENDARGS=\"-boot -C -F -D$PGDATA $BACKEND_TALK_ARG\"\n- \n- echo \"$CMDNAME: creating template database in $PGDATA/base/template1\"\n- echo \"Running: postgres $BACKENDARGS template1\"\n- \n- cat $TEMPLATE \\\n- | sed -e \"s/postgres PGUID/$POSTGRES_SUPERUSERNAME $POSTGRES_SUPERUID/\" \\\n- -e \"s/PGUID/$POSTGRES_SUPERUID/\" \\\n- | postgres $BACKENDARGS template1\n- \n- if [ $? -ne 0 ]; then\n- echo \"$CMDNAME: could not create template database\"\n- if [ $noclean -eq 0 ]; then\n- echo \"$CMDNAME: cleaning up by wiping out $PGDATA/base/template1\"\n- rm -rf $PGDATA/base/template1\n- else\n- echo \"$CMDNAME: cleanup not done because noclean options was used.\"\n- fi\n- exit 1;\n- fi\n- \n- echo\n- \n- pg_version $PGDATA/base/template1\n- \n- #----------------------------------------------------------------------------\n- # Create the global classes, if requested.\n- #----------------------------------------------------------------------------\n- \n- if [ $template_only -eq 0 ]; then\n- echo \"Creating global classes in $PG_DATA/base\"\n- echo \"Running: postgres $BACKENDARGS template1\"\n- \n- cat $GLOBAL \\\n- | sed -e \"s/postgres PGUID/$POSTGRES_SUPERUSERNAME $POSTGRES_SUPERUID/\" \\\n- -e \"s/PGUID/$POSTGRES_SUPERUID/\" \\\n- | postgres $BACKENDARGS template1\n- \n- if (test $? 
-ne 0)\n- then\n- echo \"$CMDNAME: could not create global classes.\"\n- if (test $noclean -eq 0); then\n- echo \"$CMDNAME: cleaning up.\"\n- rm -rf $PGDATA\n- else\n- echo \"$CMDNAME: cleanup not done (noclean mode set).\"\n- fi\n- exit 1;\n- fi\n- \n- echo\n- \n- pg_version $PGDATA\n- \n- cp $PG_HBA_SAMPLE $PGDATA/pg_hba.conf\n- cp $PG_GEQO_SAMPLE $PGDATA/pg_geqo.sample\n- \n- echo \"Adding template1 database to pg_database...\"\n- \n- echo \"open pg_database\" > /tmp/create.$$\n- if [ -z \"$MB\" ];then\n- \techo \"insert (template1 $POSTGRES_SUPERUID template1)\" >> /tmp/create.$$\n- else\n- \techo \"insert (template1 $POSTGRES_SUPERUID $MBID template1)\" >> /tmp/create.$$\n- fi\n- #echo \"show\" >> /tmp/create.$$\n- echo \"close pg_database\" >> /tmp/create.$$\n- \n- echo \"Running: postgres $BACKENDARGS template1 < /tmp/create.$$\"\n- \n- postgres $BACKENDARGS template1 < /tmp/create.$$ \n- \n- if [ $? -ne 0 ]; then\n- echo \"$CMDNAME: could not log template database\"\n- if [ $noclean -eq 0 ]; then\n- echo \"$CMDNAME: cleaning up.\"\n- rm -rf $PGDATA\n- else\n- echo \"$CMDNAME: cleanup not done (noclean mode set).\"\n- fi\n- exit 1;\n- fi\n- rm -f /tmp/create.$$\n- fi\n- \n- echo\n- \n- PGSQL_OPT=\"-o /dev/null -F -Q -D$PGDATA\"\n- \n- # If the COPY is first, the VACUUM generates an error, so we vacuum first\n- echo \"vacuuming template1\"\n- echo \"vacuum\" | postgres $PGSQL_OPT template1 > /dev/null\n- \n- echo \"COPY pg_shadow TO '$PGDATA/pg_pwd' USING DELIMITERS '\\\\t'\" | \\\n- \tpostgres $PGSQL_OPT template1 > /dev/null\n- \n- echo \"creating public pg_user view\"\n- echo \"CREATE TABLE xpg_user (\t\t\\\n- \t usename\tname,\t\t\\\n- \t usesysid\tint4,\t\t\\\n- \t usecreatedb\tbool,\t\t\\\n- \t usetrace\tbool,\t\t\\\n- \t usesuper\tbool,\t\t\\\n- \t usecatupd\tbool,\t\t\\\n- \t passwd\t\ttext,\t\t\\\n- \t valuntil\tabstime);\" | postgres $PGSQL_OPT template1 > /dev/null\n- \n- #move it into pg_user\n- echo \"UPDATE pg_class SET relname = 'pg_user' WHERE 
relname = 'xpg_user';\" |\\\n- \tpostgres $PGSQL_OPT template1 > /dev/null\n- echo \"UPDATE pg_type SET typname = 'pg_user' WHERE typname = 'xpg_user';\" |\\\n- \tpostgres $PGSQL_OPT template1 > /dev/null\n- mv $PGDATA/base/template1/xpg_user $PGDATA/base/template1/pg_user\n- \n- echo \"CREATE RULE _RETpg_user AS ON SELECT TO pg_user DO INSTEAD\t\\\n- \t SELECT usename, usesysid, usecreatedb, usetrace,\t\t\\\n- \t usesuper, usecatupd, '********'::text as passwd,\t\\\n- \t\t valuntil FROM pg_shadow;\" | \\\n- \tpostgres $PGSQL_OPT template1 > /dev/null\n- echo \"REVOKE ALL on pg_shadow FROM public\" | \\\n- \tpostgres $PGSQL_OPT template1 > /dev/null\n- \n- echo \"loading pg_description\"\n- echo \"copy pg_description from '$TEMPLATE_DESCR'\" | \\\n- \tpostgres $PGSQL_OPT template1 > /dev/null\n- echo \"copy pg_description from '$GLOBAL_DESCR'\" | \\\n- \tpostgres $PGSQL_OPT template1 > /dev/null\n- echo \"vacuum analyze\" | \\\n- \tpostgres $PGSQL_OPT template1 > /dev/null\n--- 435,437 ----\n",
"msg_date": "Mon, 27 Jul 1998 15:50:49 +0900",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] current snapshot "
},
{
"msg_contents": "\nI've just resync'd my copy of the source. It compiles ok, but initdb is\nnow failing with:\n\nERROR: pg_atoi: error in \"f\": can't parse \"f\"\n\n\nNow, running initdb with --debug, I get:\n\ninitdb: creating template database in /usr/local/dbase/data/base/template1\nRunning: postgres -boot -C -F -D/usr/local/dbase/data -d template1\n<proname name> \n<proowner oid> \n<prolang oid> \n<proisinh bool> \n<proistrusted bool> \n<proiscachable bool> \n<pronargs int2> \n<proretset bool> \n<prorettype oid> \n<proargtypes oid8> \n<probyte_pct int4> \n<properbyte_cpu int4> \n<propercall_cpu int4> \n<prooutin_ratio int4> \n<prosrc text> \n<probin bytea> \n\n> creating bootstrap relation\nbootstrap relation created ok\n> Commit End\ntuple 1242<Inserting value: 'boolin'\nTyp == NULL, typeindex = 3 idx = 0\nboolin End InsertValue\nInserting value: '11'\nTyp == NULL, typeindex = 10 idx = 1\n11 End InsertValue\nInserting value: 'f'\nTyp == NULL, typeindex = 10 idx = 2\nERROR: pg_atoi: error in \"f\": can't parse \"f\"\nERROR: pg_atoi: error in \"f\": can't parse \"f\"\ninitdb: could not create template database\ninitdb: cleaning up by wiping out /usr/local/dbase/data/base/template1\n\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.retep.org.uk\n************ Someday I may rebuild this signature completely ;-) ************\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n",
"msg_date": "Mon, 27 Jul 1998 11:56:16 +0100 (GMT)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] current snapshot "
},
{
"msg_contents": "On Mon, 27 Jul 1998, Peter T Mount wrote:\n\n> \n> I've just resync'd my copy of the source. It compiles ok, but initdb is\n> now failing with:\n\nIs this the initdb that Tatsuo is reported as being \"doubled\"?\n\n\n",
"msg_date": "Mon, 27 Jul 1998 07:45:06 -0400 (EDT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] current snapshot "
},
{
"msg_contents": "On Mon, 27 Jul 1998, The Hermit Hacker wrote:\n\n> On Mon, 27 Jul 1998, Peter T Mount wrote:\n> \n> > \n> > I've just resync'd my copy of the source. It compiles ok, but initdb is\n> > now failing with:\n> \n> Is this the initdb that Tatsuo is reported as being \"doubled\"?\n\nI'm not sure. It's possible.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.retep.org.uk\n************ Someday I may rebuild this signature completely ;-) ************\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n",
"msg_date": "Mon, 27 Jul 1998 14:20:39 +0100 (GMT)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] current snapshot "
},
{
"msg_contents": ">> > I've just resync'd my copy of the source. It compiles ok, but initdb is\n>> > now failing with:\n>> \n>> Is this the initdb that Tatsuo is reported as being \"doubled\"?\n>\n>I'm not sure. It's possible.\n\nI've run anon CVS to confirm the problem but got:\n\n[srapc459.sra.co.jp]t-ishii{47} runsocks cvs -z3 update -d -P\nFatal error, aborting.\n: no such user\n\nI have successfully updated my copy yesterday.\n--\nTatsuo Ishii\[email protected]\n",
"msg_date": "Tue, 28 Jul 1998 10:32:41 +0900",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] current snapshot "
},
{
"msg_contents": "On Tue, 28 Jul 1998 [email protected] wrote:\n\n> >> > I've just resync'd my copy of the source. It compiles ok, but initdb is\n> >> > now failing with:\n> >> \n> >> Is this the initdb that Tatsuo is reported as being \"doubled\"?\n> >\n> >I'm not sure. It's possible.\n> \n> I've run anon CVS to confirm the problem but got:\n> \n> [srapc459.sra.co.jp]t-ishii{47} runsocks cvs -z3 update -d -P\n> Fatal error, aborting.\n> : no such user\n\nI'm getting the same thing here :-(\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.retep.org.uk\n************ Someday I may rebuild this signature completely ;-) ************\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n",
"msg_date": "Tue, 28 Jul 1998 11:06:01 +0100 (GMT)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] current snapshot "
}
] |
[
{
"msg_contents": "With fairly current sources (last cvs update on 7/20), I am seeing\noccasional occurrences of \n\tNOTICE: Non-functional update, only first update is performed\nI think this is a bug. The update commands that are triggering this\nmessage *are* getting executed. I looked at the sources and couldn't\neven understand what condition was being tested to generate the message.\nThe source code looks like it's trying to disallow more than one update\nto the same tuple within a transaction, which is so silly that I have to\nbe misreading it...\n\nHere is an example trace of my application's interaction with the\nserver:\n\n// Tuple 134537 is created here:\n\nQUERY: BEGIN TRANSACTION; LOCK marketorderhistory\nRESULT: DELETE 0\n// several other tuples inserted or updated in this transaction\nQUERY: INSERT INTO marketorderhistory (accountID, instrumentID, orderType, numContracts, orderTime, simStatus, realStatus, sequenceNo, orderPrice, orderDivisor, ifDonePrice) VALUES(5, 62, 'S', 5, '1998-05-20 15:20:00 GMT', 'P', '-', nextval('marketorderhistory_Seq'), 11969, 100, 11849)\nRESULT: INSERT 134537 1\nQUERY: END TRANSACTION; NOTIFY marketorderhistory\nRESULT: NOTIFY\n\n// many transactions later, the app wants to update this tuple:\n\nQUERY: BEGIN TRANSACTION; LOCK marketorderhistory\nRESULT: DELETE 0\nQUERY: UPDATE marketorderhistory SET completionTime = '1998-05-21 15:20:00 GMT' WHERE oid = 134537::oid AND completionTime IS NULL; UPDATE marketorderhistory SET simStatus = 'X', sequenceNo = nextval('marketorderhistory_Seq') WHERE oid = 134537::oid\nNOTICE: Non-functional update, only first update is performed\nRESULT: UPDATE 1\n// a couple other tuples inserted or updated\nQUERY: END TRANSACTION; NOTIFY marketorderhistory\nRESULT: NOTIFY\n\nExternal inspection verifies that both updates did take effect.\n\nThe thing that's weird is that this only happens occasionally, say about\ntwice out of every thousand essentially identical updates. 
I don't\nknow enough about the backend innards to have much chance of figuring\nout what's going on. Any ideas? Is anyone else even seeing this?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 25 Jul 1998 19:38:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bogus \"Non-functional update\" notices"
},
{
"msg_contents": "> With fairly current sources (last cvs update on 7/20), I am seeing\n> occasional occurrences of\n> NOTICE: Non-functional update, only first update is performed\n> I think this is a bug. The update commands that are triggering this\n> message *are* getting executed. I looked at the sources and couldn't\n> even understand what condition was being tested to generate the \n> message.\n> The source code looks like it's trying to disallow more than one \n> update to the same tuple within a transaction, which is so silly that \n> I have to be misreading it...\n\nI recall seeing this in the past when two conditions in an update are\nsuch that the second condition will never be significant. Can't remember\nan example, but the regression test has at least one case which provokes\nthis.\n\n - Tom\n",
"msg_date": "Mon, 27 Jul 1998 06:51:37 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bogus \"Non-functional update\" notices"
},
{
"msg_contents": "I wrote:\n>> With fairly current sources (last cvs update on 7/20), I am seeing\n>> occasional occurrences of\n>>\tNOTICE: Non-functional update, only first update is performed\n\nI have been digging into this some more, and I am getting more and more\nconvinced that there is a significant underlying bug.\n\nWhat I've discovered is that in the cases where this message appears\n(which, again, is only once every few hundred tries) the update scan\nis *finding the same tuple twice*. The second time through, the tuple\nhas already been marked as deleted by the current command, and it is\nthis marking that causes heap_replace to emit the \"Non-functional\nupdate\" warning and return without processing the tuple.\n\nAn example trace is\n\nQUERY: BEGIN TRANSACTION; LOCK marketorderhistory\nRESULT: DELETE 0\nQUERY: UPDATE marketorderhistory SET completionTime = '1998-05-11 20:00:00 GMT' WHERE oid = 34900::oid AND completionTime IS NULL\nNOTICE: heap_replace OID 34900 t_xmin 20270 t_xmax 0 t_cmin 6 t_cmax 0\nNOTICE: heap_replace OID 34900 t_xmin 20270 t_xmax 20496 t_cmin 6 t_cmax 3\nNOTICE: Non-functional update, only first update is performed\nNOTICE: current trans ID 20496 cmd id 3 scan id 3\nRESULT: UPDATE 1\n\n(The \"NOTICE: heap_replace\" lines are from debug code I added to print\nID info about the tuple found by heap_replace. This is printed every\ntime through the routine, just before the non-functional-update test.\nThe \"NOTICE: current trans\" line is printed only if the test triggers.)\n\nIn this particular situation, the only bad consequence is the display\nof a bogus notice message, but it seems to me that having a scan find\nthe same tuple multiple times is a Very Bad Thing. 
(If the test in\nheap_replace really is intended to clean up after this condition,\nthen it ought not be emitting a message.)\n\nI have only seen this happen when the UPDATE was using an index scan to\nfind the tuples to update (the table in this example has a btree index\non oid). So, somehow the index is returning the same tuple more than\nonce.\n\nI have managed to construct a simple, if not quick, test case that\nrepeatably causes an instance of the bogus message --- it's attached in\nthe form of a pgTcl script. The trace (from my backend with extra\nprintout) looks like\n\n...\nNOTICE: heap_replace OID 87736 t_xmin 113200 t_xmax 0 t_cmin 0 t_cmax 0\nNOTICE: heap_replace OID 87735 t_xmin 113199 t_xmax 0 t_cmin 0 t_cmax 0\nNOTICE: heap_replace OID 87734 t_xmin 113198 t_xmax 0 t_cmin 0 t_cmax 0\nNOTICE: heap_replace OID 87734 t_xmin 113198 t_xmax 113601 t_cmin 0 t_cmax 0\nNOTICE: Non-functional update, only first update is performed\nNOTICE: current trans ID 113601 cmd id 0 scan id 0\nNOTICE: heap_replace OID 87733 t_xmin 113197 t_xmax 0 t_cmin 0 t_cmax 0\nNOTICE: heap_replace OID 87732 t_xmin 113196 t_xmax 0 t_cmin 0 t_cmax 0\n...\nwhere the failure occurs at the 200th UPDATE command.\n\n\t\t\tregards, tom lane\n\n#!/usr/local/pgsql/bin/pgtclsh\n\nset pgconn [pg_connect play]\n\nset res [pg_exec $pgconn \\\n\t\"DROP TABLE updatebug\"]\npg_result $res -clear\n\nset res [pg_exec $pgconn \\\n\t\"CREATE TABLE updatebug (key int4 not null, val int4)\"]\npg_result $res -clear\n\nset res [pg_exec $pgconn \\\n\t\"CREATE UNIQUE INDEX updatebug_i ON updatebug USING btree(key)\"]\npg_result $res -clear\n\nfor {set i 0} {$i <= 10000} {incr i} {\n set res [pg_exec $pgconn \"INSERT INTO updatebug VALUES($i, NULL)\"]\n pg_result $res -clear\n}\n\n# Vacuum to ensure that optimizer will decide to use index for updates...\nset res [pg_exec $pgconn \\\n\t\"VACUUM VERBOSE ANALYZE updatebug\"]\npg_result $res -clear\n\nputs \"table built...\"\n\nfor {set i 10000} {$i >= 0} {incr i 
-1} {\n set res [pg_exec $pgconn \\\n\t \"UPDATE updatebug SET val = 1 WHERE key = $i\"]\n pg_result $res -clear\n}\n",
"msg_date": "Mon, 27 Jul 1998 20:10:13 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Bogus \"Non-functional update\" notices "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> I have only seen this happen when the UPDATE was using an index scan to\n> find the tuples to update (the table in this example has a btree index\n> on oid). So, somehow the index is returning the same tuple more than\n> once.\n\nIn UPDATE backend inserts index tuple for new version of heap tuple \nand adjusts all index scans affected by this insertion.\nSomething is wrong in nbtscan.c:_bt_adjscans()...\n\nVadim\n",
"msg_date": "Tue, 28 Jul 1998 10:57:35 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bogus \"Non-functional update\" notices"
},
{
"msg_contents": "Vadim Mikheev <[email protected]> writes:\n> In UPDATE backend inserts index tuple for new version of heap tuple \n> and adjusts all index scans affected by this insertion.\n> Something is wrong in nbtscan.c:_bt_adjscans()...\n\nCould be; maybe there's one boundary case that fails to advance the\nindex scan? I hope there's someone who's looked at nbtree recently\nwho can take the time to debug this.\n\nAnother thing that struck me while looking at the update code is that\nan update deletes the old tuple value, then inserts the new value,\nbut it doesn't bother to delete any old index entries pointing at the\nold tuple. ISTM that after a while, there are going to be a lot of old\nindex entries pointing at dead tuples ... or, perhaps, at *some other*\nlive tuple, if the space the dead tuple occupied has been reused for\nsomething else. This certainly seems to present a risk of returning\nthe wrong tuple. I looked through the code to find out how such an\nerror is prevented, and didn't find anything. But maybe I just don't\nknow where to look.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 28 Jul 1998 10:14:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Bogus \"Non-functional update\" notices "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Vadim Mikheev <[email protected]> writes:\n> > In UPDATE backend inserts index tuple for new version of heap tuple\n> > and adjusts all index scans affected by this insertion.\n> > Something is wrong in nbtscan.c:_bt_adjscans()...\n> \n> Could be; maybe there's one boundary case that fails to advance the\n> index scan? I hope there's someone who's looked at nbtree recently\n> who can take the time to debug this.\n\nI'll try to look there...\n\n> Another thing that struck me while looking at the update code is that\n> an update deletes the old tuple value, then inserts the new value,\n> but it doesn't bother to delete any old index entries pointing at the\n> old tuple. ISTM that after a while, there are going to be a lot of old\n> index entries pointing at dead tuples ... or, perhaps, at *some other*\n> live tuple, if the space the dead tuple occupied has been reused for\n> something else. This certainly seems to present a risk of returning\n> the wrong tuple. I looked through the code to find out how such an\n> error is prevented, and didn't find anything. But maybe I just don't\n> know where to look.\n\nVacuum deletes index tuples before deleting heap ones...\n\nVadim\n",
"msg_date": "Wed, 29 Jul 1998 09:16:05 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bogus \"Non-functional update\" notices"
},
{
"msg_contents": "Vadim Mikheev <[email protected]> writes:\n> Tom Lane wrote:\n>> Another thing that struck me while looking at the update code is that\n>> an update deletes the old tuple value, then inserts the new value,\n>> but it doesn't bother to delete any old index entries pointing at the\n>> old tuple. ISTM that after a while, there are going to be a lot of old\n>> index entries pointing at dead tuples ... or, perhaps, at *some other*\n>> live tuple, if the space the dead tuple occupied has been reused for\n>> something else.\n\n> Vacuum deletes index tuples before deleting heap ones...\n\nRight, but until you've done a vacuum, what's stopping the code from\nreturning wrong tuples? I assume this stuff actually works, I just\ncouldn't see where the dead index entries get rejected.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 29 Jul 1998 10:40:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Bogus \"Non-functional update\" notices "
},
{
"msg_contents": "> Vadim Mikheev <[email protected]> writes:\n> > Tom Lane wrote:\n> >> Another thing that struck me while looking at the update code is that\n> >> an update deletes the old tuple value, then inserts the new value,\n> >> but it doesn't bother to delete any old index entries pointing at the\n> >> old tuple. ISTM that after a while, there are going to be a lot of old\n> >> index entries pointing at dead tuples ... or, perhaps, at *some other*\n> >> live tuple, if the space the dead tuple occupied has been reused for\n> >> something else.\n> \n> > Vacuum deletes index tuples before deleting heap ones...\n> \n> Right, but until you've done a vacuum, what's stopping the code from\n> returning wrong tuples? I assume this stuff actually works, I just\n> couldn't see where the dead index entries get rejected.\n> \n> \t\t\tregards, tom lane\n> \n\nWithout checking the code, I suspect that dead rows are visible though the\nindex (they had to be to make time travel work), but do not match the time\nqual so are not \"seen\".\n\n-dg\n\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n - If simplicity worked, the world would be overrun with insects. -\n",
"msg_date": "Wed, 29 Jul 1998 09:05:54 -0700 (PDT)",
"msg_from": "[email protected] (David Gould)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bogus \"Non-functional update\" notices"
},
{
"msg_contents": "David Gould wrote:\n> \n> >\n> > > Vacuum deletes index tuples before deleting heap ones...\n> >\n> > Right, but until you've done a vacuum, what's stopping the code from\n> > returning wrong tuples? I assume this stuff actually works, I just\n> > couldn't see where the dead index entries get rejected.\n> >\n> \n> Without checking the code, I suspect that dead rows are visible though the\n> index (they had to be to make time travel work), but do not match the time\n> qual so are not \"seen\".\n\nYes. Backend sees that xmax of heap tuple is committed and\ndon't return tuple...\n\nBTW, I've fixed SUBJ. Scan adjustment didn't work when\nindex page was splitted. I get rid of ON INSERT adjustment\nat all: now backend uses heap tid of current index tuple to\nrestore current scan position before searching for the\nnext index tuple. (This will also allow us unlock index\npage after we got index tuple and work in heap and so\nindex readers will not block writers ... when LLL\nwill be implemented -:).\n\nThe bug was more serious than \"non-functional update\"\nwhen backend read index tuples twice: in some cases \nscan didn't return good tuples at all!\n\ndrop table bt;\ncreate table bt (x int);\ncopy bt from '/var/home/postgres/My/Btree/ADJ/UNIQ';\n-- ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n-- 1000 records with x in [1,1000]\n--\ncreate index bti on bt (x);\nupdate bt set x = x where x <= 200;\nupdate bt set x = x where x > 200 and x <= 210;\n--\n-- ONLY 4 tuples will be updated by last update!\n--\n\nI'll prepare patch for 6.3...\n\nVadim\n",
"msg_date": "Thu, 30 Jul 1998 09:16:57 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bogus \"Non-functional update\" notices"
}
] |
[
{
"msg_contents": "Here are additional patches for the UnixWare 7 port.\n\nSummary of changes:\n\nIn pqcomm.h, use the SUN_LEN macro if it is defined to calculate the size of \nthe sockaddr_un structure.\n\nIn unixware.h, drop the use of the UNIXWARE macro. Everything can be handled \nwith the USE_UNIVEL_CC and DISABLE_COMPLEX_MACRO macros.\n\nIn s_lock.h, remove the reference to the UNIXWARE macro (see above).\n\nIn the unixware template, add the YFLAGS:-d line.\n\nIn various makefile templates, add (or cleanup) unixware and univel port \nspecific information.\n\n*** src/include/libpq/pqcomm.h.orig\tFri Jul 24 19:08:59 1998\n--- src/include/libpq/pqcomm.h\tFri Jul 24 19:10:07 1998\n***************\n*** 34,42 ****\n--- 34,47 ----\n \n /* Configure the UNIX socket address for the well known port. */\n \n+ #if defined(SUN_LEN)\n #define UNIXSOCK_PATH(sun,port) \\\n+ \t(sprintf((sun).sun_path, \"/tmp/.s.PGSQL.%d\", (port)), SUN_LEN(&(sun)))\n+ #else\n+ #define UNIXSOCK_PATH(sun,port) \\\n \t(sprintf((sun).sun_path, \"/tmp/.s.PGSQL.%d\", (port)), \\\n \t strlen((sun).sun_path)+ offsetof(struct sockaddr_un, sun_path))\n+ #endif\n \n /*\n *\t\tWe do this because sun_len is in BSD's struct, while others don't.\n*** src/include/port/unixware.h.orig\tFri Jul 24 19:08:59 1998\n--- src/include/port/unixware.h\tFri Jul 24 19:10:07 1998\n***************\n*** 5,16 ****\n \n #define HAS_TEST_AND_SET\n #define NEED_I386_TAS_ASM\n /***************************************\n * Define this if you are compiling with\n * the native UNIXWARE C compiler.\n ***************************************/\n! 
#define UNIXWARE\n typedef unsigned char slock_t;\n \n /***************************************************************\n * The following include will get the needed prototype for the \n--- 5,20 ----\n \n #define HAS_TEST_AND_SET\n #define NEED_I386_TAS_ASM\n+ \n /***************************************\n * Define this if you are compiling with\n * the native UNIXWARE C compiler.\n ***************************************/\n! #define USE_UNIVEL_CC\n! \n typedef unsigned char slock_t;\n+ \n+ #define DISABLE_COMPLEX_MACRO\n \n /***************************************************************\n * The following include will get the needed prototype for the \n*** src/include/storage/s_lock.h.orig\tFri Jul 24 19:08:59 1998\n--- src/include/storage/s_lock.h\tFri Jul 24 19:10:07 1998\n***************\n*** 188,194 ****\n #if defined(NEED_I386_TAS_ASM)\n /* non gcc i386 based things */\n \n! #if defined(USE_UNIVEL_CC) || defined(UNIXWARE)\n #define TAS(lock)\ttas(lock)\n \n asm int \n--- 188,194 ----\n #if defined(NEED_I386_TAS_ASM)\n /* non gcc i386 based things */\n \n! #if defined(USE_UNIVEL_CC)\n #define TAS(lock)\ttas(lock)\n \n asm int \n***************\n*** 203,209 ****\n \tpopl\t%ebx\n }\n \n! #endif /* USE_UNIVEL_CC || UNIXWARE */\n \n #endif /* NEED_I386_TAS_ASM */\n \n--- 203,209 ----\n \tpopl\t%ebx\n }\n \n! 
#endif /* USE_UNIVEL_CC */\n \n #endif /* NEED_I386_TAS_ASM */\n \n*** src/interfaces/libpgtcl/Makefile.in.orig\tFri Jul 24 19:09:00 1998\n--- src/interfaces/libpgtcl/Makefile.in\tFri Jul 24 19:10:08 1998\n***************\n*** 66,71 ****\n--- 66,78 ----\n CFLAGS\t\t+= $(CFLAGS_SL)\n endif\n \n+ ifeq ($(PORTNAME), unixware)\n+ install-shlib-dep\t:= install-shlib\n+ shlib\t\t\t:= libpgtcl.so.1\n+ LDFLAGS_SL\t\t= -G -z text\n+ CFLAGS\t\t+= $(CFLAGS_SL)\n+ endif\n+ \n ifeq ($(PORTNAME), univel)\n install-shlib-dep\t:= install-shlib\n shlib\t\t\t:= libpgtcl.so.1\n*** src/interfaces/libpq/c.h.orig\tSat Jul 25 00:18:45 1998\n--- src/interfaces/libpq/c.h\tSat Jul 25 00:19:15 1998\n***************\n*** 63,70 ****\n #define false\t((char) 0)\n #define true\t((char) 1)\n #ifndef __cplusplus\n typedef char bool;\n! \n #endif\t\t\t\t\t\t\t/* not C++ */\n typedef bool *BoolPtr;\n \n--- 63,71 ----\n #define false\t((char) 0)\n #define true\t((char) 1)\n #ifndef __cplusplus\n+ #ifndef bool\n typedef char bool;\n! #endif\n #endif\t\t\t\t\t\t\t/* not C++ */\n typedef bool *BoolPtr;\n \n*** src/interfaces/libpq/Makefile.in.orig\tFri Jul 24 19:09:00 1998\n--- src/interfaces/libpq/Makefile.in\tFri Jul 24 19:10:08 1998\n***************\n*** 73,81 ****\n CFLAGS += $(CFLAGS_SL)\n endif\n \n ifeq ($(PORTNAME), univel)\n install-shlib-dep := install-shlib\n! shlib := libpq.so.1\n LDFLAGS_SL = -G -z text\n CFLAGS += $(CFLAGS_SL)\n endif\n--- 73,88 ----\n CFLAGS += $(CFLAGS_SL)\n endif\n \n+ ifeq ($(PORTNAME), unixware)\n+ install-shlib-dep := install-shlib\n+ shlib := libpq.so.$(SO_MAJOR_VERSION).$(SO_MINOR_VERSION)\n+ LDFLAGS_SL = -G -z text\n+ CFLAGS += $(CFLAGS_SL)\n+ endif\n+ \n ifeq ($(PORTNAME), univel)\n install-shlib-dep := install-shlib\n! 
shlib := libpq.so.$(SO_MAJOR_VERSION).$(SO_MINOR_VERSION)\n LDFLAGS_SL = -G -z text\n CFLAGS += $(CFLAGS_SL)\n endif\n*** src/interfaces/libpq++/Makefile.orig\tFri Jul 24 19:09:00 1998\n--- src/interfaces/libpq++/Makefile\tFri Jul 24 19:10:08 1998\n***************\n*** 56,61 ****\n--- 56,75 ----\n CFLAGS += $(CFLAGS_SL)\n endif\n \n+ ifeq ($(PORTNAME), unixware)\n+ install-shlib-dep := install-shlib\n+ shlib := libpq.so.1\n+ LDFLAGS_SL = -G -z text\n+ CFLAGS += $(CFLAGS_SL)\n+ endif\n+ \n+ ifeq ($(PORTNAME), univel)\n+ install-shlib-dep := install-shlib\n+ shlib := libpq.so.1\n+ LDFLAGS_SL = -G -z text\n+ CFLAGS += $(CFLAGS_SL)\n+ endif\n+ \n ifeq ($(PORTNAME), hpux)\n install-shlib-dep := install-shlib\n shlib := libpq.sl\n*** src/template/unixware.orig\tFri Jul 24 19:09:00 1998\n--- src/template/unixware\tFri Jul 24 19:10:08 1998\n***************\n*** 1,8 ****\n AROPT:crs\n CFLAGS:-Xa -v -O -K i486,host,inline,loop_unroll,alloca -Dsvr4\n SHARED_LIB:-K PIC\n! SRCH_INC:\n! SRCH_LIB:\n USE_LOCALE:no\n DLSUFFIX:.so\n CC:cc\n--- 1,9 ----\n AROPT:crs\n CFLAGS:-Xa -v -O -K i486,host,inline,loop_unroll,alloca -Dsvr4\n SHARED_LIB:-K PIC\n! SRCH_INC:/opt/include\n! SRCH_LIB:/opt/lib\n USE_LOCALE:no\n DLSUFFIX:.so\n CC:cc\n+ YFLAGS:-d\n\n-- \n____ | Billy G. Allie | Domain....: [email protected]\n| /| | 7436 Hartwell | Compuserve: 76337,2061\n|-/-|----- | Dearborn, MI 48126| MSN.......: [email protected]\n|/ |LLIE | (313) 582-1540 | \n\n\n",
"msg_date": "Sat, 25 Jul 1998 23:34:41 -0400",
"msg_from": "\"Billy G. Allie\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Additional UnixWare 7 patches for latest snapshot."
}
] |
[
{
"msg_contents": "Me again (The guy with the huge tables)\n\nPostgres is started with:\nexec /usr/local/pgsql/bin/postmaster -D /usr/local/pgsql/data -o '-F -B \n64 -S 262144 -o /var/log/postgres/errors' &\n\nThe important part here is the -S 262144. I want it to use up to 256 megs\nof RAM for sorting. Datasize limits for user postgres are currently set at\n512M.\n\nThe table in question currently has 5103416 rows in it. Very simple\nstruct:\nword_id int4\nurl_id int4\nword_count int2\n\nNow when I run:\npsql -c 'select * from word_detail order by word_id' searchengine>test.out\nFATAL 1: palloc failure: memory exhausted\n\nHere is a top entry:\n PID USERNAME PRI NICE SIZE RES STATE TIME WCPU CPU COMMAND\n 670 postgres105 0 411M 195M RUN 5:27 73.66% 73.66% postgres\n \nSoon thereafter it croaks after running out of swap. I expected that it\nshould have stopped gobbling up memory once it hit the 256 meg mark (sort\nbuffer max size)\n\nOf course the bug here might be in the user :)\n\nIt's an Intel machine running FreeBSD 2.2.6-STABLE. Has lots of RAM,\n320megs or something like that I believe.\n\nOne question about the guts. What algo does postgres run for the internal\nsort. Is it quicksort then externally mergesort, or is it mergesort all\nthe way...?\n\nThanks\n-Mike\n\n",
"msg_date": "Sun, 26 Jul 1998 03:06:13 -0300 (ADT)",
"msg_from": "Michael Richards <[email protected]>",
"msg_from_op": true,
"msg_subject": "Possible bug in sort/sort question"
}
] |
[
{
"msg_contents": "subscribe\n\n\n\n",
"msg_date": "Sun, 26 Jul 1998 15:17:00 +0200 (CEST)",
"msg_from": "Cyril VELTER <[email protected]>",
"msg_from_op": true,
"msg_subject": "None"
}
] |
[
{
"msg_contents": "With current development sources, I am noticing that if I delete a large\nnumber of entries from a table, the next vacuum on the table will spend\nan *unreasonable* amount of time vacuuming the indexes on the table.\n\nHere's a sample vacuum log:\n\nNOTICE: --Relation marketorderhistory--\nNOTICE: Pages 1016: Changed 0, Reapped 1016, Empty 0, New 0; Tup 8983: Vac 63439, Crash 5, UnUsed 234, MinLen 92, MaxLen 120; Re-using: Free/Avail. Space 7013200/7009228; EndEmpty/Avail. Pages 0/1015. Elapsed 1/2 sec.\nNOTICE: Ind marketorderhistory_sequenceno_i: Pages 550; Tuples 8983: Deleted 63439. Elapsed 7876/2684 sec.\nNOTICE: Ind marketorderhistory_completionti: Pages 312; Tuples 8983: Deleted 63439. Elapsed 0/51 sec.\nNOTICE: Ind marketorderhistory_ordertime_in: Pages 273; Tuples 8983: Deleted 63439. Elapsed 1/21 sec.\nNOTICE: Ind marketorderhistory_oid_index: Pages 454; Tuples 8983: Deleted 63439. Elapsed 5047/1861 sec.\nNOTICE: Rel marketorderhistory: Pages: 1016 --> 129; Tuple(s) moved: 8983. Elapsed 2/22 sec.\nNOTICE: Ind marketorderhistory_sequenceno_i: Pages 550; Tuples 8983: Deleted 8983. Elapsed 0/3 sec.\nNOTICE: Ind marketorderhistory_completionti: Pages 312; Tuples 8983: Deleted 8983. Elapsed 0/2 sec.\nNOTICE: Ind marketorderhistory_ordertime_in: Pages 273; Tuples 8983: Deleted 8983. Elapsed 0/3 sec.\nNOTICE: Ind marketorderhistory_oid_index: Pages 454; Tuples 8983: Deleted 8983. Elapsed 1/3 sec.\n\nThree and a half hours to vacuum a table of a few thousand entries isn't\nacceptable performance in my book. 
You could drop and recreate these\nindexes in four seconds each (measured result); so what's going on here?\n\nIn case it helps, the indices in question are defined like so:\n\nCREATE UNIQUE INDEX MarketOrderHistory_oid_Index on MarketOrderHistory\nUSING btree (oid);\nCREATE INDEX MarketOrderHistory_orderTime_Index ON MarketOrderHistory\nUSING btree (orderTime);\nCREATE INDEX MarketOrderHistory_completionTime_Index ON MarketOrderHistory\nUSING btree (completionTime);\nCREATE UNIQUE INDEX MarketOrderHistory_sequenceNo_Index ON MarketOrderHistory\nUSING btree (sequenceNo);\n\nwhere orderTime and completionTime are datetime fields, sequenceNo is int4.\n\nOne thing that jumps out at me is that the indexes that are taking a\nlong time to process are unique indexes.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 26 Jul 1998 11:00:53 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Vacuuming an index takes way too long"
}
] |
[
{
"msg_contents": "DROP INDEX fails on overlength table names:\n\ntgl=> CREATE UNIQUE INDEX MarketOrderHistory_sequenceNo_Index\ntgl-> ON MarketOrderHistory USING btree (sequenceNo);\nCREATE\ntgl=> DROP INDEX MarketOrderHistory_sequenceNo_Index;\nERROR: pg_ownercheck: class \"marketorderhistory_sequenceno_index\" not found\ntgl=> DROP INDEX MarketOrderHistory_sequenceNo_I;\nDROP\n\nEvidently DROP INDEX is using a second-rate way of reducing the given\nname to canonical form for comparisons.\n\nSome further experimentation shows that CREATE TABLE won't let you\ncreate a relation name >= 32 characters in the first place. So there's\nsome inconsistency about what's done with overlength names.\n\nIt seems to me that we ought to have consistent treatment of long names,\nand the treatment I like is the one that CREATE INDEX is using:\nsilently truncate the given name to what we can handle, and accept\nit as long as the truncated form is unique. This is the time-honored\nway of handling overlength names in compilers, and it works well.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 26 Jul 1998 11:10:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Minor bug: inconsistent handling of overlength names"
},
{
"msg_contents": "On Sun, 26 Jul 1998, Tom Lane wrote:\n\n> DROP INDEX fails on overlength table names:\n> \n> tgl=> CREATE UNIQUE INDEX MarketOrderHistory_sequenceNo_Index\n> tgl-> ON MarketOrderHistory USING btree (sequenceNo);\n> CREATE\n> tgl=> DROP INDEX MarketOrderHistory_sequenceNo_Index;\n> ERROR: pg_ownercheck: class \"marketorderhistory_sequenceno_index\" not found\n> tgl=> DROP INDEX MarketOrderHistory_sequenceNo_I;\n> DROP\n> \n> Evidently DROP INDEX is using a second-rate way of reducing the given\n> name to canonical form for comparisons.\n> \n> Some further experimentation shows that CREATE TABLE won't let you\n> create a relation name >= 32 characters in the first place. So there's\n> some inconsistency about what's done with overlength names.\n> \n> It seems to me that we ought to have consistent treatment of long names,\n> and the treatment I like is the one that CREATE INDEX is using:\n> silently truncate the given name to what we can handle, and accept\n> it as long as the truncated form is unique. This is the time-honored\n> way of handling overlength names in compilers, and it works well.\n\nSame thing goes for user-names. I recently created a user named (for the \nsake of example) '1234567890', using CREATE USER. No complaints here, but \ntrying to connect with user '1234567890' fails. You can connect with \n'12345678'.\n\nMaarten\n\n_____________________________________________________________________________\n| TU Delft, The Netherlands, Faculty of Information Technology and Systems |\n| Department of Electrical Engineering |\n| Computer Architecture and Digital Technique section |\n| [email protected] |\n-----------------------------------------------------------------------------\n\n",
"msg_date": "Sun, 26 Jul 1998 21:43:17 +0200 (MET DST)",
"msg_from": "Maarten Boekhold <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Minor bug: inconsistent handling of overlength names"
},
{
"msg_contents": "I believe Tom Lane has fixed this.\n\n\n> On Sun, 26 Jul 1998, Tom Lane wrote:\n> \n> > DROP INDEX fails on overlength table names:\n> > \n> > tgl=> CREATE UNIQUE INDEX MarketOrderHistory_sequenceNo_Index\n> > tgl-> ON MarketOrderHistory USING btree (sequenceNo);\n> > CREATE\n> > tgl=> DROP INDEX MarketOrderHistory_sequenceNo_Index;\n> > ERROR: pg_ownercheck: class \"marketorderhistory_sequenceno_index\" not found\n> > tgl=> DROP INDEX MarketOrderHistory_sequenceNo_I;\n> > DROP\n> > \n> > Evidently DROP INDEX is using a second-rate way of reducing the given\n> > name to canonical form for comparisons.\n> > \n> > Some further experimentation shows that CREATE TABLE won't let you\n> > create a relation name >= 32 characters in the first place. So there's\n> > some inconsistency about what's done with overlength names.\n> > \n> > It seems to me that we ought to have consistent treatment of long names,\n> > and the treatment I like is the one that CREATE INDEX is using:\n> > silently truncate the given name to what we can handle, and accept\n> > it as long as the truncated form is unique. This is the time-honored\n> > way of handling overlength names in compilers, and it works well.\n> \n> Same thing goes for user-names. I recently created a user named (for the \n> sake of example) '1234567890', using CREATE USER. No complaints here, but \n> trying to connect with user '1234567890' fails. 
You can connect with \n> '12345678'.\n> \n> Maarten\n> \n> _____________________________________________________________________________\n> | TU Delft, The Netherlands, Faculty of Information Technology and Systems |\n> | Department of Electrical Engineering |\n> | Computer Architecture and Digital Technique section |\n> | [email protected] |\n> -----------------------------------------------------------------------------\n> \n> \n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sat, 22 Aug 1998 06:54:09 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Minor bug: inconsistent handling of overlength names"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> I believe Tom Lane has fixed this.\n>> On Sun, 26 Jul 1998, Tom Lane wrote:\n>>>> DROP INDEX fails on overlength table names:\n\nNo, I have *not* fixed it, I only complained about it ;-).\n\nI like your idea of truncating names to 31 characters in the parser;\nthis should solve the problem globally. (Most likely, if DROP INDEX\nhas a bug then the same bug may exist elsewhere as well.)\n\nIs the limit 31 not 32?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 22 Aug 1998 11:59:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Minor bug: inconsistent handling of overlength names "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > I believe Tom Lane has fixed this.\n> >> On Sun, 26 Jul 1998, Tom Lane wrote:\n> >>>> DROP INDEX fails on overlength table names:\n> \n> No, I have *not* fixed it, I only complained about it ;-).\n> \n> I like your idea of truncating names to 31 characters in the parser;\n> this should solve the problem globally. (Most likely, if DROP INDEX\n> has a bug then the same bug may exist elsewhere as well.)\n> \n> Is the limit 31 not 32?\n\n31. Used to be 32 around 6.0, but all the code to compare non-null\nterminated strings in the backend just wasn't worth it.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sat, 22 Aug 1998 17:15:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Minor bug: inconsistent handling of overlength names"
},
{
"msg_contents": "> DROP INDEX fails on overlength table names:\n> \n> tgl=> CREATE UNIQUE INDEX MarketOrderHistory_sequenceNo_Index\n> tgl-> ON MarketOrderHistory USING btree (sequenceNo);\n> CREATE\n> tgl=> DROP INDEX MarketOrderHistory_sequenceNo_Index;\n> ERROR: pg_ownercheck: class \"marketorderhistory_sequenceno_index\" not found\n> tgl=> DROP INDEX MarketOrderHistory_sequenceNo_I;\n> DROP\n> \n> Evidently DROP INDEX is using a second-rate way of reducing the given\n> name to canonical form for comparisons.\n> \n> Some further experimentation shows that CREATE TABLE won't let you\n> create a relation name >= 32 characters in the first place. So there's\n> some inconsistency about what's done with overlength names.\n> \n> It seems to me that we ought to have consistent treatment of long names,\n> and the treatment I like is the one that CREATE INDEX is using:\n> silently truncate the given name to what we can handle, and accept\n> it as long as the truncated form is unique. This is the time-honored\n> way of handling overlength names in compilers, and it works well.\n\nOK. I have modified scan.l so it now truncates identifiers over\nNAMEDATALEN, so this should fix it.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Fri, 28 Aug 1998 22:36:38 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Minor bug: inconsistent handling of overlength names"
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > I believe Tom Lane has fixed this.\n> >> On Sun, 26 Jul 1998, Tom Lane wrote:\n> >>>> DROP INDEX fails on overlength table names:\n> \n> No, I have *not* fixed it, I only complained about it ;-).\n> \n> I like your idea of truncating names to 31 characters in the parser;\n> this should solve the problem globally. (Most likely, if DROP INDEX\n> has a bug then the same bug may exist elsewhere as well.)\n> \n> Is the limit 31 not 32?\n\nDone.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Fri, 28 Aug 1998 22:55:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Minor bug: inconsistent handling of overlength names"
}
] |
[
{
"msg_contents": "I've been thinking about changing the libpq library to reduce and\ncentralize its dependence on printing messages to stderr. This\nseems like a good idea on general principles, and it will be vital\nif people want to run PostgreSQL clients on WIN32 (as Magnus Hagander's\nrecent patches make possible). On a lot of WIN32 C compilers, output\nto stderr goes into the bit-bucket, or causes the app to crash, or\neven prevents it from being built in the first place.\n\nThere are two separate issues to address. One is the handling of\nNOTICE messages from the backend (such as EXPLAIN outputs). libpq\nis hardwired to dump these onto stderr. I'm evidently not the first\nperson to be dissatisfied with that --- fe-exec.c contains\n\n\t/*\n\t * Should we really be doing this?\tThese notices\n\t * are not important enough for us to presume to\n\t * put them on stderr.\tMaybe the caller should\n\t * decide whether to put them on stderr or not.\n\t * BJH 96.12.27\n\t */\n\tfprintf(stderr, \"%s\", conn->errorMessage);\n\nWhat I propose we do is invent a callback hook that the application\ncan set to obtain control when a notice is received. The default\nhook function will just print the message to stderr as before, but\napplications can override the default to do something else. I suggest\na hook function signature like this\n\n\tvoid noticeProcessor (void * arg, const char * message)\n\nand a new libpq accessor function\n\n\tvoid PQsetNoticeProcessor (PGconn * conn,\n\t\tvoid (*noticeProcessor) (void * arg, const char * message),\n\t\tvoid * arg)\n\nThe \"arg\" pointer is saved by PQsetNoticeProcessor and subsequently\npassed to the notice processor. This gives a way for the notice\nprocessor to get to any application-dependent state associated with\nthe connection.\n\nThe other issue is that libpq has various internal error messages\nthat it willy-nilly prints on stderr, rather than handing back to the\napplication via the PQerrorMessage interface. 
Some of these can\nprobably be eliminated or converted into PQerrorMessage returns.\nIf any remain after a cleanup pass, I'm inclined to invent an\n\"errorProcessor\" hook just like the noticeProcessor hook described\nabove, so that the application can control what happens to the messages.\n\nDoes anyone have any objections or better ideas? None of this will\naffect the frontend/backend protocol, it'll just make libpq more\nadaptable to frontend environments where writing to stderr isn't a\nfriendly thing to do.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 26 Jul 1998 17:19:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Proposed cleanup of libpq's use of stderr"
}
] |
[
{
"msg_contents": "> Date: Sun, 26 Jul 1998 21:43:17 +0200 (MET DST)\n> From: Maarten Boekhold <[email protected]>\n> Subject: Re: [HACKERS] Minor bug: inconsistent handling of overlength names\n> \n> On Sun, 26 Jul 1998, Tom Lane wrote:\n> \n> > DROP INDEX fails on overlength table names:\n> >\n> > tgl=> CREATE UNIQUE INDEX MarketOrderHistory_sequenceNo_Index\n> > tgl-> ON MarketOrderHistory USING btree (sequenceNo);\n> > CREATE\n> > tgl=> DROP INDEX MarketOrderHistory_sequenceNo_Index;\n> > ERROR: pg_ownercheck: class \"marketorderhistory_sequenceno_index\" not found\n> > tgl=> DROP INDEX MarketOrderHistory_sequenceNo_I;\n> > DROP\n> >\n> > Evidently DROP INDEX is using a second-rate way of reducing the given\n> > name to canonical form for comparisons.\n> >\n> > Some further experimentation shows that CREATE TABLE won't let you\n> > create a relation name >= 32 characters in the first place. So there's\n> > some inconsistency about what's done with overlength names.\n> >\n> > It seems to me that we ought to have consistent treatment of long names,\n> > and the treatment I like is the one that CREATE INDEX is using:\n> > silently truncate the given name to what we can handle, and accept\n> > it as long as the truncated form is unique. This is the time-honored\n> > way of handling overlength names in compilers, and it works well.\n> \n> Same thing goes for user-names. I recently created a user named (for the\n> sake of example) '1234567890', using CREATE USER. No complaints here, but\n> trying to connect with user '1234567890' fails. 
You can connect with\n> '12345678'.\n\nAnd the actual username can be 32 bytes ;(\n\ninvestor=> \\d pg_user\n\nTable = pg_user\n+----------------------------------+----------------------------------+-------+\n| Field | Type |\nLength|\n+----------------------------------+----------------------------------+-------+\n| usename | name \n| 32 |\n| usesysid | int4 \n| 4 |\n| usecreatedb | bool \n| 1 |\n| usetrace | bool \n| 1 |\n| usesuper | bool \n| 1 |\n| usecatupd | bool \n| 1 |\n| passwd | text \n| var |\n| valuntil | abstime \n| 4 |\n+----------------------------------+----------------------------------+-------+\n\n----------------\nHannu\n",
"msg_date": "Mon, 27 Jul 1998 11:46:53 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: hackers-digest V1 #894"
}
] |
[
{
"msg_contents": "As promised, I am posting an analysis of the current OR clause issues. \nThey are causing people problems, and this is something that I want to\naddress for 6.4.\n\nWe have two problems. First, indexes are not used with OR's. This is a\nserious problem, with no good workaround. I have looked at the code,\nand there are two places that need changes. First, there is much code\nin the optimizer to handle OR's, but it was turned off because it did\nnot work. There is also no support in the executor to handle multiple\nOR values when using indexes. I have fixed the optimizer so it can now\nidentify OR clauses and handle them properly:\n\t\n\ttest=> explain select * from test where x=3 or x=4;\n\tNOTICE: equal: don't know whether nodes of type 200 are equal\n\tNOTICE: QUERY PLAN:\n\t\n\tIndex Scan using i_test on test (cost=4.10 size=1 width=4)\n\nAs you can see, I am getting a NOTICE I have to check into. Also, the\nexecutor is only returning the FIRST of the OR conditions, because I\nhave not yet added code to nodeIndexscan.c to handle multiple values.\n\nThis code is not installed in the main source tree. I will complete my\ncleanups and tests, and install it. I may need help with\nnodeIndexscan.c. My idea is to hook up multiple ScanKeys, and to move\non to the next one when the first finishes. Perhaps someone (Vadim?)\ncould help as I am a little lost in how to do that. Pointers to similar\ncode would help.\n\nSecond issue is the palloc failure on complex OR conditions caused by\ncnf-ifying the qualification (cnfify()). I believe there may be a way\nto restrict cnfify'ing the entire qualification. Perhaps we can prevent\nfull cnf'ification when multiple OR's are supplied, and each is a\nconstant. 
I will have to check into this, but if others have ideas, I\nwould like to hear them.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Mon, 27 Jul 1998 11:04:49 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "OR clause issues"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> We have two problems. First, indexes are not used with OR's. This is a\n> serious problem, with no good workaround. I have looked at the code,\n> and there are two places that need changes. First, there is much code\n> in the optimizer to handle OR's, but it was turned off because it did\n> not work. There is also no support in the executor to handle multiple\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n> OR values when using indexes. I have fixed the optimizer so it can now\n> identify OR clauses and handle them properly:\n> \n> test=> explain select * from test where x=3 or x=4;\n> NOTICE: equal: don't know whether nodes of type 200 are equal\n> NOTICE: QUERY PLAN:\n> \n> Index Scan using i_test on test (cost=4.10 size=1 width=4)\n> \n> As you can see, I am getting a NOTICE I have to check into. Also, the\n> executor is only returning the FIRST of the OR conditions, because I\n> have not yet added code to nodeIndexscan.c to handle multiple values.\n> \n> This code is not installed in the main source tree. I will complete my\n> cleanups and tests, and install it. I may need help with\n> nodeIndexscan.c. My idea is to hook up multiple ScanKeys, and to move\n> on to the next one when the first finishes. Perhaps someone (Vadim?)\n> could help as I am a little lost in how to do that. 
Pointers to similar\n> code would help.\n\nexecnodes.h:\n\n/* ----------------\n * IndexScanState information\n *\n * IndexPtr current index in use\n * NumIndices number of indices in this scan\n * ScanKeys Skey structures to scan index rels\n * NumScanKeys array of no of keys in each Skey struct\n\n- some support is already in Executor!\nFunctions in nodeIndexscan.c also handle this.\n\nCurrently, IndexPtr is ALWAYS ZERO - so you have to add code to \nswitch to the next index after NULL is returned by index_getnext()\n(in IndexNext()).\n\nNote that different indices (of the same table) may be used \nin single scan (x = 3 or y = 1)!\n\nThe most complex stuff to be implemented for something\nlike (x = 3 or y = 1) is to check that for tuples, fetched\nby second index sub-scan, x IS NOT EQUAL 3!\nMaybe IndexScan->indxqual can help you...\n\nVadim\n",
"msg_date": "Tue, 28 Jul 1998 00:04:56 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] OR clause issues"
}
] |
[
{
"msg_contents": "I have posted to the general news group but need a more definitive response.\n\nI am running Postgresql 6.1 on SunOS 5.5.1. We are presently looking into \nupgrading to Postgresql 6.3.2, but are having a problem with pg_dump core\ndumping. It looks like the output is OK, but I would like to know for sure.\n\nI read that pg_dump 6.2 should be able to dump a 6.1 database but can not\nlocate the 6.2 source to try it.\n\nI have tried dumping my database with the pg_dumpall provided from the 6.3.2 \ntar file,but this also core dumps.\n\nI have tried dumping the 6.1 database with 6.3.2 pg_dump, but get a user\nauthentication error.\n\nI decided to debug the 6.1 pg_dump source and determined that the core dump \noccured dumping tblinfo cleanup. Specifically when structures allocated for \nsequence tables are being cleaned up. I made the following changes in the \ngetTables function.\n\n for (i=0;i<ntups;i++) {\n tblinfo[i].oid = strdup(PQgetvalue(res,i,i_oid));\n tblinfo[i].relname = strdup(PQgetvalue(res,i,i_relname));\n tblinfo[i].relarch = strdup(PQgetvalue(res,i,i_relarch));\n tblinfo[i].relacl = strdup(PQgetvalue(res,i,i_relacl));\n tblinfo[i].sequence = (strcmp (PQgetvalue(res,i,i_relkind), \"S\") == 0);\n\n /* Local fix - needs to be initialized to zero for sequence tables.\n */\n \n tblinfo[i].numatts = 0;\n tblinfo[i].attlen = 0;\n tblinfo[i].attlen = NULL;\n tblinfo[i].inhAttrs = NULL;\n tblinfo[i].attnames = NULL;\n tblinfo[i].typnames = NULL;\n \n }\n\n\nI would like to know if my changes are appropriate, and how can I get the \ncorrect patch if one is available. Any input would be appreciated. Thanks\nin advance.\n\nChris Bower\nSoftware Engineer\nEastman Kodak\n",
"msg_date": "Mon, 27 Jul 1998 11:31:36 -0400",
"msg_from": "[email protected] (J Christopher Bower)",
"msg_from_op": true,
"msg_subject": "6.1 pg_dump core dump"
},
{
"msg_contents": "[email protected] (J Christopher Bower) writes:\n> I decided to debug the 6.1 pg_dump source and determined that the core dump \n> occured dumping tblinfo cleanup. Specifically when structures allocated for \n> sequence tables are being cleaned up.\n\nIt looks like this bug has been fixed in a different way in the current\npg_dump sources (clearTableInfo now knows that sequences don't have the\nstandard attributes). It also looks like pg_dump has changed enough\nsince 6.1 that any patches wouldn't be easily transferred back and\nforth anyway.\n\nYou might be able to run the current pg_dump against your 6.1 database\nby recompiling the current pg_dump.c/.h/common.c atop the 6.1 libpq.\nThat should cure the protocol incompatibility. However, pg_dump is\nfriendly enough with the system table layouts that I fear it might not\nwork with an old database anyway.\n\nProbably your best bet is just to go ahead and use your patched pg_dump\nto extract data from your old database.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 28 Jul 1998 10:42:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.1 pg_dump core dump "
}
] |
[
{
"msg_contents": "I'm trying to build the ODBC driver to use with an iODBC interface for\nthe upcoming release of ApplixWare on Linux. I notice that in the last\nfew days the psqlodbc distribution has acquired a Makefile and a\nREADME.Linux, but the build is not going very well.\n\nHas anyone tried to build on a Unix box recently? If so, how?? The first\nfile fails on a WINAPI-ed typedef...\n\n - Tom\n\ngolem$ make\ngcc -g -c -Wall -O -fPIC -I. -I.. -I -I -g -DHAVE_CONFIG_H -c\ninfo.c -o info.o\nIn file included from info.c:41:\nconnection.h:166: parse error before `*'\nconnection.h:177: parse error before `*'\nconnection.h:202: parse error before `HINSTANCE'\nconnection.h:202: warning: no semicolon at end of struct or union\nconnection.h:203: warning: data definition has no type or storage class\nconnection.h:204: parse error before `DriverToDataSource'\nconnection.h:204: warning: data definition has no type or storage class\nconnection.h:207: parse error before `}'\ninfo.c: In function `SQLGetInfo':\ninfo.c:190: dereferencing pointer to incomplete type\ninfo.c:197: dereferencing pointer to incomplete type\ninfo.c:302: dereferencing pointer to incomplete type\ninfo.c:303: dereferencing pointer to incomplete type\ninfo.c:622: dereferencing pointer to incomplete type\ninfo.c:717: dereferencing pointer to incomplete type\ninfo.c:724: dereferencing pointer to incomplete type\ninfo.c:725: dereferencing pointer to incomplete type\ninfo.c:62: warning: `p' might be used uninitialized in this function\ninfo.c: In function `SQLGetTypeInfo':\ninfo.c:746: warning: left-hand operand of comma expression has no effect\ninfo.c:746: warning: statement with no effect\ninfo.c: In function `SQLTables':\ninfo.c:1007: warning: left-hand operand of comma expression has no\neffect\ninfo.c:1007: warning: statement with no effect\ninfo.c:1017: dereferencing pointer to incomplete type\ninfo.c:1180: warning: left-hand operand of comma expression has no\neffect\ninfo.c:1180: warning: left-hand 
operand of comma expression has no\neffect\ninfo.c:1180: warning: left-hand operand of comma expression has no\neffect\ninfo.c:1180: warning: statement with no effect\n",
"msg_date": "Tue, 28 Jul 1998 02:22:13 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "psqlodbc"
},
{
"msg_contents": "\"Thomas G. Lockhart\" <[email protected]> writes:\n\n> \n> I'm trying to build the ODBC driver to use with an iODBC interface for\n> the upcoming release of ApplixWare on Linux. I notice that in the last\n> few days the psqlodbc distribution has acquired a Makefile and a\n> README.Linux, but the build is not going very well.\n> \n> Has anyone tried to build on a Unix box recently? If so, how?? The first\n> file fails on a WINAPI-ed typedef...\n> \n\nTry with this patch. Since I doubt anyone needs translation DLLs under\nlinux I simply ifdefed out some code. Though it's posible to use\ndlopen/dlsym/dlclose instead of LoadLibrary/GetProcAddress/FreeLibrary...\n\nAleksey\n\n\ndiff -c psqlodbc.old/connection.c psqlodbc/connection.c\n*** psqlodbc.old/connection.c\tTue Jul 28 13:03:23 1998\n--- psqlodbc/connection.c\tTue Jul 28 12:57:17 1998\n***************\n*** 380,390 ****\n--- 380,392 ----\n \t\t}\n \t}\n \n+ #ifndef UNIX\n \t/*\tCheck for translation dll */\n \tif ( self->translation_handle) {\n \t\tFreeLibrary (self->translation_handle);\n \t\tself->translation_handle = NULL;\n \t}\n+ #endif\n \n \tmylog(\"exit CC_Cleanup\\n\");\n \treturn TRUE;\n***************\n*** 393,399 ****\n int\n CC_set_translation (ConnectionClass *self)\n {\n! \n \tif (self->translation_handle != NULL) {\n \t\tFreeLibrary (self->translation_handle);\n \t\tself->translation_handle = NULL;\n--- 395,401 ----\n int\n CC_set_translation (ConnectionClass *self)\n {\n! 
#ifndef UNIX\n \tif (self->translation_handle != NULL) {\n \t\tFreeLibrary (self->translation_handle);\n \t\tself->translation_handle = NULL;\n***************\n*** 424,429 ****\n--- 426,432 ----\n \t\tself->errormsg = \"Could not find translation DLL functions.\";\n \t\treturn FALSE;\n \t}\n+ #endif\n \n \treturn TRUE;\n }\ndiff -c psqlodbc.old/connection.h psqlodbc/connection.h\n*** psqlodbc.old/connection.h\tTue Jul 28 13:03:10 1998\n--- psqlodbc/connection.h\tTue Jul 28 12:36:56 1998\n***************\n*** 162,167 ****\n--- 162,174 ----\n \tchar\t\t\tname[MAX_TABLE_LEN+1];\n };\n \n+ #ifdef UNIX\n+ #define WINAPI CALLBACK\n+ #define DLLHANDLE void *\n+ #else\n+ #define DLLHANDLE HINSTANCE\n+ #endif\n+ \n /* Translation DLL entry points */\n typedef BOOL (FAR WINAPI *DataSourceToDriverProc) (UDWORD,\n \t\t\t\t\tSWORD,\n***************\n*** 199,205 ****\n \tint\t\t\t\tntables;\n \tCOL_INFO\t\t**col_info;\n \tlong translation_option;\n! \tHINSTANCE translation_handle;\n \tDataSourceToDriverProc DataSourceToDriver;\n \tDriverToDataSourceProc DriverToDataSource;\n \tchar\t\t\ttransact_status;\t\t/* Is a transaction is currently in progress */\n--- 206,212 ----\n \tint\t\t\t\tntables;\n \tCOL_INFO\t\t**col_info;\n \tlong translation_option;\n! \tDLLHANDLE translation_handle;\n \tDataSourceToDriverProc DataSourceToDriver;\n \tDriverToDataSourceProc DriverToDataSource;\n \tchar\t\t\ttransact_status;\t\t/* Is a transaction is currently in progress */\n\n\n-- \nAleksey Demakov\[email protected]\n",
"msg_date": "28 Jul 1998 13:45:13 +0700",
"msg_from": "Aleksey Demakov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [INTERFACES] psqlodbc"
},
{
"msg_contents": "> > I'm trying to build the ODBC driver to use with an iODBC interface \n> > for the upcoming release of ApplixWare on Linux.\n> Try with this patch.\n\nOK, that seems to help (though there is still major ugliness with the\nmacro which disables mylog()).\n\nAnyway, I now have a sharable library, and ApplixWare is running. But I\ndon't see any candidate servers when I try to select a database to open.\nI made a ~/.odbc.ini file, and included entries like:\n\n[Postgres]\nDebug = 0\nCommLog = 1\nDriver = /opt/postgres/current/lib/libpsqlodbc.so\n\nbut see nothing in the ApplixWare dialog box. Does anyone have a working\n.odbc.ini file, perhaps for MySQL? I guess I expected to see \"Postgres\"\nas a candidate database in the ApplixWare dialog box, even if the rest\nof the configuration was screwed up. What else needs to be set up??\n\nTIA\n\n - Tom\n",
"msg_date": "Thu, 30 Jul 1998 05:10:36 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [INTERFACES] psqlodbc"
},
{
"msg_contents": "> > Anyway, I now have a sharable library, and ApplixWare is running. \n> > But I don't see any candidate servers...\n> > I made a ~/.odbc.ini file...\n> you need one other entry which corresponds to the Postgres entry...\n> You'll then see a \"Postgres Database\" entry in the appropriate Data\n> dialog box.\n\nOK, I followed this suggestion from the Applix folks and it helped. Here\nis the file which does better:\n\n<start .odbc.ini file>\n[ODBC Data Sources]\nPostgres = Postgres Data\n\n[Postgres]\nDebug = 0\nCommLog = 1\nDriver = /opt/postgres/current/lib/libpsqlodbc.so\n\n<eof>\n\nAn now Applix can see some candidates. I then got an error from Applix\nregarding a missing library:\n\n/opt/applix/axdata/elfodbc: can't load library 'libodbc.so'\n\nwhich was apparently in the wrong directory in the Applix distribution.\nSo, I made a soft link to the normal directory area, and now Applix got\nfurther, asking me for a username and password. However, I got the\nfollowing error after that:\n\n/opt/applix/axdata/elfodbc: can't resolve symbol 'parse_statement'\n\nWhen I run \"nm\" on /opt/postgres/current/lib/libpsqlodbc.so.0.24 I see\nthe following entry:\n\nmythos> nm libpsqlodbc.so.0.24 | grep -i parse_statement\n U parse_statement\n\nwhich indicates an unresolved symbol. So, I found that parse.c was not\nbeing compiled by Makefile.unx (and my derivative) so got that compiled\nand linked and things are much closer to working! 
:)\n\nOK, so now I get an error message saying:\n\n missing a username, port, or server name\n\nif I use an entry in .odbc.ini which only specifies the driver library\n(as above), and I get a similar error message when I specify an\n.odbc.ini entry which looks like:\n\n[PostgresFull]\nDSN = test\nServer = localhost\nUID = tgl\nPort = 5432\nDriver = /opt/postgres/current/lib/libpsqlodbc.so\n\nIf I explicitly type the server name (as \"localhost\") in the Applix\ndialog box, I get a different error message:\n\naxnet: Cannot launch gateway on server\nnot a tcp service in /etc/services\n\nSo, anyone have any other hints? What information must be in a real\n.odbc.ini file for MySQL to work? I don't have much security turned on\nin Postgres, but do have the TCP/IP option specified on the server.\n\nTried adding an entry in /etc/services, but that alone didn't change the\nerror message. Anyone have more hints?? :)\n\n - Tom\n\nOh, btw I have started fixing up a makefile which actually fits into the\nPostgres distribution...\n",
"msg_date": "Fri, 31 Jul 1998 02:44:56 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [INTERFACES] psqlodbc"
}
] |
[
{
"msg_contents": "I've just noticed that libpq doesn't cope very gracefully if the backend\nexits when not in the middle of a query (ie, because the postmaster told\nit to quit after some other BE crashed). The behavior in psql, for\nexample, is that the next time you issue a query, psql just exits\nwithout printing anything at all. This is Not Friendly, especially\nconsidering that the BE sent a nice little notice message before it quit.\n\nThe main problem is that if the next thing you do is to send a new query,\nsend() sees that the connection has been closed and generates a SIGPIPE\nsignal. By default that terminates the frontend process.\n\nWe could cure this by having libpq disable SIGPIPE, but we would have\nto disable it before each send() and re-enable afterwards to avoid\naffecting the behavior of the rest of the frontend application.\nTwo additional kernel calls per query sounds like a lot of overhead.\n(We do actually do this when trying to close the connection, but not\nduring normal queries.)\n\nPerhaps a better answer is to have PQsendQuery check for fresh input\nfrom the backend before trying to send the query. This would have two\nside effects:\n 1. If a NOTICE message has arrived, we could print it.\n 2. If EOF is detected, we will reset the connection state to\n CONNECTION_BAD, which PQsendQuery can use to avoid trying to send.\n\nThe minimum cost to do this is one kernel call (a select(), which\nunfortunately is probably a fairly expensive call) in the normal\ncase where no new input has arrived. 
Another objection is that it's\nnot 100% bulletproof --- if the backend closes the connection in the\nwindow between select() and send() then you can still get SIGPIPE'd.\nThe odds of this seem pretty small however.\n\nI'm inclined to go with answer #2, because it seems to have less\nof a performance impact, and it will ensure that the backend's polite\n\"The Postmaster has informed me that some other backend died abnormally\nand possibly corrupted shared memory.\" message gets displayed. With\napproach #1 we'd still have to go through some pushups to get the\nnotice to come out.\n\nDoes anyone have an objection, or a better idea?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 28 Jul 1998 13:23:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Coping with backend crash in libpq"
},
{
"msg_contents": "On Tue, Jul 28, 1998 at 01:23:35PM -0400, Tom Lane wrote:\n> I've just noticed that libpq doesn't cope very gracefully if the backend\n> exits when not in the middle of a query (ie, because the postmaster told\n> it to quit after some other BE crashed). The behavior in psql, for\n> example, is that the next time you issue a query, psql just exits\n> without printing anything at all. This is Not Friendly, especially\n> considering that the BE sent a nice little notice message before it quit.\n> \n> The main problem is that if the next thing you do is to send a new query,\n> send() sees that the connection has been closed and generates a SIGPIPE\n> signal. By default that terminates the frontend process.\n> \n> We could cure this by having libpq disable SIGPIPE, but we would have\n> to disable it before each send() and re-enable afterwards to avoid\n> affecting the behavior of the rest of the frontend application.\n> Two additional kernel calls per query sounds like a lot of overhead.\n> (We do actually do this when trying to close the connection, but not\n> during normal queries.)\n> \n> Perhaps a better answer is to have PQsendQuery check for fresh input\n> from the backend before trying to send the query. This would have two\n> side effects:\n> 1. If a NOTICE message has arrived, we could print it.\n> 2. If EOF is detected, we will reset the connection state to\n> CONNECTION_BAD, which PQsendQuery can use to avoid trying to send.\n> \n> The minimum cost to do this is one kernel call (a select(), which\n> unfortunately is probably a fairly expensive call) in the normal\n> case where no new input has arrived. 
Another objection is that it's\n> not 100% bulletproof --- if the backend closes the connection in the\n> window between select() and send() then you can still get SIGPIPE'd.\n> The odds of this seem pretty small however.\n> \n> I'm inclined to go with answer #2, because it seems to have less\n> of a performance impact, and it will ensure that the backend's polite\n> \"The Postmaster has informed me that some other backend died abnormally\n> and possibly corrupted shared memory.\" message gets displayed. With\n> approach #1 we'd still have to go through some pushups to get the\n> notice to come out.\n> \n> Does anyone have an objection, or a better idea?\n> \n> \t\t\tregards, tom lane\n> \n\nNot really.\n\nI've noticed this kind of problem where the backend will fault in some way, \nand after it does so, the library gets \"confused\".\n\nWe have a couple of processes here that are NEVER supposed to exit. They\nopen a connection for each transaction, and close it at the end. If\nsomething happens to the backend where it dies abnormally, these processes\nwill sometimes get into an odd state in the libpq library where all new\nconnection attempts fail immediately.\n\nI've yet to find a foolproof coding way around this particular problem.\n\n--\n-- \nKarl Denninger ([email protected])| MCSNet - Serving Chicagoland and Wisconsin\nhttp://www.mcs.net/ | T1's from $600 monthly / All Lines K56Flex/DOV\n\t\t\t | NEW! Corporate ISDN Prices dropped by up to 50%!\nVoice: [+1 312 803-MCS1 x219]| EXCLUSIVE NEW FEATURE ON ALL PERSONAL ACCOUNTS\nFax: [+1 312 803-4929] | *SPAMBLOCK* Technology now included at no cost\n",
"msg_date": "Tue, 28 Jul 1998 12:44:59 -0500",
"msg_from": "Karl Denninger <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [INTERFACES] Coping with backend crash in libpq"
},
{
"msg_contents": "> I've just noticed that libpq doesn't cope very gracefully if the backend\n> exits when not in the middle of a query (ie, because the postmaster told\n> it to quit after some other BE crashed). The behavior in psql, for\n> example, is that the next time you issue a query, psql just exits\n> without printing anything at all. This is Not Friendly, especially\n> considering that the BE sent a nice little notice message before it quit.\n\nI say, install the signal handler for SIGPIPE on connection startup, but\nwhen you install it, it returns the previous defined action. If we find\nthere was a previous defined action, we can re-install theirs, and let\nit handle the sigpipe. If an application later defines it's own\nsigpipe, over-riding ours, then they get no error message.\n\nHowever, I see psql setting the SIGPIPE handler all over the place, so I\ndon't think that will work there. How about SIGURG? Oops, not portable\nfor unix domain sockets. Can we send a signal to the process, telling\nit the backend has exited. We have that information now, so why not use\nit. Define a signal handler for SIGURG or SIGUSR1, and have that print\nout a message. If the app redefines that, it will get confused when we\nsend the signal from the postmaster. Oops, we can't send signals to the\nclient because they may be owned by other users.\n\nI am stumped. Let me think about it.\n\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Wed, 29 Jul 1998 00:59:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Coping with backend crash in libpq"
},
{
"msg_contents": "> > I've just noticed that libpq doesn't cope very gracefully if the backend\n> > exits when not in the middle of a query (ie, because the postmaster told\n> > it to quit after some other BE crashed). The behavior in psql, for\n> > example, is that the next time you issue a query, psql just exits\n> > without printing anything at all. This is Not Friendly, especially\n> > considering that the BE sent a nice little notice message before it quit.\n> \n> I say, install the signal handler for SIGPIPE on connection startup, but\n> when you install it, it returns the previous defined action. If we find\n> there was a previous defined action, we can re-install theirs, and let\n> it handle the sigpipe. If an application later defines it's own\n> sigpipe, over-riding ours, then they get no error message.\n> \n> However, I see psql setting the SIGPIPE handler all over the place, so I\n> don't think that will work there. How about SIGURG? Oops, not portable\n> for unix domain sockets. Can we send a signal to the process, telling\n> it the backend has exited. We have that information now, so why not use\n> it. Define a signal handler for SIGURG or SIGUSR1, and have that print\n> out a message. If the app redefines that, it will get confused when we\n> send the signal from the postmaster. Oops, we can't send signals to the\n> client because they may be owned by other users.\n> \n> I am stumped. Let me think about it.\n\nHmmm, perhaps fix psql so that it uses SIGPIPE more sensibly. SIGPIPE really\nis the right signal to catch here. \n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n - If simplicity worked, the world would be overrun with insects. -\n",
"msg_date": "Tue, 28 Jul 1998 22:16:02 -0700 (PDT)",
"msg_from": "[email protected] (David Gould)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Coping with backend crash in libpq"
},
{
"msg_contents": ">> I say, install the signal handler for SIGPIPE on connection startup, but\n>> when you install it, it returns the previous defined action. If we find\n>> there was a previous defined action, we can re-install theirs, and let\n>> it handle the sigpipe. If an application later defines it's own\n>> sigpipe, over-riding ours, then they get no error message.\n\nThis makes our correct functioning dependent on the application's\nSIGPIPE handler, which doesn't strike me as a good solution.\nAnother problem is that if we leave a SIGPIPE handler in place,\nit will get called for SIGPIPEs on *other* pipes that the surrounding\napplication may have open, and we have no way to know what the right\nresponse is. (AFAIK a SIGPIPE handler can't even portably tell which\nconnection has SIGPIPEd.)\n\n>> Can we send a signal to the process, telling\n>> it the backend has exited.\n\nNo. The client isn't necessarily even on the same machine as the\npostmaster/backend. Even if it were, I don't think we can take over\na signal code that the frontend application might be using for something\nelse.\n\n> Hmmm, perhaps fix psql so that it uses SIGPIPE more sensibly. SIGPIPE really\n> is the right signal to catch here. \n\nWell, psql is also using SIGPIPE sensibly: it's trying to prevent a\nhangup when sending data down a pipe to a subprocess that might\nterminate early. The real problem here is that SIGPIPE is designed\nwrong. It ought to be possible to enable/disable SIGPIPE on a per-\nfile-handle basis ... but AFAIK that's not possible, and it's certainly\nnot portable even if some Unixes support it.\n\n\nI'm still in favor of the check-for-input-just-before-send solution.\nThat does leave a small window where we can fail, but really the failure\nshould be pretty improbable: you have to assume that some other backend\ncoredumps while yours is idle, and in a window of microseconds right\nbefore you are going to send a new command to your backend. 
I think the\nmess-with-catching-SIGPIPE approach is actually more likely to have\nproblems in practice. It could interfere with normal functioning of\nthe frontend app, whereas any possible failure of the other way requires\na previous failure in some backend. Production backends shouldn't\ncoredump too darn often, one hopes.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 29 Jul 1998 11:02:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Coping with backend crash in libpq "
}
] |
[
{
"msg_contents": "subscribe\n\n",
"msg_date": "Wed, 29 Jul 1998 16:30:44 +0200",
"msg_from": "\"Robert Nosko\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "None"
}
] |
[
{
"msg_contents": "Hi David. I see that Informix now has a no-cost developers version of a\ndatabase available for Linux (Informix-SE; sounds sort of light-weight).\nI am planning on installing it to see what it can do (if it will work\nwith my RH4.2 system; it claims to work on Caldera and SuSE only).\nAnyway, what do you know about it? Would it be a good system to\nrecommend for those (for one reason or another) moving on from Postgres?\nJust curious...\n\n - Tom\n",
"msg_date": "Wed, 29 Jul 1998 14:54:50 +0000",
"msg_from": "\"Thomas G. Lockhart\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Informix on Linux"
},
{
"msg_contents": "> \n> Hi David. I see that Informix now has a no-cost developers version of a\n> database available for Linux (Informix-SE; sounds sort of light-weight).\n> I am planning on installing it to see what it can do (if it will work\n> with my RH4.2 system; it claims to work on Caldera and SuSE only).\n> Anyway, what do you know about it? Would it be a good system to\n> recommend for those (for one reason or another) moving on from Postgres?\n> Just curious...\n> \n> - Tom\n> \n\nPretty cool huh? I have been telling anyone who would listen that we needed\nto be on Linux for over two years. Don't know if that had any effect at all,\nor if Informix just woke up and smelled the coffee. I wish it had been UDO,\nbut SE is supposed to be very nice for what it is, so this is a good start.\n\nAnyhow, it should be fine on RH 4.2. Redhat was not mentioned because we\nhad some glibc issues discovered too late in the release process.\n\nAccording to claims on comp.databases.informix, everything works on a glibc\nsystem (assuming libc5 exists) except compiling and linking ESQL/C programs.\nThis can even be made to work if you tell gcc that it is to use only libc5.\nI have not tried it myself.\n\nAs an upgrade to postgres? Depends on what you are trying to use it for.\nIf you need objects and types and server side functions, postgres is a\ngood choice (only choice at this moment).\n\nI am not very familiar with SE myself as I work on Illustra and IUS/UDO, so\ntake this with a grain of salt, but I would look to SE if I needed more\nperformance for straight up SQL. It is also probably quite a bit more stable\nthan postgres. And it should handle concurrent access/update to tables a\nlot better. 
But, see the brochure at www.informix.com to get the details.\n\nOr even better, download it and try it out ;-)\n\n-dg\n\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n - If simplicity worked, the world would be overrun with insects. -\n",
"msg_date": "Wed, 29 Jul 1998 09:32:31 -0700 (PDT)",
"msg_from": "[email protected] (David Gould)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Informix on Linux"
}
] |
[
{
"msg_contents": "\nThomas wrote:\n\n>Hi David. I see that Informix now has a no-cost developers version of a\n>database available for Linux (Informix-SE; sounds sort of light-weight).\n>I am planning on installing it to see what it can do (if it will work\n>with my RH4.2 system; it claims to work on Caldera and SuSE only).\n>Anyway, what do you know about it? Would it be a good system to\n>recommend for those (for one reason or another) moving on from Postgres?\n>Just curious...\n\nSE is actually the light weight DB Server from Informix. It stores its tables and indexes\nin separate files per table/index like we do. Every connection gets its own process on the Server.\nIt lacks the monitoring interface that makes life so much easier on the Dynamic Server.\nThe really big winner for those moving on from PostgreSQL would be \nInformix Dynamic Server with the Universal Data Option. This server's technology is\nactually based on postgres, and has most of our functionality. \nMichael Stonebraker is at Informix driving this side of the Server. I expect IDS/UD on Linux\nto be the next Announcement Informix makes, and yes, it is highly recommendable.\n(I guess we better ask David on this one :-)\n\nBesides PostgreSQL, Informix is my favorite ;-)\nAndreas\n\n",
"msg_date": "Wed, 29 Jul 1998 17:14:32 +0200",
"msg_from": "Andreas Zeugswetter <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] Informix on Linux"
}
] |
[
{
"msg_contents": "[email protected] writes:\n> The command\n> cvs -z3 -d :pserver:[email protected]:/usr/local/cvsroot co -P pgsql\n> returns \n> Fatal error, aborting.\n> : no such user\n\nYeah, the cvs server at postgresql.org has been broken for a couple days\nnow. I'm seeing the same and some other people have complained as well.\n\nMarc, are you awake?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 29 Jul 1998 11:58:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Problem with CVS access to current sources "
},
{
"msg_contents": "Dear PostgreSQL gurus,\n\nfirst of all, many thanks to everyone involved in writing and\nenhancing PostgreSQL! It has grown enormously during the last\nmonths and is a most impressive system. \n\nSorry if this has been dealt with, but in the last few days,\nI have not been able to access the PostgreSQL anonymous cvs server. \n\nThe command\n\n cvs -z3 -d :pserver:[email protected]:/usr/local/cvsroot co -P pgsql\n\nreturns \n\n Fatal error, aborting.\n : no such user\n\n(it's called from a script and has worked flawlessly for quite a while\nbefore; I have not noted any change in the description of CVS access\nat the WWW site...). Repeating the cvs login did not change the result\nof the checkout call...\n\nI'd very much appreciate any hint on what error I might be making ...\n\nBest regards,\n\nErnst\n",
"msg_date": "Wed, 29 Jul 1998 16:59:56 GMT",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Problem with CVS access to current sources"
},
{
"msg_contents": "On Wed, 29 Jul 1998, Tom Lane wrote:\n\n> [email protected] writes:\n> > The command\n> > cvs -z3 -d :pserver:[email protected]:/usr/local/cvsroot co -P pgsql\n> > returns \n> > Fatal error, aborting.\n> > : no such user\n> \n> Yeah, the cvs server at postgresql.org has been broken for a couple days\n> now. I'm seeing the same and some other people have complained as well.\n> \n> Marc, are you awake?\n\nTry it now...I just tried it using anon-cvs from home, and it appears to\nwork...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sat, 1 Aug 1998 12:28:58 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problem with CVS access to current sources "
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> Try it now...I just tried it using anon-cvs from home, and it appears to\n> work...\n\nYup, anon cvs is working for me again. Thanks.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 01 Aug 1998 15:58:04 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Problem with CVS access to current sources "
},
{
"msg_contents": "Hmm,\n\nI tried (never had cvs access):\n\nmira:~/cvs$ export CVSROOT=:pserver:[email protected]:/usr/local/cvsroot\nmira:~/cvs$ cvs login\n(Logging in to [email protected])\nCVS password: \ncvs [login aborted]: incorrect password\nmira:~/cvs$ \n\n\tRegards,\n\n\t\tOleg\n\nOn Sat, 1 Aug 1998, Tom Lane wrote:\n\n> Date: Sat, 01 Aug 1998 15:58:04 -0400\n> From: Tom Lane <[email protected]>\n> To: The Hermit Hacker <[email protected]>\n> Cc: [email protected]\n> Subject: Re: [HACKERS] Problem with CVS access to current sources \n> \n> The Hermit Hacker <[email protected]> writes:\n> > Try it now...I just tried it using anon-cvs from home, and it appears to\n> > work...\n> \n> Yup, anon cvs is working for me again. Thanks.\n> \n> \t\t\tregards, tom lane\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Sun, 2 Aug 1998 01:46:48 +0400 (MSK DST)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problem with CVS access to current sources "
},
{
"msg_contents": "On Sat, 1 Aug 1998, The Hermit Hacker wrote:\n\n> On Wed, 29 Jul 1998, Tom Lane wrote:\n> \n> > [email protected] writes:\n> > > The command\n> > > cvs -z3 -d :pserver:[email protected]:/usr/local/cvsroot co -P pgsql\n> > > returns \n> > > Fatal error, aborting.\n> > > : no such user\n> > \n> > Yeah, the cvs server at postgresql.org has been broken for a couple days\n> > now. I'm seeing the same and some other people have complained as well.\n> > \n> > Marc, are you awake?\n> \n> Try it now...I just tried it using anon-cvs from home, and it appears to\n> work...\n\nWell, CVS is working here now, but I'm still getting:\n\nERROR: pg_atoi: error in \"f\": can't parse \"f\"\nERROR: pg_atoi: error in \"f\": can't parse \"f\"\n\nwhen running initdb. Here's some more detail (from --debug):\n\n> creating bootstrap relation\nbootstrap relation created ok\n> Commit End\ntuple 1242<Inserting value: 'boolin'\nTyp == NULL, typeindex = 3 idx = 0\nboolin End InsertValue\nInserting value: '11'\nTyp == NULL, typeindex = 10 idx = 1\n11 End InsertValue\nInserting value: 'f'\nTyp == NULL, typeindex = 10 idx = 2\nERROR: pg_atoi: error in \"f\": can't parse \"f\"\nERROR: pg_atoi: error in \"f\": can't parse \"f\"\ninitdb: could not create template database\ninitdb: cleaning up by wiping out /usr/local/dbase/data/base/template1\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n\n",
"msg_date": "Sat, 1 Aug 1998 23:41:01 +0100 (GMT)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problem with CVS access to current sources "
},
{
"msg_contents": "> On Sat, 1 Aug 1998, The Hermit Hacker wrote:\n> \n> > On Wed, 29 Jul 1998, Tom Lane wrote:\n> > \n> > > [email protected] writes:\n> > > > The command\n> > > > cvs -z3 -d :pserver:[email protected]:/usr/local/cvsroot co -P pgsql\n> > > > returns \n> > > > Fatal error, aborting.\n> > > > : no such user\n> > > \n> > > Yeah, the cvs server at postgresql.org has been broken for a couple days\n> > > now. I'm seeing the same and some other people have complained as well.\n> > > \n> > > Marc, are you awake?\n> > \n> > Try it now...I just tried it using anon-cvs from home, and it appears to\n> > work...\n> \n> Well, CVS is now working here now, but I'm still getting:\n> \n> ERROR: pg_atoi: error in \"f\": can't parse \"f\"\n> ERROR: pg_atoi: error in \"f\": can't parse \"f\"\n> \n> when running initdb. Heres some more details (from --debug):\n> \n\nIt is because there are two copies of initdb in initdb.sh. Someone\nreported the problem, but no one fixed it.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sat, 1 Aug 1998 18:56:14 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problem with CVS access to current sources"
},
{
"msg_contents": "> \n> It is because there are two copies of initdb in initdb.sh. Someone\n> reported the problem, but no one fixed it.\n> \n\nI have fixed this in the current CVS tree.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n",
"msg_date": "Sat, 1 Aug 1998 19:11:54 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problem with CVS access to current sources"
},
{
"msg_contents": "On Sat, 1 Aug 1998, Bruce Momjian wrote:\n\n> It is because there are two copies of initdb in initdb.sh. Someone\n> reported the problem, but no one fixed it.\n\n\tFixed...:(\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sat, 1 Aug 1998 21:15:57 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problem with CVS access to current sources"
},
{
"msg_contents": "On Sat, 1 Aug 1998, Bruce Momjian wrote:\n\n> > \n> > It is because there are two copies of initdb in initdb.sh. Someone\n> > reported the problem, but no one fixed it.\n> > \n> \n> I have fixed this in the current CVS tree.\n\n\tOops...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sat, 1 Aug 1998 21:16:15 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problem with CVS access to current sources"
},
{
"msg_contents": "On Sun, 2 Aug 1998, Oleg Bartunov wrote:\n\n> Hmm,\n> \n> I tried (never had cvs access):\n> \n> mira:~/cvs$ export CVSROOT=:pserver:[email protected]:/usr/local/cvsroot\n> mira:~/cvs$ cvs login\n> (Logging in to [email protected])\n> CVS password: \n> cvs [login aborted]: incorrect password\n\n\tWhat did you use for password?\n\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sat, 1 Aug 1998 21:16:41 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problem with CVS access to current sources "
},
{
"msg_contents": "On Sat, 1 Aug 1998, Bruce Momjian wrote:\n\n> > \n> > It is because there are two copies of initdb in initdb.sh. Someone\n> > reported the problem, but no one fixed it.\n> > \n> \n> I have fixed this in the current CVS tree.\n\nThanks.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n\n",
"msg_date": "Sun, 2 Aug 1998 10:26:11 +0100 (GMT)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problem with CVS access to current sources"
},
{
"msg_contents": "Hi Marc, \n\nthank you very much, CVS works like a charm again...\n\nBest regards,\n\nErnst\n",
"msg_date": "Sun, 2 Aug 1998 09:58:24 GMT",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problem with CVS access to current sources"
}
] |
[
{
"msg_contents": "Vadim wrote:\n>It's nice to see expected results but I still have some\n>new questions - please help!\n\n>1. CREATE TABLE test (x integer, y integer)\n>2. INSERT INTO test VALUES (1, 1);\n> INSERT INTO test VALUES (1, 2);\n> INSERT INTO test VALUES (3, 2);\n>3. run two session T1 and T2 \n>4. in session T2 run\n> UPDATE test SET x = 1, y = 2 WHERE x <> 1 OR y <> 2;\n2 rows updated.\n>5. in session T1 run\n> SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;\n> UPDATE test SET y = 3 WHERE x = 1;\nblocks\n> --\n> -- 1st record will be changed by T2, qual for new record\n> -- version will be OK, but T1 should be aborted (???)\n> --\n>6. in session T2 run\n> COMMIT;\n>7. in session T1 run\n> ROLLBACK; -- just to be sure -:)\nUPDATE test SET y = 3 WHERE x = 1\n *\nERROR at line 1:\nORA-08177: can't serialize access for this transaction\nSQL> rollback;\n\nRollback complete.\n>8. now in session T2 run\n> UPDATE test SET x = 2;\n3 rows updated.\n>9. in session T1 run\n> SET TRANSACTION ISOLATION LEVEL READ COMMITTED;\n> UPDATE test SET y = 4 WHERE x = 1 or x = 2;\nblocks\n>11. in session T2 run\n> COMMIT;\nCommit complete.\nin T1: 3 rows updated.\n>12. in session T1 run\n> SELECT * FROM test; -- results?\n> ^^^^^^^^^^^^^^^^^^\n>I would like to be sure that T1 will update table...\n X Y\n---------- ----------\n 2 4\n 2 4\n 2 4\n\nSo it does.\n\nAndreas\n\n\n",
"msg_date": "Thu, 30 Jul 1998 11:32:44 +0200",
"msg_from": "Andreas Zeugswetter <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] Q about read committed in Oracle..."
},
{
"msg_contents": "On Thu, Jul 30, 1998 at 11:32:44AM +0200, Andreas Zeugswetter wrote:\n> Vadim wrote:\n> >It's nice to see expected results but I still have some\n> >new questions - please help!\n> ...\n\nIt seems you had the results in the wrong order Andreas.\n\nMichael\n-- \nDr. Michael Meskes\t\[email protected], [email protected]\nGo SF49ers! Go Rhein Fire!\tUse Debian GNU/Linux! \n",
"msg_date": "Thu, 30 Jul 1998 21:42:39 +0200",
"msg_from": "\"Dr. Michael Meskes\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Q about read committed in Oracle..."
}
] |
[
{
"msg_contents": "\nThe CVS problem is still hampering me in getting JDBC up to date. I'm still\ngetting:\n\n[peter@maidast pgsql]$ cvs update\nFatal error, aborting.\n: no such user\n\nMarc, any news on when CVS will be available again?\n\nCurrently, I haven't got a working backend here, and I have a queue of\nthings to work on.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n\n",
"msg_date": "Thu, 30 Jul 1998 11:08:58 +0100 (GMT)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "CVS Problem holding up JDBC work"
}
] |
[
{
"msg_contents": "I. First, unlike systems that use locks for concurrency/consistency\ncontrol, we do not need long-term page/row locking.\nAll we need we already have: xmax. When UPDATE/DELETE\nwants to change a row with a valid xmax, it will check\nwhether that transaction is running or not. If it's running then\nthe backend will wait for the xmax commit/abort: we'll add a\ntransaction pseudo-table (just some OID) and acquire an\nexclusive lock on this table for the starting transaction\n(the XID will be placed in LOCKTAG->tupleId). Other backends\nwill acquire a share lock on the xmax XID in this table and\nso will wait for the xmax commit/abort (until the exclusive lock\nis released). \n So, we need short-term locks only when actually \nreading/writing rows from/to shared buffers. Each heap Seq/Index \nscan will allocate BLOCKSZ space at the start, heap\naccess methods will copy valid rows there and release the\nshare page_or_row lock before returning. After that the scan\nwill continue execution of all its joins and subqueries.\n Joins and subqueries can take a long time, and holding\nlocks for a long time seems very bad to me: it means\nlock escalation and increases the likelihood of deadlocks.\n\nII. While we do not need long-term page/row locks, we have to\nimplement some long-term table locks - mostly to give users\nadditional abilities for concurrency control.\n\nI have studied locking in Oracle and it seems quite appropriate for us.\n\nOracle _table_ locking modes (in short):\n\n1. Row Share Table Locks - acquired by\n\n SELECT ... FOR UPDATE (we can just update xmax in selected tuples...)\n LOCK TABLE table IN ROW SHARE MODE;\n\n Conflicts with 5.\n (UPDATE/DELETE/SELECT_FOR_UPDATE will conflict on the same rows).\n\n2. Row Exclusive Table Locks - acquired by\n\n UPDATE, INSERT, DELETE\n LOCK TABLE table IN ROW EXCLUSIVE MODE;\n\n Conflicts with 3, 4, 5.\n (UPDATE/DELETE/SELECT_FOR_UPDATE will conflict on the same rows).\n\n3. Share Table Locks - acquired by\n\n LOCK TABLE table IN SHARE MODE;\n\n Conflicts with 2, 4, 5. This mode is like our current READ lock.\n\n4. Share Row Exclusive Table Locks (the most cool mode name -:))\n - acquired by\n\n LOCK TABLE table IN SHARE ROW EXCLUSIVE MODE;\n\n Conflicts with 2, 3, 4, 5. (Exclusive rights to change the\n table, but allows SELECT_FOR_UPDATE).\n\n5. Exclusive Table Locks - acquired by\n\n LOCK TABLE table IN EXCLUSIVE MODE;\n\n Conflicts with 1, 2, 3, 4, 5. Like our WRITE lock, but\n allows reading (i.e. SELECT without FOR UPDATE).\n\nThat's all -:)) \nThese are long-term locks acquired for the duration of a transaction.\n\nBut I would like to add two internal lock modes due to VACUUM.\n\n6. AccessShareLock - acquired by each DML statement\n (INSERT, UPDATE, DELETE, SELECT)\n for the duration of the statement.\n\n7. AccessExclusiveLock - acquired by VACUUM\n\n: we can't vacuum a relation scanned by some other backend...\n(BTW, having these ones we get rid of the pg_vlock file...)\n\n\nAnd now two more locks for DDL statements \n(DROP/ALTER), for tables and indices only.\n\n8. ObjShareLock - acquired by heap/index open for the duration\n of a transaction.\n\n9. ObjExclusiveLock - acquired by DROP/ALTER.\n\n: SELECT doesn't acquire any type of lock except \n AccessShareLock, but this is a short-term lock - would we\n like to disallow DROP TABLE for a table that was read by some\n running transaction ?\n\nDDL locks don't conflict with DML locks, of course.\n\nAnd also -:)) one special lock:\n\n10. ExtendLock - acquired when a backend needs to extend a relation.\n\n-----\n\nComments?\n\nVadim\nP.S. Shouldn't LLL be renamed to Short Term Locking - STL? -:))\n",
"msg_date": "Fri, 31 Jul 1998 02:28:12 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": true,
"msg_subject": "proposals for LLL, part 2 (locking)"
},
{
"msg_contents": "> But I would like to add two internal lock modes due to VACUUM.\n> \n> 6. AccessShareLock - acquired by each DML statement\n> (INSERT, UPDATE, DELETE, SELECT)\n> for the duration of statement.\n> \n> 7. AccessExclusiveLock - acquired by VACUUM\n> \n> : we can't vacuum a relation scanned by some other backend...\n> (BTW, having these ones we get rid of pg_vlock file...)\n\nOn the other hand, we could use ObjExclusiveLock for vacuum -\nvacuuming relations opened by a running transaction is not\na very useful thing for now...\n\n> \n> And now yet two another locks for DDL statements\n> (DROP/ALTER) for tables and indices only.\n> \n> 8. ObjShareLock - acquired by heap/index open for the duration\n> of transaction.\n> \n> 9. ObjExclusiveLock - acquired by DROP/ALTER.\n\nVadim\n",
"msg_date": "Fri, 31 Jul 1998 10:46:30 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] proposals for LLL, part 2 (locking)"
}
] |