[
{
"msg_contents": "When is 6.6 being released? I'm not sure about the greedy lexer, I don't\nreally know enough to comment, but at first glance, yes fine. The question\nis, though, what are possible operators. Do we limit the user-defined\noperators in PG to only to a specific subset of characters. Perhaps we\nshould lex each operator separately, and then get the compiler to construct\nlogical operators from the physical components that it gets.\n\nMikeA\n\n>> -----Original Message-----\n>> From: Thomas Lockhart [mailto:[email protected]]\n>> Sent: Monday, September 13, 1999 5:33 AM\n>> To: Leon\n>> Cc: Tom Lane; [email protected]\n>> Subject: Re: [HACKERS] Status report: long-query-string changes\n>> \n>> \n>> > Thomas Lockhart should speak up - he seems the only person who\n>> > has objections yet. If the proposed thing is to be \n>> declined, something\n>> > has to be applied instead in respect to lexer reject feature and\n>> > accompanying size limits, as well as grammar inconsistency.\n>> \n>> Hmm. I'd suggest that we go with the \"greedy lexer\" solution, which\n>> continues to gobble characters which *could* be an operator until\n>> other characters or whitespace are encountered.\n>> \n>> I don't recall any compelling cases for which this would be an\n>> inadequate solution, and we have plenty of time until v6.6 \n>> is released\n>> to discover problems and work out alternatives.\n>> \n>> Sorry for slowing things up; but fwiw I *did* think about it \n>> some more\n>> ;)\n>> \n>> - Thomas\n>> \n>> -- \n>> Thomas Lockhart\t\t\t\t\n>> [email protected]\n>> South Pasadena, California\n>> \n>> ************\n>> \n",
"msg_date": "Mon, 13 Sep 1999 10:16:54 +0200",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Status report: long-query-string changes"
},
{
"msg_contents": "\"Ansley, Michael\" <[email protected]> writes:\n> When is 6.6 being released?\n\nSchedule? You want a schedule???\n\nSeriously, I'd have to guess at least three months off. Vadim wants to\ndo transaction logging, I've got a lot of half-baked optimizer work to\nfinish, and I dunno what anyone else has up their sleeve.\n\nThe goal used to be a major release every three months, but we haven't\nmet that in some time. And, since it seems like we are now putting\nout major releases in order to do significant upgrades and not just\nincremental stability improvements, I kinda think that a slower cycle\n(six-month intervals, say) might be a more useful goal at this stage.\nHas the core group thought about this issue lately?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 Sep 1999 10:08:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Status report: long-query-string changes "
},
{
"msg_contents": "> The goal used to be a major release every three months, but we haven't\n> met that in some time. And, since it seems like we are now putting\n> out major releases in order to do significant upgrades and not just\n> incremental stability improvements, I kinda think that a slower cycle\n> (six-month intervals, say) might be a more useful goal at this stage.\n> Has the core group thought about this issue lately?\n\nI got a good laugh on this one. That we actually planned ahead... :-)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 13 Sep 1999 11:16:25 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Status report: long-query-string changes"
}
]
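A minimal SQL sketch of the greedy-lexer behavior Thomas describes above; the operator spelling is illustrative, assuming no user-defined "=-" operator exists:

    -- A greedy lexer keeps consuming operator characters, so "=-" below
    -- lexes as one operator token, and the query fails unless an "=-"
    -- operator has actually been defined:
    SELECT 1 =- 1;
    -- Whitespace breaks the run apart: "=" applied to the constant -1:
    SELECT 1 = - 1;

The alternative MikeA suggests would instead lex each operator character separately and let the grammar reassemble logical operators from the pieces.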
[
{
"msg_contents": "Hi im surfing the net to try and find someone who can help me. A friend of\nmine told me that there are some people who know how to get sites listed at\nthe top of the search engines. I run an adult website am willing to pay\nsomeone who can do this.\n\n\n\n\n",
"msg_date": "Tue, 14 Sep 1999 00:10:37 +1000",
"msg_from": "\"John Henry\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "- WANTED"
}
]
[
{
"msg_contents": "Can I suggest that defined targets are set up for major releases (if they\naren't already). I don't think that major releases need to happen on a\nregular cycle. That's for patch releases. Having three months or so's\nworth of patches in a point release is useful, but I only want to upgrade\n(as opposed to patch) a production environment when it's going to buy me a\nwell-defined set of new functions, e.g.: MVCC, unlimited row length, etc.,\netc. So if we don't have a major release for twelve or fourteen months, so\nwhat. Besides, for anybody running a production environment, it could take\na couple of months worth of inhouse testing before they can make the move\nanyway. When moving from Oracle 7.3 to 8.0, our system will go through 6-9\nmonths worth of strenuous testing.\n\nAre the releases currently time based, or function based, or a little bit of\nboth?\n\nMikeA\n\n>> -----Original Message-----\n>> From: Tom Lane [mailto:[email protected]]\n>> Sent: Monday, September 13, 1999 4:09 PM\n>> To: Ansley, Michael\n>> Cc: [email protected]\n>> Subject: Re: [HACKERS] Status report: long-query-string changes \n>> \n>> \n>> \"Ansley, Michael\" <[email protected]> writes:\n>> > When is 6.6 being released?\n>> \n>> Schedule? You want a schedule???\n>> \n>> Seriously, I'd have to guess at least three months off. \n>> Vadim wants to\n>> do transaction logging, I've got a lot of half-baked \n>> optimizer work to\n>> finish, and I dunno what anyone else has up their sleeve.\n>> \n>> The goal used to be a major release every three months, but \n>> we haven't\n>> met that in some time. And, since it seems like we are now putting\n>> out major releases in order to do significant upgrades and not just\n>> incremental stability improvements, I kinda think that a slower cycle\n>> (six-month intervals, say) might be a more useful goal at this stage.\n>> Has the core group thought about this issue lately?\n>> \n>> \t\t\tregards, tom lane\n>> \n",
"msg_date": "Mon, 13 Sep 1999 17:15:50 +0200",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Status report: long-query-string changes "
},
{
"msg_contents": "> Are the releases currently time based, or function based,\n> or a little bit of both?\n\nA bit of both. No new functionality would imply no new full release,\nbut that hasn't been a problem for the last three years ;)\n\nimho, Tom Lane's query optimizer project and my in-progress join\nsyntax work would be enough to justify a new release after a couple\nmore months (we try to have a one month beta cycle to squash bugs and\nto ensure testing across platforms). Of course, there are and will be\nother new features and internal changes at least as large as the two I\nmentioned which could justify the next release also.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Mon, 13 Sep 1999 15:50:03 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Status report: long-query-string changes"
}
]
[
{
"msg_contents": ">> > The goal used to be a major release every three months, but we haven't\n>> > met that in some time. And, since it seems like we are now putting\n>> > out major releases in order to do significant upgrades and not just\n>> > incremental stability improvements, I kinda think that a slower cycle\n>> > (six-month intervals, say) might be a more useful goal at this stage.\n>> > Has the core group thought about this issue lately?\n>> \n>> I got a good laugh on this one. That we actually planned ahead... :-)\n\nMaybe the core team should take a look at the TODO list, and split it over\nthe next couple of release cycles. Then you can just say, when a and b and\nc have been achieved, and are stable, then we release 6.x\nThis doesn't include bugs. Bugs must still be fixed and release in the\npoint releases, which should be every, say, three months.\nAs new stuff gets added to the TODO list (not bugs), it gets shuffled into\nthe release cycle. This isn't hard to manage once the first one has been\ndone, and you make a policy of only planning four or so releases ahead.\nThen ou can post this plan on the web site, so that people know what stuff\nis being worked on. Of course, CVS management becomes an issue, because if\nsomeone feels like working on something that is not due for two releases,\ndoes it go into the current source tree? If necessary, you can open up\nbranches for each planned release, and people can check out whichever one\nthey feel like working on. Of course, merging bug fixes becomes more of an\nissue. Actually the more I think about it, the more complex it becomes, but\nif the CVS management is not really an issue, then this is a possible\napproach. Otherwise, we'll have to think of something else.\nOf course, the core team are responsible for merging changes into the CVS\ntree, so they could just implement a policy of only adding new functionality\nthat is required for the current release.\nIf somebody desperately wants something added, and can convince the core\nteam to include it in the current release cycle, then the code can be added\nimmediately (assuming that somebody has actually done). Alternatively, they\nmanage their own source tree, until such time as the code gets included.\n\nMikeA\n",
"msg_date": "Mon, 13 Sep 1999 17:46:48 +0200",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Status report: long-query-string changes"
},
{
"msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> >> > The goal used to be a major release every three months, but we haven't\n> >> > met that in some time. And, since it seems like we are now putting\n> >> > out major releases in order to do significant upgrades and not just\n> >> > incremental stability improvements, I kinda think that a slower cycle\n> >> > (six-month intervals, say) might be a more useful goal at this stage.\n> >> > Has the core group thought about this issue lately?\n> >> \n> >> I got a good laugh on this one. That we actually planned ahead... :-)\n> \n> Maybe the core team should take a look at the TODO list, and split it over\n> the next couple of release cycles. Then you can just say, when a and b and\n> c have been achieved, and are stable, then we release 6.x\n\nI would be nice if we could do that, but people work as they have time\nand interest in certain areas. Also, things get fixed as people find\nproblems and we become more capable with the source code.\n\nHard to plan any of that.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 13 Sep 1999 12:30:58 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Status report: long-query-string changes"
},
{
"msg_contents": "> > Maybe the core team should take a look at the TODO list, and split it over\n> > the next couple of release cycles. Then you can just say, when a and b and\n> > c have been achieved, and are stable, then we release 6.x\n> I would be nice if we could do that, but people work as they have time\n> and interest in certain areas. Also, things get fixed as people find\n> problems and we become more capable with the source code.\n\nI used to be more certain that at least a \"notice of intent\" for\nfuture development would help the project. But very little of the\nspecific improvements and fixes over the last few releases were ones\nwe would have predicted would happen, and we certainly would not have\ngotten the order right.\n\nAs Bruce points out, things just seem to happen. And one can't plan\nfor, for example, Tom Lane getting bit by the bug and becoming a major\ncontributor over the last few months. We've gotten very far (farther\nthan I would have imagined) by letting others come up with ideas for\nimprovements. The core group ain't as smart as you might think ;) \n\nThough it drove me nuts earlier, resisting the temptation to cast into\nconcrete a short- or medium-range plan has been a real plus for the\nproject as a whole. We don't very often reject ideas which pass the\ndiscussion phase, and people know that their ideas will make it into\nthe release cycle asap.\n\notoh, you might consider your suggestion as a \"docs project\", rather\nthan firm planning, and one could put some time into taking Bruce's\nToDo list, sorting it into topics, and writing up a more verbose\ndescription for some of the topic areas. Just an idea...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 14 Sep 1999 02:11:52 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Status report: long-query-string changes"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> Though it drove me nuts earlier, resisting the temptation to cast into\n> concrete a short- or medium-range plan has been a real plus for the\n> project as a whole.\n\nThe facts of the matter are that contributors work on the problems that\nthey find interesting, and/or the things that are getting in the way of\ntheir own use of Postgres at the moment. If the core team tried to tell\npeople what to work on, less would get contributed, and that would\nbenefit no one.\n\nWhen 6.5 was released, I tried to stir up a little discussion about the\nmajor things to work on for 6.6, and couldn't even get any consensus\non a plan for *one* revision. So I think a longer-term plan would be\nan exercise in wishful thinking. Things will get done when someone\nsteps up to the plate and does them.\n\n> otoh, you might consider your suggestion as a \"docs project\", rather\n> than firm planning, and one could put some time into taking Bruce's\n> ToDo list, sorting it into topics, and writing up a more verbose\n> description for some of the topic areas. Just an idea...\n\nIndeed, the TODO list is awfully bare-bones; many of the entries don't\nconvey much information to someone who's not already familiar with the\nissue. Something more fleshed-out would be a useful project.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 14 Sep 1999 11:21:00 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Status report: long-query-string changes "
}
]
[
{
"msg_contents": "\nHow about nesetd or/and named transactions?\nIs it in plans for nearest future?\n\n---\nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n* There will come soft rains ...\n",
"msg_date": "Mon, 13 Sep 1999 21:43:14 +0400 (MSD)",
"msg_from": "Dmitry Samersoff <[email protected]>",
"msg_from_op": true,
"msg_subject": "Nested transactions"
}
]
[
{
"msg_contents": "Hi all,\n \n I have been working with user defined types and user defined c\nfunctions. One problem that I have encountered with the function\nmanager is that it does not allow the user to define type conversion\nfunctions that convert between user types. For instance if mytype1,\nmytype2, and mytype3 are three Postgresql user types, and if I wish to\ndefine Postgresql conversion functions like\n\nCREATE FUNCTION mytype3 ( mytype2 )\n RETURNS mytype3\n AS 'mytypes.so'\n LANGUAGE 'C'\n\nCREATE FUNCTION mytype3 ( mytype1 )\n RETURNS mytype3\n AS 'mytypes.so'\n LANGUAGE 'C'\n\nI run into problems, because the Postgresql dynamic loader would look\nfor a single link symbol, mytype3, for both pieces of object code. If\nI just change the name of one of the Postgresql functions (to make the\nsymbols distinct), the automatic type conversion that Postgresql uses,\nfor example, when matching operators to arguments no longer finds the\ntype conversion function.\n\nThe solution that I propose, and have implemented in the attatched\npatch extends the CREATE FUNCTION syntax as follows. In the first case\nabove I use the link symbol mytype2_to_mytype3 for the link object\nthat implements the first conversion function, and define the\nPostgresql operator with the following syntax\n\nCREATE FUNCTION mytype3 ( mytype2 )\n RETURNS mytype3\n AS 'mytypes.so', 'mytype2_to_mytype3'\n LANGUAGE 'C'\n\nThe syntax for the AS clause, which was 'AS <link-file>' becomes \n \n\tAS <link_file>[, <link_name>]\n\nSpecification of the link_name is optional, and not needed if the link\nname is the same as the Postgresql function name.\n\nThe patch includes changes to the parser to include the altered\nsyntax, changes to the ProcedureStmt node in nodes/parsenodes.h,\nchanges to commands/define.c to handle the extra information in the AS\nclause, and changes to utils/fmgr/dfmgr.c that alter the way that the\ndynamic loader figures out what link symbol to use. I store the\nstring for the link symbol in the prosrc text attribute of the pg_proc\ntable which is currently unused in rows that reference dynamically\nloaded\nfunctions.\n\n\nBernie Frankpitt",
"msg_date": "Mon, 13 Sep 1999 20:50:08 +0000",
"msg_from": "Bernard Frankpitt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Patch for user-defined C-language functions"
},
{
"msg_contents": "Bernard Frankpitt <[email protected]> writes:\n> The solution that I propose, and have implemented in the attatched\n> patch extends the CREATE FUNCTION syntax as follows. In the first case\n> above I use the link symbol mytype2_to_mytype3 for the link object\n> that implements the first conversion function, and define the\n> Postgresql operator with the following syntax\n> CREATE FUNCTION mytype3 ( mytype2 )\n> RETURNS mytype3\n> AS 'mytypes.so', 'mytype2_to_mytype3'\n> LANGUAGE 'C'\n> The syntax for the AS clause, which was 'AS <link-file>' becomes \n> \tAS <link_file>[, <link_name>]\n> Specification of the link_name is optional, and not needed if the link\n> name is the same as the Postgresql function name.\n\n> I store the string for the link symbol in the prosrc text attribute of\n> the pg_proc table which is currently unused in rows that reference\n> dynamically loaded functions.\n\nSounds like a good plan to me. I'll be glad to check this over and\ncommit it into 6.6 (unless there are objections?) ... but could I\ntrouble you for documentation diffs as well? At the very least,\nthe text discussion of CREATE FUNCTION, the reference page entry,\nand the online help in psql need to reflect this addition.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 Sep 1999 20:19:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Patch for user-defined C-language functions "
},
{
"msg_contents": "\nTom, where are we on this?\n\n> Bernard Frankpitt <[email protected]> writes:\n> > The solution that I propose, and have implemented in the attatched\n> > patch extends the CREATE FUNCTION syntax as follows. In the first case\n> > above I use the link symbol mytype2_to_mytype3 for the link object\n> > that implements the first conversion function, and define the\n> > Postgresql operator with the following syntax\n> > CREATE FUNCTION mytype3 ( mytype2 )\n> > RETURNS mytype3\n> > AS 'mytypes.so', 'mytype2_to_mytype3'\n> > LANGUAGE 'C'\n> > The syntax for the AS clause, which was 'AS <link-file>' becomes \n> > \tAS <link_file>[, <link_name>]\n> > Specification of the link_name is optional, and not needed if the link\n> > name is the same as the Postgresql function name.\n> \n> > I store the string for the link symbol in the prosrc text attribute of\n> > the pg_proc table which is currently unused in rows that reference\n> > dynamically loaded functions.\n> \n> Sounds like a good plan to me. I'll be glad to check this over and\n> commit it into 6.6 (unless there are objections?) ... but could I\n> trouble you for documentation diffs as well? At the very least,\n> the text discussion of CREATE FUNCTION, the reference page entry,\n> and the online help in psql need to reflect this addition.\n> \n> \t\t\tregards, tom lane\n> \n> ************\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 28 Sep 1999 00:11:50 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Patch for user-defined C-language functions"
},
{
"msg_contents": "\nApplied.\n\n\n> Hi all,\n> \n> I have been working with user defined types and user defined c\n> functions. One problem that I have encountered with the function\n> manager is that it does not allow the user to define type conversion\n> functions that convert between user types. For instance if mytype1,\n> mytype2, and mytype3 are three Postgresql user types, and if I wish to\n> define Postgresql conversion functions like\n> \n> CREATE FUNCTION mytype3 ( mytype2 )\n> RETURNS mytype3\n> AS 'mytypes.so'\n> LANGUAGE 'C'\n> \n> CREATE FUNCTION mytype3 ( mytype1 )\n> RETURNS mytype3\n> AS 'mytypes.so'\n> LANGUAGE 'C'\n> \n> I run into problems, because the Postgresql dynamic loader would look\n> for a single link symbol, mytype3, for both pieces of object code. If\n> I just change the name of one of the Postgresql functions (to make the\n> symbols distinct), the automatic type conversion that Postgresql uses,\n> for example, when matching operators to arguments no longer finds the\n> type conversion function.\n> \n> The solution that I propose, and have implemented in the attatched\n> patch extends the CREATE FUNCTION syntax as follows. In the first case\n> above I use the link symbol mytype2_to_mytype3 for the link object\n> that implements the first conversion function, and define the\n> Postgresql operator with the following syntax\n> \n> CREATE FUNCTION mytype3 ( mytype2 )\n> RETURNS mytype3\n> AS 'mytypes.so', 'mytype2_to_mytype3'\n> LANGUAGE 'C'\n> \n> The syntax for the AS clause, which was 'AS <link-file>' becomes \n> \n> \tAS <link_file>[, <link_name>]\n> \n> Specification of the link_name is optional, and not needed if the link\n> name is the same as the Postgresql function name.\n> \n> The patch includes changes to the parser to include the altered\n> syntax, changes to the ProcedureStmt node in nodes/parsenodes.h,\n> changes to commands/define.c to handle the extra information in the AS\n> clause, and changes to utils/fmgr/dfmgr.c that alter the way that the\n> dynamic loader figures out what link symbol to use. I store the\n> string for the link symbol in the prosrc text attribute of the pg_proc\n> table which is currently unused in rows that reference dynamically\n> loaded\n> functions.\n> \n> \n> Bernie Frankpitt\n\n[Attachment, skipping...]\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 28 Sep 1999 00:32:14 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Patch for user-defined C-language functions"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Tom, where are we on this?\n\nIt's in my to-do queue. I have to touch the same files anyway in order\nto make the world safe for constant-folding, because right now CREATE\nFUNCTION only lets you specify the ISCACHABLE flag for a C-language\nroutine. Need to open that up for all languages, and document it.\n\nI was waiting for Thomas, because I had understood some of his remarks\nto mean that he was busy doing a major rearrangement of code in the\nparser, but he told me last night to go ahead and commit these changes.\nSo it should get done shortly.\n\n\t\t\tregards, tom lane\n\n\n>> Bernard Frankpitt <[email protected]> writes:\n>>>> The solution that I propose, and have implemented in the attatched\n>>>> patch extends the CREATE FUNCTION syntax as follows.\n",
"msg_date": "Tue, 28 Sep 1999 09:45:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Patch for user-defined C-language functions "
}
]
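As a concrete illustration of the patch just applied, a hedged sketch of how both conflicting conversion functions from Bernie's original message would be declared, each bound to a distinct link symbol (the mytype1_to_mytype3 symbol name is an assumption following the naming pattern in the message):

    CREATE FUNCTION mytype3 ( mytype1 )
        RETURNS mytype3
        AS 'mytypes.so', 'mytype1_to_mytype3'
        LANGUAGE 'C';

    CREATE FUNCTION mytype3 ( mytype2 )
        RETURNS mytype3
        AS 'mytypes.so', 'mytype2_to_mytype3'
        LANGUAGE 'C';

The overloaded SQL name stays the same, so automatic type conversion still finds both functions, while the dynamic loader sees two distinct symbols.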
[
{
"msg_contents": "I am trying to make a table with a class called \"isolation\". For some\nreason, I am getting a parser error:\n\n=> create table cell ( isolation text );\nERROR: parser: parse error at or near \"isolation\"\n\nIf I just take off the \"n\", I get:\n\n=> create table cell ( isolatio text );\nCREATE\n\nThis table had no problems previously; has the word isolation been used\nsomewhere else as a SQL word? I can't think of why else I am having\nproblems with the table (the syntax appears to be correct).\n\nThanks.\n-Tony\n\n",
"msg_date": "Mon, 13 Sep 1999 14:36:48 -0700",
"msg_from": "\"G. Anthony Reina\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Is \"isolation\" a restricted word?"
},
{
"msg_contents": "On Mon, Sep 13, 1999 at 02:36:48PM -0700, G. Anthony Reina wrote:\n> I am trying to make a table with a class called \"isolation\". For some\n> reason, I am getting a parser error:\n> \n> => create table cell ( isolation text );\n> ERROR: parser: parse error at or near \"isolation\"\n> \n> If I just take off the \"n\", I get:\n> \n> => create table cell ( isolatio text );\n> CREATE\n> \n> This table had no problems previously; has the word isolation been used\n> somewhere else as a SQL word? I can't think of why else I am having\n> problems with the table (the syntax appears to be correct).\n\nYup - here it is in pgsql/src/backend/parser/keywords.c:\n\n...\n {\"is\", IS},\n {\"isnull\", ISNULL},\n {\"isolation\", ISOLATION},\n {\"join\", JOIN},\n {\"key\", KEY},\n {\"lancompiler\", LANCOMPILER},\n...\n\nThis table should in fact be the definitive guide, since it's the array\nthat the parser uses ;-)\n\nAnd it's mentioned in the HISTORY file as part of the MVCC\nchanges. They're a couple of these 'gotcha' words that are part of\nthe SQL standard, but hadn't yet been implemented before 6.5 that have\ntriped up people.\n\nIf you have to keep the table name, quote it:\n\ncreate table cell ( \"isolation\" text );\n\nBut then you'll always have to quote it. I'm stuck with a bunch of\nMiXedCaSE tables that I have to do that with.\n\nRoss\n\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Mon, 13 Sep 1999 16:46:26 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Is \"isolation\" a restricted word?"
},
{
"msg_contents": "\"Ross J. Reedstrom\" wrote:\n\n> Yup - here it is in pgsql/src/backend/parser/keywords.c:\n\nThanks Reed. I'll just change it since it's restricted.\n\n-Tony\n\n\n",
"msg_date": "Mon, 13 Sep 1999 15:03:02 -0700",
"msg_from": "\"G. Anthony Reina\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Is \"isolation\" a restricted word?"
},
{
"msg_contents": "> > reason, I am getting a parser error:\n> > => create table cell ( isolation text );\n> > ERROR: parser: parse error at or near \"isolation\"\n> > This table had no problems previously; has the word isolation been used\n> > somewhere else as a SQL word? I can't think of why else I am having\n> > problems with the table (the syntax appears to be correct).\n> Yup - here it is in pgsql/src/backend/parser/keywords.c:\n> This table should in fact be the definitive guide, since it's the array\n> that the parser uses ;-)\n\nIt is a definitive guide for keywords, but is a superset of keywords\nwhich are allowed as column names. In this case, ISOLATION was added\nto the syntax but was not added to gram.y as an allowed column id.\nEdit src/backend/parser/gram.y, look for the line starting with\n\"ColId:\", and add ISOLATION to the already long list of keywords which\nfollows.\n\nI'll make the change for v6.6; it could perhaps be used for v6.5.3\nalso, if there is one.\n\n> And it's mentioned in the HISTORY file as part of the MVCC\n> changes. They're a couple of these 'gotcha' words that are part of\n> the SQL standard, but hadn't yet been implemented before 6.5 that have\n> tripped up people.\n\nKeep reporting them, because in some cases we can allow them even\nthough they may be a reserved word in SQL92. But that can lead to\nportability problems, not that I can imagine anyone moving away from\nPostgres ;)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 14 Sep 1999 02:26:19 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Is \"isolation\" a restricted word?"
},
{
"msg_contents": "> > > This table had no problems previously; has the word isolation been used\n> > > somewhere else as a SQL word?\n> > And it's mentioned in the HISTORY file as part of the MVCC\n> > changes. They're a couple of these 'gotcha' words that are part of\n> > the SQL standard, but hadn't yet been implemented before 6.5 that have\n> > tripped up people.\n\nbtw, it *is* documented as an SQL92 reserved word and a Postgres\nreserved word in the big docs in the chapter on \"Syntax\".\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 14 Sep 1999 02:59:01 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Is \"isolation\" a restricted word?"
}
]
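A short sketch of the quoting workaround Ross describes, reflecting the v6.5-era behavior discussed in this thread:

    CREATE TABLE cell ( "isolation" text );  -- quoted identifier: accepted
    SELECT "isolation" FROM cell;            -- works, quotes required
    SELECT isolation FROM cell;              -- parse error on the keyword

As Ross notes, once the column is created this way it must be quoted in every later reference.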
[
{
"msg_contents": "> Postgresql operator with the following syntax\n> \n> CREATE FUNCTION mytype3 ( mytype2 )\n> RETURNS mytype3\n> AS 'mytypes.so', 'mytype2_to_mytype3'\n> LANGUAGE 'C'\n> \n> The syntax for the AS clause, which was 'AS <link-file>' becomes \n> \n> \tAS <link_file>[, <link_name>]\n\nSounds great !\n\nBut I think the intuitive Syntax in SQL would use ():\n\nCREATE FUNCTION mytype3 ( mytype2 )\n RETURNS mytype3\n AS 'mytypes.so(mytype2_to_mytype3)'\n LANGUAGE 'C'\n\nSyntax:\n\tAS <link_file>[(symbol_name)]\n\nThis is also how Illustra and now Informix does it.\n(Instead of AS they say EXTERNAL NAME)\n\nAndreas\n",
"msg_date": "Tue, 14 Sep 1999 09:44:25 +0200",
"msg_from": "Andreas Zeugswetter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Patch for user-defined C-language functions"
},
{
"msg_contents": "Andreas Zeugswetter <[email protected]> writes:\n> But I think the intuitive Syntax in SQL would use ():\n> CREATE FUNCTION mytype3 ( mytype2 )\n> RETURNS mytype3\n> AS 'mytypes.so(mytype2_to_mytype3)'\n> LANGUAGE 'C'\n> Syntax:\n> \tAS <link_file>[(symbol_name)]\n\nI think Bernard had the better solution --- the above presumes that\nfilenames won't ever have parens in them. (Which, admittedly, is a\nbad idea under most Unix shells --- but that doesn't mean we should\nperpetuate the problem.) Also, I'd rather see us keep the platform-\ndependent \".so\" extension at the end of its string, where it's easy\nto spot and fix when needed.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 14 Sep 1999 10:53:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Patch for user-defined C-language functions "
},
{
"msg_contents": "Andreas Zeugswetter wrote:\n> \n> But I think the intuitive Syntax in SQL would use ():\n> \n> CREATE FUNCTION mytype3 ( mytype2 )\n> RETURNS mytype3\n> AS 'mytypes.so(mytype2_to_mytype3)'\n> LANGUAGE 'C'\n> \n> Syntax:\n> AS <link_file>[(symbol_name)]\n> \n> This is also how Illustra and now Informix does it.\n> (Instead of AS they say EXTERNAL NAME)\n> \n\nThe syntax \n\n\tAS <link_file>[(symbol_name)] \n\nwould be easy to implement provided I could write your example as\n\n CREATE FUNCTION mytype3 ( mytype2 )\n RETURNS mytype3\n AS 'mytypes.so'('mytype2_to_mytype3')\n LANGUAGE 'C'\n\nThat way link_file and symbol_name both look like string tokens to \nthe parser. If it is implemented the way you write in the example with \n\n\t'mytypes.so(mytype2_to_mytype3)'\n\nThen the parser sees the arguement of the AS clause as a single\nstring token which would have to be parsed separately. Also, there is\nsome ambiguity in this form as to whether the string\n\n\t'mytypes.so(mytype2_to_mytype3)'\n\nis a single filename, or a filename and a link symbol\n\nBernie\n",
"msg_date": "Tue, 14 Sep 1999 15:47:14 +0000",
"msg_from": "Bernard Frankpitt <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Patch for user-defined C-language functions"
}
]
[
{
"msg_contents": ">> BTW, while eyeing the scan.l again, I noticed that C - style comments\n>> can also contain bugs, but I am not completely sure.\nWhat's your theory.\n",
"msg_date": "Tue, 14 Sep 1999 09:57:29 +0200",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Status report: long-query-string changes"
}
]
[
{
"msg_contents": "I have the requirement for ISO dates with European format and would\nlike to change backend/utils/adt/dt.c:EncodeDateTime() and EncodeDateOnly()\nto effect this if this is a general requirement.\n\nPlease advise.\n-- \n--------\nRegards\nTheo\n",
"msg_date": "Tue, 14 Sep 1999 13:22:28 +0200",
"msg_from": "Theo Kramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "ISO dates with European Format"
},
{
"msg_contents": "> I have the requirement for ISO dates with European format and would\n> like to change backend/utils/adt/dt.c:EncodeDateTime() and EncodeDateOnly()\n> to effect this if this is a general requirement.\n\nWhat is \"ISO dates with European format\"? Is it a combination of ISO\ndate output with European-style input (which I think can be done\nalready), or something else? afaik ISO-8601 is specific about\nsuggested formats, and makes no distinction between European and other\nconventions. Can you give examples? TIA\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 14 Sep 1999 13:40:12 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ISO dates with European Format"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> \n> > I have the requirement for ISO dates with European format and would\n> > like to change backend/utils/adt/dt.c:EncodeDateTime() and EncodeDateOnly()\n> > to effect this if this is a general requirement.\n> \n> What is \"ISO dates with European format\"? Is it a combination of ISO\n> date output with European-style input (which I think can be done\n> already), or something else? afaik ISO-8601 is specific about\n> suggested formats, and makes no distinction between European and other\n> conventions. Can you give examples? TIA\n\nSure -\n\n coza=> set datestyle to 'SQL,European';\n SET VARIABLE\n coza=> select registrationdate from accounts where domain = 'flame.co.za';\n registrationdate \n ---------------------------\n 02/06/1997 00:00:00.00 SAST\n (1 row)\n\nThe above result is correct for dd/mm/yyyy styles\n\n coza=> set datestyle to 'ISO,European';\n SET VARIABLE\n coza=> select registrationdate from accounts where domain = 'flame.co.za'; \n registrationdate \n ----------------------\n 1997-06-02 00:00:00+02\n (1 row)\n\nInstead of 02-06-1997 00:00:00+02\n\nIf ISO is specific regarding formatting of days, month and year then I feel that\nthe \"set datestyle to 'ISO,European'\" should give an error. However, I would\npersonally\nprefer it to format the result as \"dd-mm-yyyy\".\n\n--------\nRegards\nTheo\n",
"msg_date": "Tue, 14 Sep 1999 16:03:17 +0200",
"msg_from": "Theo Kramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] ISO dates with European Format"
},
{
"msg_contents": "> > What is \"ISO dates with European format\"? Is it a combination of ISO\n> > date output with European-style input (which I think can be done\n> > already), or something else? afaik ISO-8601 is specific about\n> > suggested formats, and makes no distinction between European and other\n> > conventions. Can you give examples? TIA\n> coza=> set datestyle to 'SQL,European';\n> ...\n> 02/06/1997 00:00:00.00 SAST\n> The above result is correct for dd/mm/yyyy styles\n> coza=> set datestyle to 'ISO,European';\n> ...\n> 1997-06-02 00:00:00+02\n> Instead of 02-06-1997 00:00:00+02\n> If ISO is specific regarding formatting of days, month and year then I feel that\n> the \"set datestyle to 'ISO,European'\" should give an error. However, I would\n> personally prefer it to format the result as \"dd-mm-yyyy\".\n\nAh! The yyyy-mm-dd order is specified by ISO-8601. wrt Postgres, you\nare actually wanting European format with \"-\" as a date delimiter,\nrather than the \"/\".\n\nAs an aside, \"ISO,European\" does actually have meaning, since setting\nthe DateStyle to ISO only fully constrains the output format, but\n\"European\" helps the date parser resolve free-form date input\nambiguities by assuming European, rather than US, conventions for\nordering of input fields.\n\nBut back to the delimiter...\n\nDate conventions between and among countries vary. The formats we\ncurrently have each meet the conventions of multiple countries (not\ncertain about \"German\", since apparently other Germanic countries do\nnot all share the same convention). There are (at least) two things we\ncould do:\n\n1) Parameterize the delimiter field using a #define constant you can\nredefine in Makefile.global, Makefile.custom, or configure. Apparently\nSouth Africa uses the \"-\" convention for date delimiters? Or is this a\nmore local or project-specific preference??\n\n2) Parameterize the delimiter as a global character variable, which\ncan be manipulated by something like \"set DateDelimiter = '-'\". This\nis a little nervous-making for me, since you (and every database user)\nwould have the ability to modify the date format to something that\nPostgres can not read. So we would have to modify the input routines\nto accept an arbitrary delimiter, as well as the conventional\ndelimiters (both \"-\" and \"/\") already recognized. I suppose we could\nput constraints on the \"set DateDelimiter\" values to help protect from\nthis...\n\nYou could also consider massaging the date format as it is displayed\nby your app, since that would give you full control over the\nappearance.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 14 Sep 1999 14:38:09 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ISO dates with European Format"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> Ah! The yyyy-mm-dd order is specified by ISO-8601. wrt Postgres, you\n> are actually wanting European format with \"-\" as a date delimiter,\n> rather than the \"/\".\n\nYuck on ISO and yup to the rest :-).\n \n> As an aside, \"ISO,European\" does actually have meaning, since setting\n> the DateStyle to ISO only fully constrains the output format, but\n> \"European\" helps the date parser resolve free-form date input\n> ambiguities by assuming European, rather than US, conventions for\n> ordering of input fields.\n> \n> But back to the delimiter...\n> \n> Date conventions between and among countries vary. The formats we\n> currently have each meet the conventions of multiple countries (not\n> certain about \"German\", since apparently other Germanic countries do\n> not all share the same convention). There are (at least) two things we\n> could do:\n> \n> 1) Parameterize the delimiter field using a #define constant you can\n> redefine in Makefile.global, Makefile.custom, or configure. Apparently\n> South Africa uses the \"-\" convention for date delimiters? Or is this a\n> more local or project-specific preference??\n\nPretty much project related regarding the delimiter. We tend to use \ndd/mm/yyyy locally. I would prefer to not create a specific postgres.\n\n> 2) Parameterize the delimiter as a global character variable, which\n> can be manipulated by something like \"set DateDelimiter = '-'\". This\n> is a little nervous-making for me, since you (and every database user)\n> would have the ability to modify the date format to something that\n> Postgres can not read. So we would have to modify the input routines\n> to accept an arbitrary delimiter, as well as the conventional\n> delimiters (both \"-\" and \"/\") already recognized. I suppose we could\n> put constraints on the \"set DateDelimiter\" values to help protect from\n> this...\n\nHmmm, a product I helped develop uses two mechanisms for specifying\ndate style. First the format and second the picture. The format\nallows swapping of sub fields within a date and a picture to specify\nthe output. Eg. dd/mm/yyyy as a format with a picture of 99/99/9999 or\nmm/dd/yyyy and 99/99/9999 or dd-mmm-yyyy and 99-xxx-9999. This format\nallows total control over dates (at least in Western countries) ... I\nam happy to donate the code... Windows (int the regional settings) follows\na similar approach.\n \n> You could also consider massaging the date format as it is displayed\n> by your app, since that would give you full control over the\n> appearance.\n\nTrue :-). Thanks for the responses.\n--------\nRegards\nTheo\n",
"msg_date": "Tue, 14 Sep 1999 17:28:21 +0200",
"msg_from": "Theo Kramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] ISO dates with European Format"
},
{
"msg_contents": "> Hmmm, a product I helped develop uses two mechanisms for specifying\n> date style. First the format and second the picture. The format\n> allows swapping of sub fields within a date and a picture to specify\n> the output. Eg. dd/mm/yyyy as a format with a picture of 99/99/9999 or\n> mm/dd/yyyy and 99/99/9999 or dd-mmm-yyyy and 99-xxx-9999. This format\n> allows total control over dates (at least in Western countries) ... I\n> am happy to donate the code... Windows (int the regional settings) follows\n> a similar approach.\n\nWell, this sounds interesting even if Windows *does* use the same\ntechnique ;)\n\nCertainly contributing the code could be useful. It could make its way\ninto user contrib code, into special built-in formatting functions, or\npossibly into the backend as the default formatting mechanism. Without\nseeing the code and understanding the tradeoffs I can't predict which\nwould be the most suitable, though in any case user contributed code\nis a great way to test out a new technique.\n\nIf you want, post it raw or package it as user contributed code;\neither way, we'll look at it. TIA\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 14 Sep 1999 15:44:25 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ISO dates with European Format"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> Well, this sounds interesting even if Windows *does* use the same\n> technique ;)\n> \n> Certainly contributing the code could be useful. It could make its way\n> into user contrib code, into special built-in formatting functions, or\n> possibly into the backend as the default formatting mechanism. Without\n> seeing the code and understanding the tradeoffs I can't predict which\n> would be the most suitable, though in any case user contributed code\n> is a great way to test out a new technique.\n> \n> If you want, post it raw or package it as user contributed code;\n> either way, we'll look at it. TIA\n\nI'll rip it out, repackage it for postgres and send it off within the\nnext week.\n--------\nRegards\nTheo\n",
"msg_date": "Tue, 14 Sep 1999 19:08:51 +0200",
"msg_from": "Theo Kramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] ISO dates with European Format"
}
]
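A purely hypothetical sketch of option (2) from Thomas's message; SET DateDelimiter was only proposed in this thread and never existed, so the second statement illustrates the suggested interface rather than working syntax:

    SET DateStyle TO 'SQL,European';  -- existing: 02/06/1997 00:00:00.00 SAST
    SET DateDelimiter = '-';          -- hypothetical, proposed above only
    SELECT registrationdate FROM accounts WHERE domain = 'flame.co.za';
    -- would render as 02-06-1997 00:00:00+02 under the proposal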
[
{
"msg_contents": "\nHow I can specify explicitly, wich index is need to be used in\nselect query?\n\n\n---\nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n* There will come soft rains ...\n",
"msg_date": "Tue, 14 Sep 1999 16:53:36 +0400 (MSD)",
"msg_from": "Dmitry Samersoff <[email protected]>",
"msg_from_op": true,
"msg_subject": "Explicit direction of index using"
},
{
"msg_contents": "> How I can specify explicitly, wich index is need to be used in\n> select query?\n\nafaik, you can't. Indices are completely decoupled from the query\nlanguage, but are usually present in rdbms' as a db-specific\noptimization.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 14 Sep 1999 13:42:57 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Explicit direction of index using"
}
]
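There is no index-hint syntax in the query language, as Thomas says. As a hedged aside, later PostgreSQL releases grew session-level planner toggles that can discourage a sequential scan, which is the closest indirect lever; the sketch below assumes those later versions (not v6.5 syntax), and the table and column names are placeholders:

    SET enable_seqscan TO off;  -- later-release planner toggle
    EXPLAIN SELECT * FROM mytable WHERE indexed_col = 42;
    SET enable_seqscan TO on;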
[
{
"msg_contents": "Hi,\n\nEach time I try to insert an ascii file with the COPY FROM command, I get the following message:\n\n \"ERROR: COPY command, running in backend with effective uid 501 (that's Postgres), could not open file '/usr/local/.../cltclr001' for reading. Error: Permission not allowed (13).\"\n\nWhat rights do I have to put to process the COPY command inside PSQL.\n\nI have try nearly everything, actual rights: uog+rw even on the directory.\n\n\nWhat's wrong.\n\nStephane FILLON\n\n\n\n\n\n\n\nHi,\n \nEach time I try to insert an ascii file with the \nCOPY FROM command, I get the following message:\n \n \"ERROR: COPY command, running in \nbackend with effective uid 501 (that's Postgres), could not open file \n'/usr/local/.../cltclr001' for reading. Error: Permission not allowed \n(13).\"\n \nWhat rights do I have to put to process the COPY \ncommand inside PSQL.\n \nI have try nearly everything, actual rights: \nuog+rw even on the directory.\n \n \nWhat's wrong.\n \nStephane FILLON",
"msg_date": "Wed, 15 Sep 1999 03:50:09 +1100",
"msg_from": "\"=?iso-8859-1?Q?St=E9phane_FILLON?=\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Permission problem with COPY FROM"
},
{
"msg_contents": "Hi,\n\n\tI've faced that problem too, then I use '\\copy' instread\nof 'copy' because 'copy' command will asked for super user previlege.\nexample\n^^^^^^ -> \\copy '/your location/your filename' to tablename;\n\t\nCheers,\n\nOn Wed, 15 Sep 1999, [iso-8859-1] St�phane FILLON wrote:\n\n> Hi,\n> \n> Each time I try to insert an ascii file with the COPY FROM command, I get the following message:\n> \n> \"ERROR: COPY command, running in backend with effective uid 501 (that's Postgres), could not open file '/usr/local/.../cltclr001' for reading. Error: Permission not allowed (13).\"\n> \n> What rights do I have to put to process the COPY command inside PSQL.\n> \n> I have try nearly everything, actual rights: uog+rw even on the directory.\n> \n> \n> What's wrong.\n> \n> Stephane FILLON\n> \n\n-----------------------------------------\nNuchanach Klinjun\nR&D Project. Internet Thailand\nEmail: [email protected]\n\n",
"msg_date": "Wed, 15 Sep 1999 11:21:48 +0700 (GMT+0700)",
"msg_from": "Nuchanach Klinjun <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Permission problem with COPY FROM"
},
{
"msg_contents": "Nuchanach Klinjun <[email protected]> writes:\n> \tI've faced that problem too, then I use '\\copy' instread\n> of 'copy' because 'copy' command will asked for super user previlege.\n> example\n> ^^^^^^ -> \\copy '/your location/your filename' to tablename;\n\nIt's not that; the error message Stephane quotes is after the\nPostgres superuser-privilege check:\n\n>> \"ERROR: COPY command, running in backend with effective uid 501\n>> (that's Postgres), could not open file '/usr/local/.../cltclr001' for\n>> reading. Error: Permission not allowed (13).\"\n\nThis is a result of the Unix kernel denying read access to the file.\nIt's got to be a matter of not having read rights on the file or not\nhaving lookup (x) rights on one of the directories above it.\n\npsql's \\copy is often a better choice than the regular SQL COPY command,\nthough. It reads or writes the file with the privileges of the user\nrunning psql, rather than those of the Postgres server, which is usually\na Good Thing. Also, if you are contacting a server on a different\nmachine, \\copy works with files in the local filesystem, not the\nserver's filesystem.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 15 Sep 1999 10:00:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Permission problem with COPY FROM "
},
{
"msg_contents": "On Wed, Sep 15, 1999 at 03:50:09AM +1100, Stïż½phane FILLON wrote:\n...\n> \n> \"ERROR: COPY command, running in backend with effective uid 501\n> (that's Postgres), could not open file '/usr/local/.../cltclr001' for\n> reading. Error: Permission not allowed (13).\"\n> \n\nThe problem does not seem to be with the permissions --- it has been\non the list a number of times already. Try using psql's \\copy instead.\n\nAlbert.\n\n\n-- \n\n---------------------------------------------------------------------------\n Post an / Mail to / Skribu al: Albert Reiner <[email protected]>\n---------------------------------------------------------------------------\n",
"msg_date": "Wed, 15 Sep 1999 18:14:10 +0200",
"msg_from": "\"Albert REINER\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] Permission problem with COPY FROM"
}
]
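A minimal sketch contrasting the two commands discussed above (the table name and path are placeholders, not from the report):

    -- Server-side COPY: the file is opened by the backend, so the
    -- postgres uid (501 above) must be able to read it:
    COPY mytable FROM '/tmp/datafile';
    -- psql's \copy: the file is read by the client with the invoking
    -- user's own permissions, then shipped to the server:
    \copy mytable from '/tmp/datafile'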
[
{
"msg_contents": "Hi,\n\nThe UNIQUE constraint doesn't work on a field if I use a DEFAULT clause on a table.\n\nThe following table works with UNIQUE constraint:\n\ncreate table cltclt001(\n tcid int2,\n tcnom text unique\n);\n\nbut this one accept several same tcnom value:\n\ncreate table cltclt001(\n tcid int2 default nextval('cltcls001'),\n tcnom text unique\n);\n\n\nWhat's wrong with my table ?\n\nThanks in advance.\n\nStephane FILLON\n\n\n\n\n\n\n\nHi,\n \nThe UNIQUE constraint doesn't work on a field if \nI use a DEFAULT clause on a table.\n \nThe following table works with UNIQUE \nconstraint:\n \ncreate table cltclt001(\n tcid int2,\n tcnom text unique\n);\n \nbut this one accept several same tcnom \nvalue:\n \ncreate table cltclt001(\n tcid int2 default \nnextval('cltcls001'),\n tcnom text unique\n);\n \n \nWhat's wrong with my table ?\n \nThanks in advance.\n \nStephane FILLON",
"msg_date": "Wed, 15 Sep 1999 05:14:15 +1100",
"msg_from": "\"=?iso-8859-1?Q?St=E9phane_FILLON?=\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "BUG with UNIQUE clause"
},
{
"msg_contents": "> The UNIQUE constraint doesn't work on a field if I use a DEFAULT\n> clause on a table.\n> The following table works with UNIQUE constraint:\n> but this one accept several same tcnom value:\n> create table cltclt001(\n> tcid int2 default nextval('cltcls001'),\n> tcnom text unique\n> );\n> What's wrong with my table ?\n\nNothing. You have stumbled across a bug recently discovered by Mark\nDalphin <[email protected]> in the parser. It was repaired in the\nsource trees 1999-08-15 so will appear in v6.5.2 (any day now) and\nv6.6.\n\npostgres=> create sequence cltcls001;\nCREATE\npostgres=> insert into cltclt001 (tcnom) values ('one');\nINSERT 150559 1\npostgres=> insert into cltclt001 (tcnom) values ('one');\nERROR: Cannot insert a duplicate key into a unique index\n\nI imagine that the repair is posted to the patches or hacker's mailing\nlist; look in the archives around that date and you should be able to\npatch your existing recent system.\n\nGood luck.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 15 Sep 1999 01:52:43 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] BUG with UNIQUE clause"
},
{
"msg_contents": "\"=?iso-8859-1?Q?St=E9phane_FILLON?=\" <[email protected]> writes:\n> The UNIQUE constraint doesn't work on a field if I use a DEFAULT clause\n> on a table.\n\nThis sounds closely related to a fix that Thomas Lockhart just made.\nIIRC the complained-of symptom was that PRIMARY KEY on one column plus\nUNIQUE on another didn't work, but the real problem was that PRIMARY\nKEY implies UNIQUE and the table declaration code was getting confused\nby two different UNIQUE columns in one table. It could be that his fix\naddresses your problem too. Check the pghackers archives for the\nlast couple weeks to find the patch.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 15 Sep 1999 09:46:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] BUG with UNIQUE clause "
}
]
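Putting the pieces of this thread together, a minimal reproduction script, with the outcomes as Thomas describes them for patched and unpatched servers:

    CREATE SEQUENCE cltcls001;
    CREATE TABLE cltclt001 (
        tcid  int2 DEFAULT nextval('cltcls001'),
        tcnom text UNIQUE
    );
    INSERT INTO cltclt001 (tcnom) VALUES ('one');
    INSERT INTO cltclt001 (tcnom) VALUES ('one');
    -- fixed (1999-08-15 and later sources):
    --   ERROR:  Cannot insert a duplicate key into a unique index
    -- buggy parser: the duplicate row is silently accepted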
[
{
"msg_contents": "Hi im surfing the net to try and find someone who can help me. A friend of\nmine told me that there are some people who know how to get sites listed at\nthe top of the search engines. I run an adult website am willing to pay\nsomeone who can do this.\n\n\n\n\n\n\n",
"msg_date": "Wed, 15 Sep 1999 08:32:15 +1000",
"msg_from": "\"John Henry\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "- WANTED"
}
]
[
{
"msg_contents": "Dear Sir,\n\nI had some problem with configured the ODBC. I worked on MS Access97 and\nASP as frontend and use database on Postgres via ODBC. It used to work well\ntil the Postgres Db was upgraded to 6.5.1 version. Now,I use Postodbc\n6.40.0007. ^^^^^^ \n^^^^^^^^^^\nThe error message which I got from psqlodbc.log is\n\nconn=97458576, SQLDriverConnect( in)='DSN=access2; UID=nuchk;\nPWD=araigodai;', fDriverCompletion=0\nDSN info:\nDSN='access2',server='myserver.co.th',port='5432',dbase='mydb',user='myname',passwd='mypass'\n\nreadonly='1',protocol='6.4',showoid='0',fakeoidindex='0',showsystable='0'\n conn_settings=''\n translation_dll='',translation_option=''\nGlobal Options: Version='06.40.0007', fetch=100, socket=4096,\nunknown_sizes=0, max_varchar_size=254, max_longvarchar_size=8190\n disable_optimizer=1, ksqo=1, unique_index=0,\nuse_declarefetch=0\n text_as_longvarchar=1, unknowns_as_longvarchar=0,\nbools_as_char=1\n extra_systable_prefixes='dd_;', conn_settings=''\nconn=97458576, query=' '\nconn=97458576, query='set DateStyle to 'ISO''\nconn=97458576, query='set geqo to 'OFF''\nconn=97458576, query='set ksqo to 'ON''\nconn=97458576, query='select oid from pg_type where typname='lo''\n [ fetched 0 rows ]\nconn=97458576,\nSQLDriverConnect(out)='DSN=access2;DATABASE=access;SERVER=safari.inet.co.th;PORT=5432;UID=nuchk;PWD=araigodai;READONLY=1;PROTOCOL=6.4;FAKEOIDINDEX=0;SHOWOIDCOLUMN=0;ROWVERSIONING=0;SHOWSYSTEMTABLES=0;CONNSETTINGS='\nCONN ERROR: func=SQLGetInfo, desc='', errnum=209, errmsg='Unrecognized key passed to SQLGetInfo.'\n\n ------------------------------------------------------------\n henv=97453632, conn=97458576, status=1, num_stmts=16\n sock=97453648, stmts=97453696, lobj_type=-999\n ---------------- Socket Info -------------------------------\n socket=2948, reverse=0, errornumber=0, errormsg='(null)'\n buffer_in=97464912, buffer_out=97469016\n buffer_filled_in=33, buffer_filled_out=0, buffer_read_in=32\n\nI've tried so many times both re-install the previos version and\npsqlodbc.dll and re-config it again follow the instruction which I\nretrieved from postgres.org website. \n\nI'm always got this error 'Unrecognized key passed to SQLGetInfo.'\nwhat's the key I have to send? please help.\n\nHope to hear from you all soon.\n\nThanx,\nNuch\n\n-----------------------------------------\nNuchanach Klinjun\nR&D Project. Internet Thailand\nEmail: [email protected]\n\n\n\n\n",
"msg_date": "Wed, 15 Sep 1999 11:27:28 +0700 (GMT+0700)",
"msg_from": "Nuchanach Klinjun <[email protected]>",
"msg_from_op": true,
"msg_subject": "problem with SQLGetInfo"
}
]
[
{
"msg_contents": "--- Tom Lane <[email protected]> wrote:\n> Thomas Lockhart <[email protected]>\n> writes:\n> > Though it drove me nuts earlier, resisting the\n> temptation to cast into\n> > concrete a short- or medium-range plan has been a\n> real plus for the\n> > project as a whole.\n> \n> The facts of the matter are that contributors work\n> on the problems that\n> they find interesting, and/or the things that are\n> getting in the way of\n> their own use of Postgres at the moment.\n....\n> Indeed, the TODO list is awfully bare-bones; many of\n> the entries don't\n> convey much information to someone who's not already\n> familiar with the\n> issue. Something more fleshed-out would be a useful\n> project.\n> \n> \t\t\tregards, tom lane\n\n>From someone who lurks in this list to see what's\nupcoming in future releases, I have a couple of \ncomments (which may be politically incorrect):\n\n1. The TODO list shows under ENHANCEMENTS as URGENT as\nthe number one item referential integrity. This is\nsomething we need desperately. And since refint.c \nwith MVCC requires recoding our application (which is \ncomposed of 115 C++ objects -- and those are just the\ndatabase related ones), we've been looking forward\nto integrated referential integrity. Particularly \nsince refint.c is broke for cascading updates (it\nsaves\nthe SPI plan). The TODO list shows Jan as having \nclaimed this item -- perhaps he goes away working like\nmad and comes back with a fantastic feature, like the\nrules system -- but I haven't seen any posts by Jan\nin months.\n\n2. How is it that Tom Lane isn't considered \"core\"?\n\nSorry to stir the pot...but I was just curious,\n\nMike Mascari\n([email protected])\n\n__________________________________________________\nDo You Yahoo!?\nBid and sell for free at http://auctions.yahoo.com\n",
"msg_date": "Tue, 14 Sep 1999 21:54:43 -0700 (PDT)",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Status report: long-query-string changes "
},
{
"msg_contents": "Mike Mascari wrote:\n> \n> 1. The TODO list shows under ENHANCEMENTS as URGENT as\n> the number one item referential integrity. This is\n> something we need desperately. And since refint.c\n> with MVCC requires recoding our application (which is\n> composed of 115 C++ objects -- and those are just the\n> database related ones), we've been looking forward\n> to integrated referential integrity. Particularly\n> since refint.c is broke for cascading updates (it\n> saves the SPI plan). The TODO list shows Jan as having\n> claimed this item -- perhaps he goes away working like\n> mad and comes back with a fantastic feature, like the\n> rules system -- but I haven't seen any posts by Jan\n> in months.\n\nI would like to see something from Jan too...\nMy opinion is that RI _MUST_ be implemented in 6.6.\nThere are 3 ways:\n\n1. Using deferrable rules/statement level triggers.\n2. Using transaction log (to read changes made in\n parent/child tables and check RI constraints).\n3. Using DIRTY READ in refint.c\n\nI hope to be able to do 2. or 3., though it would be much \nbetter to have 1. (with statement level triggers) implemented by Jan.\n\nVadim\n",
"msg_date": "Wed, 15 Sep 1999 13:46:15 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "6.6"
},
{
"msg_contents": "Mike Mascari <[email protected]> writes:\n> 2. How is it that Tom Lane isn't considered \"core\"?\n\nThe core guys have been here a lot longer. I've only been\nworking with Postgres for a year or so.\n\nThere are quite a few major contributors besides the core four,\nactually.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 15 Sep 1999 10:02:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Status report: long-query-string changes "
},
{
"msg_contents": "> I would like to see something from Jan too...\n> My opinion is that RI _MUST_ be implemented in 6.6.\n> There are 3 ways:\n> \n> 1. Using deferrable rules/statement level triggers.\n> 2. Using transaction log (to read changes made in\n> parent/child tables and check RI constraints).\n> 3. Using DIRTY READ in refint.c\n> \n> I hope to be able to do 2. or 3., though it would be much \n> better to have 1. (with statement level triggers) implemented by Jan.\n\nUh, oh. Vadim has thrown down the hachet on a 6.6 _must_ _have_ item.\n\n(I agree with him, but am glad I didn't have to do it.)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 15 Sep 1999 14:11:15 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.6"
},
{
"msg_contents": "> Mike Mascari <[email protected]> writes:\n> > 2. How is it that Tom Lane isn't considered \"core\"?\n> \n> The core guys have been here a lot longer. I've only been\n> working with Postgres for a year or so.\n> \n> There are quite a few major contributors besides the core four,\n> actually.\n\nI have renamed the Core group to \"Founders\" on the web site, and changed\n\"Other Major Code Developers\" to \"Major Code Developers\".\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 28 Sep 1999 00:22:16 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Status report: long-query-string changes"
}
] |
[
{
"msg_contents": "\nRegarding your on-line comparison of Prostgresql, Oracle, MySQL and the\nlike, it would help to know which systems support Stored Procedures.\n\nThanks,\n\nMichael Dexter\[email protected]\n\n",
"msg_date": "Wed, 15 Sep 1999 09:20:58 -0700",
"msg_from": "Michael Dexter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Comparison Suggestion"
}
] |
[
{
"msg_contents": "\nfreebsd 2.2.7 (PII system)\ngcc 2.95.1\npostgres 6.5.1 no patches\n\nI get these sporadically and I can't trace them to any particular thing other than heavy access.\n\nquery: select serverSequence from estoret order by serverSequence\nProcessQuery\nNOTICE: SIReadEntryData: cache state reset\nTRAP: Failed Assertion(\"!(RelationNameCache->hctl->nkeys == 10):\", File: \"relcache.c\", Line: 1458)\n\n!(RelationNameCache->hctl->nkeys == 10) (0)\n\n\n#0 0x20214151 in ?? ()\n#1 0x202139c3 in ?? ()\n#2 0x109648 in ExcAbort ()\n#3 0x1095a7 in ExcUnCaught ()\n#4 0x1095fa in ExcRaise ()\n#5 0x108e3a in ExceptionalCondition ()\n#6 0x105f33 in RelationCacheInvalidate ()\n#7 0x1040ef in ResetSystemCaches ()\n#8 0xc96c1 in SIReadEntryData ()\n#9 0xc8d4b in InvalidateSharedInvalid ()\n#10 0x104372 in DiscardInvalid ()\n#11 0x2f7b8 in AtStart_Cache ()\n#12 0x2f78a in CommandCounterIncrement ()\n#13 0xd3754 in pg_exec_query_dest ()\n#14 0xd35a4 in pg_exec_query ()\n#15 0xd5118 in PostgresMain ()\n#16 0xb44a8 in DoBackend ()\n#17 0xb3f63 in BackendStartup ()\n#18 0xb3246 in ServerLoop ()\n#19 0xb2a3f in PostmasterMain ()\n#20 0x69aa7 in main ()\n",
"msg_date": "Wed, 15 Sep 1999 17:06:21 -0700",
"msg_from": "Jason Venner <[email protected]>",
"msg_from_op": true,
"msg_subject": "NOTICE: SIReadEntryData: cache state reset TRAP: Failed\n\tAssertion(\"!(RelationNameCache->hctl->nkeys == 10):\",\n\tFile: \"relcache.c\", Line: 1458)"
},
{
"msg_contents": "Jason Venner <[email protected]> writes:\n> I get these sporadically and I can't trace them to any particular\n> thing other than heavy access.\n\n> NOTICE: SIReadEntryData: cache state reset\n> TRAP: Failed Assertion(\"!(RelationNameCache->hctl->nkeys == 10):\", File: \"relcache.c\", Line: 1458)\n> !(RelationNameCache->hctl->nkeys == 10) (0)\n\nYeah. What's happening is that the SI message buffer is overflowing and\nyou are hitting a bug in the code that is supposed to recover from that\ncondition. (I posted a long discussion of what SI is all about a few\ndays ago and don't feel like repeating it --- check the list archives.)\nThere are several bugs in that area :-(.\n\nI believe I have fixed all the problems with SI overflow recovery for\n6.6, but that's part of a rather extensive set of changes to relcache.c\nand sinvaladt.c. We are talking about back-patching these changes along\nwith the not-yet-done relation locking change to make a 6.5.3.\n\nIn the meantime, your best bet might be to reduce the probability of SI\noverflow by raising MAXNUMMESSAGES in src/include/storage/sinvaladt.h.\nIt's standardly 4000, but the space per message is only a couple dozen\nbytes, so you could probably make it 10 times that without hurting\nmuch...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 16 Sep 1999 09:48:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] NOTICE: SIReadEntryData: cache state reset TRAP: Failed\n\tAssertion(\"!(RelationNameCache->hctl->nkeys == 10):\",\n\tFile: \"relcache.c\", Line: 1458)"
}
] |
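For anyone wanting to apply Tom's interim workaround: it is a one-constant change followed by a rebuild. A sketch of the edit in src/include/storage/sinvaladt.h (surrounding context abridged; check the actual header in your tree):

    /* src/include/storage/sinvaladt.h -- abridged sketch of the edit */

    /* Number of shared-invalidation messages the SI buffer can hold.
     * The stock value is 4000; raising it (here tenfold) makes buffer
     * overflow -- and the buggy overflow-recovery path -- much less
     * likely, at roughly a couple dozen bytes of shared memory per
     * extra slot. */
    #define MAXNUMMESSAGES 40000

Per the message above, the per-slot cost is small, so a tenfold increase is cheap insurance until the overflow-recovery fixes land in a release.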
[
{
"msg_contents": "\nRC == Release Candidate...\n\nUnless anyone has particular problems with this tar ball before tomorrow\nmorning (7:30EST), I'm going to get rid of the RC and make it the\nrelease...\n\nspeak now or forever hold your piece :)\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Wed, 15 Sep 1999 21:30:28 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "postgresql-6.5.2RC.tar.gz"
}
] |
[
{
"msg_contents": "\nOkay, using 'diff -cr --new-file' to create the patches, I've created one\nfrom 6.5->6.5.1 and one from 6.5.1->6.5.2 ... I'm downloading everything\nto my computer right now to see how the patches work (if they work), but\nif anyone else wants to try, they are up on the site and ready to go...\n\nIf there is somethign else I should be using for that diff, please feel\nfree to mention it and I'll do it all over again...\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Wed, 15 Sep 1999 21:40:09 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "patches -- sucker for punishment?"
},
{
"msg_contents": "The Hermit Hacker wrote:\n> \n> Okay, using 'diff -cr --new-file' to create the patches, I've created one\n> from 6.5->6.5.1 and one from 6.5.1->6.5.2 ... I'm downloading everything\n> to my computer right now to see how the patches work (if they work), but\n> if anyone else wants to try, they are up on the site and ready to go...\n> \n> If there is somethign else I should be using for that diff, please feel\n> free to mention it and I'll do it all over again...\n\nI'm experiencing some strange problems with that patches. First,\nmy Netscape shows their sizes as approx. 200-300 k. But being downloaded,\nthey are 2-3 megs of size. Second, occasionally I had to download a \npatch (6.5.1->6.5.2) twice, one immediately after another. They were\ndifferent! There was some strings of difference in the middle of \nthe patch. Maybe it was Netscape's quirks. Hope so. As to patches \nthemselves, they seem to be Ok, though I couldn't test them by \ncompiling source (my source tree is somehow slightly damaged), there\nwere no unreasonable hunk fails. I had to use -p1 option with the\npatch to strip top-level dir which is called different on my system.\n\n-- \nLeon.\n-------\nHe knows he'll never have to answer for any of his theories actually \nbeing put to test. If they were, they would be contaminated by reality.\n\n",
"msg_date": "Thu, 16 Sep 1999 12:32:20 +0500",
"msg_from": "Leon <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] patches"
}
] |
[
{
"msg_contents": "I've been playing with selects using explicit join syntax, and have\nsome initial results for inner joins (haven't done anything more about\nouter joins yet; inner joins are tough enough for now).\n\nIt's a real pita to flatten the join expressions into the traditional\nPostgres query tree. It would be nice to start thinking about how to\nrepresent general subqueries or intermediate queries in the parse\ntree.\n\nAnyway, some examples below...\n\n - Thomas\n\npostgres=> select * from t1;\ni| j\n-+--\n1|10\n2|20\n3|30\n(3 rows)\n\npostgres=> select * from t2;\ni| x\n-+---\n1|100\n3|300\n(2 rows)\n\npostgres=> select * from t1 natural join t2;\ni| j| x\n-+--+---\n1|10|100\n3|30|300\n(2 rows)\n\npostgres=> select * from t1 join t2 using (i);\ni| j| x\n-+--+---\n1|10|100\n3|30|300\n(2 rows)\n\npostgres=> select * from t1 join t2 on (t1.i = t2.i);\ni| j|i| x\n-+--+-+---\n1|10|1|100\n3|30|3|300\n(2 rows)\n\npostgres=> select * from t1 natural join t2 natural join t1;\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nWe have lost the connection to the backend, so further processing is\nimpossible. Terminating.\n\nOh well. Was on a roll 'til then ;)\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 16 Sep 1999 03:03:53 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Join syntax"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> It's a real pita to flatten the join expressions into the traditional\n> Postgres query tree. It would be nice to start thinking about how to\n> represent general subqueries or intermediate queries in the parse\n> tree.\n\nYes. Jan has been saying for some time that he needs that for rules.\nAlso, I have found some squirrely cases in INSERT ... SELECT ... that\ncan't really be done right unless the INSERT and SELECT targetlists\nare kept separate, which seems to mean a two-level parsetree structure.\n\nThe UNION/INTERSECT/EXCEPT code has a really klugy approach to\nmulti-query parse trees, which maybe could be cleaned up if we\nsupported them in a more general fashion.\n\nMaybe it's time to bite the bullet and do it. You have any thoughts\non what the representation should look like?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 16 Sep 1999 10:09:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Join syntax "
},
{
"msg_contents": "> > ... represent general subqueries or intermediate queries in the\n> > parse tree.\n> Maybe it's time to bite the bullet and do it. You have any thoughts\n> on what the representation should look like?\n\nI was hoping you would tell me ;)\n\nI don't have a good feel for the current parse tree (which of course\nhasn't kept me from fooling around with it). But I'll definitely need\nsomething extra or different to implement outer joins. If I were\nkeeping everything else the same, I was thinking of propagating a\n\"join expression\" into the planner/optimizer in the same area as the\nexisting qualification nodes. One of the differences would be that the\nJE marks a node around which the optimizer is not allowed to reorder\nthe plan (since outer joins must be evaluated in a specific order to\nget the right result). But I could just as easily represent this as a\nsubquery node somewhere else in the parse tree.\n\nafaik the planner/optimizer already has the notion of\nmerging/joining/scanning intermediate results, so teaching it to\ninvoke these explicitly from the query tree rather than just\nimplicitly may not be a huge stretch.\n\nbtw I'm currently rewriting the join syntax in gram.y to conform\nbetter to a closer reading of the SQL92 standard. One annoyance is\nthat the standard allows table *and* column aliasing *everywhere*.\ne.g.\n\n select * from (t1 as x1 (i,j,k) join t2 using (i)) as r1 (a,b,c,d)\n\nis (apparently) legal syntax, resulting in rows labeled a-d. Ugh.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 17 Sep 1999 05:58:19 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Join syntax"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> \n> > > ... represent general subqueries or intermediate queries in the\n> > > parse tree.\n> > Maybe it's time to bite the bullet and do it. You have any thoughts\n> > on what the representation should look like?\n> \n> I was hoping you would tell me ;)\n> \n> I don't have a good feel for the current parse tree (which of course\n> hasn't kept me from fooling around with it). But I'll definitely need\n> something extra or different to implement outer joins. If I were\n> keeping everything else the same, I was thinking of propagating a\n> \"join expression\" into the planner/optimizer in the same area as the\n> existing qualification nodes. One of the differences would be that the\n> JE marks a node around which the optimizer is not allowed to reorder\n ^^^^^^^^^^^^^^^^^^^^^^^^^\n> the plan (since outer joins must be evaluated in a specific order to\n ^^^^^^^^\n> get the right result). But I could just as easily represent this as a\n\nAnd this is what we need to have subqueries in FROM!..\n(One a great thing which I want to have so much -:))\n\n> subquery node somewhere else in the parse tree.\n\nVadim\n",
"msg_date": "Fri, 17 Sep 1999 14:15:13 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Join syntax"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> Maybe it's time to bite the bullet and do it. You have any thoughts\n>> on what the representation should look like?\n\n> I was thinking of propagating a \"join expression\" into the\n> planner/optimizer in the same area as the existing qualification\n> nodes.\n\nI think it would be best to keep it out of the regular expression-tree\nstuff, for a number of reasons including the issue of not allowing\nreordering.\n\nThe thing I was visualizing was a tree of Query nodes, not queries as\nitems in ordinary expressions within queries. Essentially, we'd allow a\nsub-Query as an entry in the rangetable (the FROM list) of another Query.\nI *think* this is what Jan has been saying he wants in order to do view\nrule rewrites more cleanly. It could also solve my problems with INSERT\n... SELECT.\n\nAside from plain Query nodes (representing a sub-Select) we'd need node\ntypes that represent UNION/INTERSECT/EXCEPT combinations of Queries.\nI don't like the way the current UNION/INTERSECT code overloads\nAND/OR/NOT nodes to do this duty, not least because there's noplace to\nrepresent the \"ALL\" modifier cleanly. I'd rather see a separate set of\nnode types.\n\nI still don't understand the semantics of all those join types you are\nworking on, but I suppose they would be additional node types in this\nQuery tree structure. Should the rangetable itself (which represents\na regular Cartesian-product join of the source tables) become some\nkind of explicit join node? If so, I guess the WHERE clause would be\nattached to this join node, and not to the Query node referencing the\njoin. (Actually, the rangetable should probably continue to exist\nas a list of all the tables referenced anywhere in the Query tree,\nbut we should separate out its implicit use as a representation of\na Cartesian product join and make an explicit node that says what to\njoin, how, and with what restriction clauses. The \"in From clause\"\nflag in RTEs would go away...) \n\nAnother thing it'd be nice to think about while we are at it is how\nto implement SQL92's DISTINCT-inside-an-aggregate-function feature,\neg, \"SELECT COUNT(DISTINCT x), COUNT(DISTINCT y) FROM table\".\nMy thought here is that the cleanest implementation is to have \nsub-Queries like \"SELECT DISTINCT x FROM table\" and then apply the\naggregates over the outputs of those subqueries. Not sure about\ndetails here.\n\n> afaik the planner/optimizer already has the notion of\n> merging/joining/scanning intermediate results, so teaching it to\n> invoke these explicitly from the query tree rather than just\n> implicitly may not be a huge stretch.\n\nYes, the output of the planner is a tree of plan node types, so there\nwould probably be very little change needed there or in the executor.\nWe might need to generalize the notion that a plan node only has\none or two descendants (\"lefttree/righttree\") into N descendants.\n\n\t\t\tregards, tom lane\n\nPS: Has anyone heard from Jan lately? Seems like he's been awfully\nquiet...\n",
"msg_date": "Fri, 17 Sep 1999 10:49:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Join syntax "
}
] |
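To make the "join expression the optimizer may not reorder around" idea concrete, here is a hypothetical C sketch of such a parse node. Every name in it is invented for illustration (the thread predates any such node in the source tree); Node and List stand in for the backend's generic types.

    /* Hypothetical parse node for an explicit JOIN clause.  Node and
     * List stand in for the backend's generic node and list types. */
    typedef struct Node Node;
    typedef struct List List;

    typedef enum JoinType
    {
        JOIN_INNER,
        JOIN_LEFT,
        JOIN_RIGHT,
        JOIN_FULL
    } JoinType;

    typedef struct JoinExpr
    {
        JoinType  jointype;     /* inner, left, right, full */
        int       isNatural;    /* nonzero for NATURAL JOIN */
        Node     *larg;         /* left operand: RTE ref, JoinExpr,
                                 * or (eventually) a sub-Query */
        Node     *rarg;         /* right operand */
        List     *usingClause;  /* USING (col, ...), if any */
        Node     *quals;        /* ON (...) condition, if any */
        int       fixedOrder;   /* nonzero for outer joins: the planner
                                 * must not reorder evaluation around
                                 * this node */
    } JoinExpr;

The point of fixedOrder is exactly what Thomas and Vadim underline above: for outer joins, and likewise for sub-Queries in FROM, the planner would treat the node as an evaluation boundary instead of flattening it into one big join search.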
[
{
"msg_contents": "Dear Pgsql Wizards,\n\nI posted this message three weeks ago on pgsql-bugs and got no reply.\nIs there a maintainer for ecpg? I have also trapped some other bugs /\noddities (e.g. upper case for table names etc. gets lost even if\nquoted).\n\nThe patch does\n- enables the use of bool variables in fields which might become NULL.\n Up to now the lib told you that NULL is not a bool variable, even if\n you provided an indicator.\n\n- the second patch checks whether a value is null and issues an error if\n no indicator is provided.\n\nSidenote: IIRC, the variable should be left alone if the value is NULL.\nECPGlib sets it's value to 0 on NULL. Is this a violation of the\nstandard?\n\nRegards\n Christof\n\nPS: I offer some time for ecpg if there is no current maintainer. Or\nshould I address another list? (pgsql-interfaces?)",
"msg_date": "Thu, 16 Sep 1999 14:20:48 +0200",
"msg_from": "Christof Petig <[email protected]>",
"msg_from_op": true,
"msg_subject": "Is there a maintainer for ecpg? patch included"
}
] |
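To make the bool/NULL interaction in Christof's report concrete: a minimal embedded-SQL (ecpg) C program using an indicator variable. The table t, its bool column b, and the database name are assumptions for illustration; with the patch described above, a NULL sets the indicator negative instead of raising the "NULL is not a bool variable" error.

    #include <stdio.h>

    EXEC SQL INCLUDE sqlca;

    int main(void)
    {
        EXEC SQL BEGIN DECLARE SECTION;
            bool  flag;
            short flag_ind;      /* indicator: < 0 signals NULL */
        EXEC SQL END DECLARE SECTION;

        EXEC SQL CONNECT TO testdb;   /* placeholder database name */
        EXEC SQL SELECT b INTO :flag :flag_ind FROM t WHERE id = 1;

        if (flag_ind < 0)
            printf("b is NULL; flag was not meaningfully set\n");
        else
            printf("b = %s\n", flag ? "true" : "false");

        EXEC SQL DISCONNECT;
        return 0;
    }

Note the sidenote in the message above: per the standard's intent, on a NULL the host variable should be left alone and only the indicator set, which is what this sketch relies on.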
[
{
"msg_contents": "Hello\n\nI have inserted texts with hex codes 1d, 1e and 1f into 6.5.1 and it seems\nto that this characters totaly confuses the server, the vacuum writes that\nthe parent for record does not exist or that the record is to long.\n\nIs that possible?\nI am not sure that this is the reason but when I copy the table to file,\ndestroy it, created again a copy back through filter which removes those\ncharacters, then the server workes well. \n\nShould'n the data be parsed for such a character before they are inserted?\n\nThanks in advance\nRichard Bouska \n\n",
"msg_date": "Thu, 16 Sep 1999 16:16:58 +0200 (CEST)",
"msg_from": "Richard Bouska <[email protected]>",
"msg_from_op": true,
"msg_subject": "1d,1e,1f poison for data?"
},
{
"msg_contents": "Richard Bouska <[email protected]> writes:\n> I have inserted texts with hex codes 1d, 1e and 1f into 6.5.1 and it seems\n> to that this characters totaly confuses the server, the vacuum writes that\n> the parent for record does not exist or that the record is to long.\n\n> Is that possible?\n\nDoesn't seem like that should be a problem (and a quick trial here\ndoesn't show any obvious trouble). I'm guessing the explanation is\nsomething else ... but I'm not sure what.\n\nThere are a couple of known gotchas that might cause trouble at vacuum\ntime:\n\n1. If you run the server with different LOCALE settings at different\ntimes, then the sort order of an existing index might be wrong for\nthe current LOCALE, in which case the system gets very confused. \nDon't do that ;-)\n\n2. If you have an index on a text field, the effective limit on text\nlength is ~4K instead of ~8K, because the btree index code expects to be\nable to fit at least 2 keys on a disk page. This one is nasty because\nif only a few of your entries are >4K you might sail along happily\nuntil one day two long entries chance to wind up on the same page of the\nindex. In particular, VACUUM rearranges the index so the problem could\nshow up at that time.\n\nIf neither of those explain your trouble, please see if you can\ndevelop a reproducible test case, and submit a full bug report.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 16 Sep 1999 11:13:29 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 1d,1e,1f poison for data? "
}
] |
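The "copy back through a filter which removes those characters" step the poster describes can be a trivial stdin-to-stdout C program; a throwaway sketch:

    #include <stdio.h>

    /* Strip 0x1d (GS), 0x1e (RS) and 0x1f (US) from a COPY dump
     * on stdin before loading it back in. */
    int main(void)
    {
        int c;

        while ((c = getchar()) != EOF)
            if (c != 0x1d && c != 0x1e && c != 0x1f)
                putchar(c);
        return 0;
    }

Used as a pipe stage between the dumped table file and the reload, e.g. between COPY ... TO and COPY ... FROM.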
[
{
"msg_contents": "Hi all,\n\nPlease find attached diffs to the documentation that are intended to\naccompany the CREATE FUNCTION patch that I submitted earlier. I stuck\nwith the syntax in the original patch rather than the alternative\nsyntax suggested by Andreas. \n\nWhen I was altering the xfunc.sgml page I came across this:\n\n <title>Name Space Conflicts</title>\n \n <para>\n As of <productname>Postgres</productname> v6.5,\n <command>CREATE FUNCTION</command> can decouple a C language\n function name from the name of the entry point. This is now the\n preferred technique to accomplish function overloading.\n </para>\n\nwhich seems to suggest that someone had a similar idea in the past. I\ncould find no evidence of this functionality in the 6.5 code though\n\nBernard Frankpitt",
"msg_date": "Thu, 16 Sep 1999 16:51:33 +0000",
"msg_from": "Bernard Frankpitt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Doccumentation Patch for Create Function"
},
{
"msg_contents": "\nApplied Docs too.\n\n\n> Hi all,\n> \n> Please find attached diffs to the documentation that are intended to\n> accompany the CREATE FUNCTION patch that I submitted earlier. I stuck\n> with the syntax in the original patch rather than the alternative\n> syntax suggested by Andreas. \n> \n> When I was altering the xfunc.sgml page I came across this:\n> \n> <title>Name Space Conflicts</title>\n> \n> <para>\n> As of <productname>Postgres</productname> v6.5,\n> <command>CREATE FUNCTION</command> can decouple a C language\n> function name from the name of the entry point. This is now the\n> preferred technique to accomplish function overloading.\n> </para>\n> \n> which seems to suggest that someone had a similar idea in the past. I\n> could find no evidence of this functionality in the 6.5 code though\n> \n> Bernard Frankpitt\n\n[Attachment, skipping...]\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 28 Sep 1999 00:32:24 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Doccumentation Patch for Create Function"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Bernard Frankpitt <[email protected]> wrote (a couple weeks ago):\n> > When I was altering the xfunc.sgml page I came across this:\n> \n> \" \" \"\n> >\n> That's talking about builtin functions, ie functions implemented by\n> statically-linked routines in the standard backend. \n> \n> regards, tom lane\n\nOh, That explains it. It didn't occur to me that people adding custom \nfunctionality to the backend wouldn't use a dynamically linked\ninterface. In fact it occured to me that it might be a good idea to\nconvert some of `poor relations' the stuff like gist, and perhaps rtrees\nto dynamically linked modules in /contrib. They might then provide\nbetter examples of how to develop and link major extensions into the\nbackend in a relatively painless way. Also an exercise like that would\nreally provide a good opportunity to define and document the backend\ncode interfaces between the executor and access methods, and between\naccess methods and the low-level database functionality (buffer\nmanagement, tuple time-validation etc.). Once I finish my dissertation,\nI was sort of planning to start chipping away at some documentation for\nthe code internals. \n\nTo me, the extensibility features and open design of PostgreSQL are its\nmost exciting features, and I think that a good set of documents on the\ninternal functionality and interfaces would be rewarded in the long term\nby innovative features and unusual applications from developers in a\nwide variety of fields. \n\nBernie\n",
"msg_date": "Sat, 02 Oct 1999 14:22:40 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Doccumentation Patch for Create Function"
},
{
"msg_contents": "Bernard Frankpitt <[email protected]> wrote (a couple weeks ago):\n> When I was altering the xfunc.sgml page I came across this:\n\n> <title>Name Space Conflicts</title>\n> <para>\n> As of <productname>Postgres</productname> v6.5,\n> <command>CREATE FUNCTION</command> can decouple a C language\n> function name from the name of the entry point. This is now the\n> preferred technique to accomplish function overloading.\n> </para>\n\n> which seems to suggest that someone had a similar idea in the past. I\n> could find no evidence of this functionality in the 6.5 code though\n\nThat's talking about builtin functions, ie functions implemented by\nstatically-linked routines in the standard backend. The SQL name is\nnow distinct from the C-language name, but that wasn't true before 6.5.\nI kind of thought you had seen this and realized it would be a good\nidea to have the same functionality for dynamically linked routines.\n\nIf you came up with the idea independently, it must clearly be a good\nthing ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 02 Oct 1999 15:27:16 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Doccumentation Patch for Create Function "
}
] |
[
{
"msg_contents": "I submitted a patch to the patches list several\nmonths ago which implemented Oracle's TRUNCATE TABLE\nstatement in PostgreSQL and was wondering whether or\nnot the patch was going to make it into current. I had\nthe patch ready before the 6.5 release but 6.5 was\nalready in beta at the time, so I waited until the\n6.5 tree was split.\n\nRecent discussions with regard to the functioning of \nDROP TAPLE in transactions and changing heap_openr()\nto require a locking type affect the nature of the \npatch. TRUNCATE TABLE behaves like Oracle's DDL\nstatements by committing the running transaction and\nstarting a new one for the TRUNCATE operation\n(which generates no rollback information), or, as \nBruce Momjian puts it \"cheating\".\n\nAny news?\n\nJust curious,\n\nMike Mascari\n([email protected])\n\n\n\n\n\n__________________________________________________\nDo You Yahoo!?\nBid and sell for free at http://auctions.yahoo.com\n",
"msg_date": "Thu, 16 Sep 1999 18:06:18 -0700 (PDT)",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": true,
"msg_subject": "TRUNCATE TABLE patch"
},
{
"msg_contents": "Applied. Sorry for the delay. I think Tom Lane fixed the function\ncalls.\n\n\n> I submitted a patch to the patches list several\n> months ago which implemented Oracle's TRUNCATE TABLE\n> statement in PostgreSQL and was wondering whether or\n> not the patch was going to make it into current. I had\n> the patch ready before the 6.5 release but 6.5 was\n> already in beta at the time, so I waited until the\n> 6.5 tree was split.\n> \n> Recent discussions with regard to the functioning of \n> DROP TAPLE in transactions and changing heap_openr()\n> to require a locking type affect the nature of the \n> patch. TRUNCATE TABLE behaves like Oracle's DDL\n> statements by committing the running transaction and\n> starting a new one for the TRUNCATE operation\n> (which generates no rollback information), or, as \n> Bruce Momjian puts it \"cheating\".\n> \n> Any news?\n> \n> Just curious,\n> \n> Mike Mascari\n> ([email protected])\n> \n> \n> \n> \n> \n> __________________________________________________\n> Do You Yahoo!?\n> Bid and sell for free at http://auctions.yahoo.com\n> \n> ************\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 28 Sep 1999 00:24:27 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TRUNCATE TABLE patch"
}
] |
[
{
"msg_contents": "Hello all,\n\nCache invalidation mechanism was much improved. \nThanks to Tom.\n\nBut as far as I see,neither relation cache nor system catalog cache \naren't be rollbacked correctly.\nThis should be solved if we would execute DDL statement inside \ntransactions. \n\nFor example,\n\n\tcreate table t1 (id int4);\n\tCREATE\n\tbegin;\n\tBEGIN\n\talter table t1 add column dt1 text;\n\tADD\n\tselect * from t1;\n\tid|dt1\n\t--+---\n\t(0 rows)\n\n\tabort;\n\tABORT\n\tvisco=> select * from t1;\n\tid|dt1\n\t--+---\n\t(0 rows)\n\nI added time_qualification_check to SearchSysCache() on trial\n(see the patch at the end of this posting).\n\nAfter this change,\n\t.\n\t.\n\tabort;\n\tABORT\n\tselect * from t1;\n\tERROR: Relation t1 does not have attribute dt1\n\nSeems relation cache is not invalidated yet.\nI also tried to add time_qualification_check to RelationId(Name)-\nCacheGetRelation(). But unfortunately,Relation doesn't have\nsuch a information.\n\nAny ideas ?\nComments ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n*** utils/cache/catcache.c.orig\tMon Jul 26 12:45:14 1999\n--- utils/cache/catcache.c\tFri Sep 17 08:57:50 1999\n***************\n*** 872,878 ****\n \t\t\t\t\tcache->cc_skey,\n \t\t\t\t\tres);\n \t\tif (res)\n! \t\t\tbreak;\n \t}\n \n \t/* ----------------\n--- 872,881 ----\n \t\t\t\t\tcache->cc_skey,\n \t\t\t\t\tres);\n \t\tif (res)\n! \t\t{\n! \t\t\tif (HeapTupleSatisfiesNow(ct->ct_tup->t_data))\n! \t\t\t\tbreak;\n! \t\t}\n \t}\n \n \t/* ----------------\n\n",
"msg_date": "Fri, 17 Sep 1999 10:40:48 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "couldn't rollback cache ?"
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> But as far as I see,neither relation cache nor system catalog cache \n> aren't be rollbacked correctly.\n> I added time_qualification_check to SearchSysCache() on trial\n> (see the patch at the end of this posting).\n\nHmm. This must be a bug of very long standing; surprising it hasn't\nbeen noticed before. I think you are probably right, because a little\nglimpsing shows that SearchSysCache() is the *only* place in the whole\nsystem where HeapKeyTest() is called directly --- everyone else goes\nthrough HeapTupleSatisfies() which adds a timequal check of one sort or\nanother. I don't know the timequal stuff at all, but it seems likely\nthat we want one here. (Vadim, is this fix right?)\n\n> After this change,\n> \tabort;\n> \tABORT\n> \tselect * from t1;\n> \tERROR: Relation t1 does not have attribute dt1\n\n> Seems relation cache is not invalidated yet.\n> I also tried to add time_qualification_check to RelationId(Name)-\n> CacheGetRelation(). But unfortunately,Relation doesn't have\n> such a information.\n\nI think the real bug here is in inval.c: see\nInvalidationMessageCacheInvalidate, which scans pending SI messages\nat abort time. If we had committed, we'd have sent an SI message\ntelling other backends to refresh their relcache entries for t1;\nso there is an entry for t1 in the pending-SI-message list. We can\nuse that entry to tell us to invalidate our own relcache entry instead.\nIt looks like this is done correctly for tuple SI messages but not for\nrelation SI messages --- and in fact the code sez\n\t\t\t/* XXX ignore this--is this correct ??? */\nEvidently not. (BTW, please add some comments to this code! It's\nnot obvious that what it's doing is throwing away cache entries that\nhave been changed by a transaction now being aborted.)\n\nIt would probably be a good idea to switch the order of the operations\nin AtAbort_Cache() in xact.c, so that relcache reference counts get\nreset to 0 before we do the pending-SI-message scan. That way, we could\njust discard the bogus relcache entry and not waste time rebuilding an\nentry we might not need again. (This might even be essential to avoid\nan error in the aborted-table-create case; not sure. The routine\nRelationCacheAbort() didn't exist till last week, so the present call\norder is certainly not gospel.)\n\nWe could probably do this in other ways too, like marking all relcache\nentries with the transaction ID of their last change and using that to\ndetect what to throw away. But the SI message queue is already there\nso might as well use it...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 17 Sep 1999 09:57:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] couldn't rollback cache ? "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> > But as far as I see,neither relation cache nor system catalog cache\n> > aren't be rollbacked correctly.\n> > I added time_qualification_check to SearchSysCache() on trial\n> > (see the patch at the end of this posting).\n> \n> Hmm. This must be a bug of very long standing; surprising it hasn't\n> been noticed before. I think you are probably right, because a little\n> glimpsing shows that SearchSysCache() is the *only* place in the whole\n> system where HeapKeyTest() is called directly --- everyone else goes\n> through HeapTupleSatisfies() which adds a timequal check of one sort or\n> another. I don't know the timequal stuff at all, but it seems likely\n> that we want one here. (Vadim, is this fix right?)\n\nSorry, but currently I have no ability to deal with anything\nbut WAL. As for cache/SI issues, I would like to see shared \ncatalog cache implemented (and remove current SI stuff),\nbut I was not able to do it for ~ 2 years, -:(\n\nVadim\n",
"msg_date": "Mon, 20 Sep 1999 10:17:18 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] couldn't rollback cache ?"
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> Sent: Friday, September 17, 1999 10:58 PM\n> To: Hiroshi Inoue\n> Cc: pgsql-hackers\n> Subject: Re: [HACKERS] couldn't rollback cache ? \n> \n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> > But as far as I see,neither relation cache nor system catalog cache \n> > aren't be rollbacked correctly.\n> > I added time_qualification_check to SearchSysCache() on trial\n> > (see the patch at the end of this posting).\n> \n> Hmm. This must be a bug of very long standing; surprising it hasn't\n> been noticed before. I think you are probably right, because a little\n> glimpsing shows that SearchSysCache() is the *only* place in the whole\n> system where HeapKeyTest() is called directly --- everyone else goes\n> through HeapTupleSatisfies() which adds a timequal check of one sort or\n> another. I don't know the timequal stuff at all, but it seems likely\n> that we want one here. (Vadim, is this fix right?)\n> \n> > After this change,\n> > \tabort;\n> > \tABORT\n> > \tselect * from t1;\n> > \tERROR: Relation t1 does not have attribute dt1\n> \n> > Seems relation cache is not invalidated yet.\n> > I also tried to add time_qualification_check to RelationId(Name)-\n> > CacheGetRelation(). But unfortunately,Relation doesn't have\n> > such a information.\n> \n> I think the real bug here is in inval.c: see\n> InvalidationMessageCacheInvalidate, which scans pending SI messages\n> at abort time. If we had committed, we'd have sent an SI message\n> telling other backends to refresh their relcache entries for t1;\n> so there is an entry for t1 in the pending-SI-message list. We can\n> use that entry to tell us to invalidate our own relcache entry instead.\n> It looks like this is done correctly for tuple SI messages but not for\n\nI think it's not done correctly for tuple SI messages either.\nI didn't use current cache invalidation mechanism when I made the\npatch for SearchSysCache() because of the following 2 reasons. \n\n1. SI messages are eaten by CommandCounterIncrement(). So they\n may vanish before transaction end/aborts.\n2. The tuples which should be invalidated in case of abort are different\n from ones in case of commit.\n In case of commit,deleting old tuples should be invalidated for all\n backends.\n In case of abort,insert(updat)ing new tuples should be invalidated \n for the insert(updat)ing backend.\n Currently heap_insert() calls RelationInvalidateHeapTuple() for a \n inserting new tuple but heap_replace() doesn't call RelationInvalid-\n ateHeapTuple() for a updating new tuple. I don't understand which\n is right.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Mon, 20 Sep 1999 11:47:49 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] couldn't rollback cache ? "
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> I think it's not done correctly for tuple SI messages either.\n> I didn't use current cache invalidation mechanism when I made the\n> patch for SearchSysCache() because of the following 2 reasons. \n\n> 1. SI messages are eaten by CommandCounterIncrement(). So they\n> may vanish before transaction end/aborts.\n\nI think this is OK. The sending backend does not send the SI message\nin the first place until it has committed. Other backends can read\nthe messages at CommandCounterIncrement; it doesn't matter whether the\nother backends later commit or abort their own transactions. I think.\nDo you have a counterexample?\n\n> 2. The tuples which should be invalidated in case of abort are different\n> from ones in case of commit.\n> In case of commit,deleting old tuples should be invalidated for all\n> backends.\n> In case of abort,insert(updat)ing new tuples should be invalidated \n> for the insert(updat)ing backend.\n\nI wonder whether it wouldn't be cleaner to identify the target tuples\nby OID instead of ItemPointer. That way would work for both new and\nupdate tuples...\n\n> Currently heap_insert() calls RelationInvalidateHeapTuple() for a \n> inserting new tuple but heap_replace() doesn't call RelationInvalid-\n> ateHeapTuple() for a updating new tuple. I don't understand which\n> is right.\n\nHmm. Invalidating the old tuple is the right thing for heap_replace in\nterms of sending a message to other backends at commit; it's the old\ntuple that they might have cached and need to get rid of. But for\ngetting rid of this backend's uncommitted new tuple in case of abort,\nit's not the right thing. OTOH, your change to add a time qual check\nto SearchSysCache would fix that, wouldn't it? But invalidating by OID\nwould make the issue moot.\n\nPossibly heap_insert doesn't need to be calling\nRelationInvalidateHeapTuple at all --- a new tuple can't be cached by\nany other backend, by definition, until it has committed; so there's no\nneed to send out an SI message for it. That call must be there to\nensure that the local cache gets purged of the tuple in case of abort.\nMaybe we could remove that call (and reduce SI traffic) if we rely on\na time qual to purge bogus entries from the local caches after abort.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 20 Sep 1999 10:28:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] couldn't rollback cache ? "
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> Sent: Monday, September 20, 1999 11:28 PM\n> To: Hiroshi Inoue\n> Cc: pgsql-hackers\n> Subject: Re: [HACKERS] couldn't rollback cache ?\n>\n>\n> \"Hiroshi Inoue\" <[email protected]> writes:\n> > I think it's not done correctly for tuple SI messages either.\n> > I didn't use current cache invalidation mechanism when I made the\n> > patch for SearchSysCache() because of the following 2 reasons.\n>\n> > 1. SI messages are eaten by CommandCounterIncrement(). So they\n> > may vanish before transaction end/aborts.\n>\n> I think this is OK. The sending backend does not send the SI message\n> in the first place until it has committed. Other backends can read\n\nDoesn't the sending backend send the SI message when Command-\nCounterIncrement() is executed ?\nAtCommit_Cache() is called not only from CommitTransaction() but\nalso from CommandCounterIncrement().\n\nAtCommit_Cache() in CommandCounterIncrement() eats local\ninvalidation messages and register SI information (this seems too\nearly for other backends though it's not so harmful). Then AtAtart_\nCache() eats the SI information and invalidates related syscache\nand relcache for the backend(this seems right). At this point,invali-\ndation info for the backend vanishes. Isn't it right ?\n\n> the messages at CommandCounterIncrement; it doesn't matter whether the\n> other backends later commit or abort their own transactions. I think.\n> Do you have a counterexample?\n>\n> > 2. The tuples which should be invalidated in case of abort are different\n> > from ones in case of commit.\n> > In case of commit,deleting old tuples should be invalidated for all\n> > backends.\n> > In case of abort,insert(updat)ing new tuples should be invalidated\n> > for the insert(updat)ing backend.\n>\n> I wonder whether it wouldn't be cleaner to identify the target tuples\n> by OID instead of ItemPointer. That way would work for both new and\n> update tuples...\n>\n\nThis may be a better way because the cache entry which should be\ninvalidated are invalidated.\nHowever,we may invalidate still valid cache entry by OID(it's not so\nharmful). Even time qualification is useless in this case.\n\n> > Currently heap_insert() calls RelationInvalidateHeapTuple() for a\n> > inserting new tuple but heap_replace() doesn't call RelationInvalid-\n> > ateHeapTuple() for a updating new tuple. I don't understand which\n> > is right.\n>\n> Hmm. Invalidating the old tuple is the right thing for heap_replace in\n> terms of sending a message to other backends at commit; it's the old\n> tuple that they might have cached and need to get rid of. But for\n> getting rid of this backend's uncommitted new tuple in case of abort,\n> it's not the right thing. OTOH, your change to add a time qual check\n> to SearchSysCache would fix that, wouldn't it?\n\nProbably. Because time qualification is applied for uncommitted tuples.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Tue, 21 Sep 1999 18:58:54 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] couldn't rollback cache ? "
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n>> I think this is OK. The sending backend does not send the SI message\n>> in the first place until it has committed. Other backends can read\n\n> Doesn't the sending backend send the SI message when Command-\n> CounterIncrement() is executed ?\n> AtCommit_Cache() is called not only from CommitTransaction() but\n> also from CommandCounterIncrement().\n\nOooh, you are right. I think that is a bug. We should postpone\nsending SI messages until commit. Also, if I recall correctly,\nCommandCounterIncrement() still gets called after we have decided to\nabort the current transaction (while we are eating commands looking for\nEND or ABORT). It shouldn't do anything to the pending-SI list then\neither.\n\n> AtCommit_Cache() in CommandCounterIncrement() eats local\n> invalidation messages and register SI information (this seems too\n> early for other backends though it's not so harmful). Then AtAtart_\n> Cache() eats the SI information and invalidates related syscache\n> and relcache for the backend(this seems right). At this point,invali-\n> dation info for the backend vanishes. Isn't it right ?\n\nI think it is OK for AtStart_Cache to read *incoming* SI messages,\nif those relate to transactions that other backends have committed.\nBut we should sit on our own list of pending outgoing messages until\nwe know we are committing (or aborting).\n\n>> I wonder whether it wouldn't be cleaner to identify the target tuples\n>> by OID instead of ItemPointer. That way would work for both new and\n>> update tuples...\n\n> This may be a better way because the cache entry which should be\n> invalidated are invalidated.\n> However,we may invalidate still valid cache entry by OID(it's not so\n> harmful). Even time qualification is useless in this case.\n\nDoesn't bother me --- we'll just re-read it. We'd have to do some work\nin that case anyway to verify whether we have the correct copy of the\ntuple.\n\n>> OTOH, your change to add a time qual check\n>> to SearchSysCache would fix that, wouldn't it?\n\n> Probably. Because time qualification is applied for uncommitted tuples.\n\nOne thing we need to think about here: as it stands, the syscache will\nonly store a single copy of any particular tuple (maybe I should say\n\"of a particular OID\"). But there might be several copies of that tuple\nwith different t_min/t_max in the database. With your change to check\ntime qual, as soon as we realize that the copy we have no longer\nSatisfiesNow(), we'll go look for a new copy. And we'll go look\nfor a new copy after receiving a SI message indicating someone else has\ncommitted an update. The question is, are there any *other* times where\nwe need to look for a new copy? I think we are OK if we change SI\nmessage sending to only send after commit, but I'm not sure about it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 21 Sep 1999 10:43:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] couldn't rollback cache ? "
},
{
"msg_contents": "> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> >> I think this is OK. The sending backend does not send the SI message\n> >> in the first place until it has committed. Other backends can read\n> \n> > Doesn't the sending backend send the SI message when Command-\n> > CounterIncrement() is executed ?\n> > AtCommit_Cache() is called not only from CommitTransaction() but\n> > also from CommandCounterIncrement().\n> \n> Oooh, you are right. I think that is a bug. We should postpone\n> sending SI messages until commit. Also, if I recall correctly,\n> CommandCounterIncrement() still gets called after we have decided to\n> abort the current transaction (while we are eating commands looking for\n> END or ABORT). It shouldn't do anything to the pending-SI list then\n> either.\n> \n> > AtCommit_Cache() in CommandCounterIncrement() eats local\n> > invalidation messages and register SI information (this seems too\n> > early for other backends though it's not so harmful). Then AtAtart_\n> > Cache() eats the SI information and invalidates related syscache\n> > and relcache for the backend(this seems right). At this point,invali-\n> > dation info for the backend vanishes. Isn't it right ?\n> \n> I think it is OK for AtStart_Cache to read *incoming* SI messages,\n> if those relate to transactions that other backends have committed.\n> But we should sit on our own list of pending outgoing messages until\n> we know we are committing (or aborting).\n>\n\nWhat about delet(updat)ing backend itself ?\nShouldn't the backend invalidate delet(updat)ing old tuples ?\n \n> >> I wonder whether it wouldn't be cleaner to identify the target tuples\n> >> by OID instead of ItemPointer. That way would work for both new and\n> >> update tuples...\n>\n\n[snip]\n \n> One thing we need to think about here: as it stands, the syscache will\n> only store a single copy of any particular tuple (maybe I should say\n> \"of a particular OID\"). But there might be several copies of that tuple\n> with different t_min/t_max in the database. With your change to check\n> time qual, as soon as we realize that the copy we have no longer\n> SatisfiesNow(), we'll go look for a new copy. And we'll go look\n> for a new copy after receiving a SI message indicating someone else has\n> committed an update. The question is, are there any *other* times where\n> we need to look for a new copy? I think we are OK if we change SI\n> message sending to only send after commit, but I'm not sure about it.\n>\n\nWhat kind of tuples are read into system catalog cache ?\nAnd what should we do to invalidate the tuples just in time ?\nAs far as I see,\n\n1. HEAP_XMIN_INVALID\n i.e the tuple wasn't inserted \n No backend regards this tuple as valid. \n\n2. HEAP_XMIN_??????? && HEAP_XMAX_INVALID\n i.e the tuple is being inserted now. \n Only inserting backend regards this tuple as valid.\n Time qualification check which my patch does would \n work in this case.\n Otherwise SI message should be sent to the backend \n when insertion is rollbacked.\n\n3. HEAP_XMIN_??????? && HEAP_XMAX_???????\n i.e the tuple is being deleted after insertion in a transaction\n now. \n No backend regards this tuple as valid.\n\n4. 
HEAP_XMIN_COMMITTED && HEAP_XMAX_INVALID\n i.e the tuple is inserted,not deleted and not being deleted.\n HeapTupleSatisifies..() doesn't take effect.\n SI message should be sent to all backends immediately\n after the tuple was deleted.\n SI message should be sent to a backend immediately after\n the backend marked the tuple as being deleted.\n\n5. HEAP_XMIN_COMMITTED && HEAP_XMAX_???????\n i.e the tuple is being deleted now. \n Deleting backend doesn't regard this tuple as valid.\n If SI messages are postponed to send for other backends\n until commit,the tuple is invalidated correctly.\n Otherwise a check as my patch does would be necessary.\n\n6. HEAP_XMAX_COMMITTED\n i.e the tuple is deleted\n SnapshotNow never regard this tuple as valid.\n\nRegards.\n \nHiroshi Inoue\[email protected] \n",
"msg_date": "Wed, 22 Sep 1999 19:12:00 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] couldn't rollback cache ? "
}
] |
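One concrete reading of the "identify the target tuples by OID instead of ItemPointer" idea from this thread, as a simplified, self-contained C sketch. All names here are hypothetical; the real syscache and SI structures differ.

    #include <stdbool.h>

    typedef unsigned int Oid;

    /* Hypothetical, simplified syscache entry chain keyed by the
     * tuple's OID instead of its ItemPointer. */
    typedef struct CatCacheEntry
    {
        Oid                   tupleOid;  /* shared by all tuple versions */
        bool                  dead;
        struct CatCacheEntry *next;
    } CatCacheEntry;

    /* Mark every cached entry bearing this OID dead, whether it holds
     * the old version (invalidated when a delete/update commits) or a
     * new, uncommitted version (invalidated on abort).  Killing a
     * still-valid copy is harmless: the next lookup re-reads the tuple
     * and re-checks visibility, per the time-qual patch above. */
    static void
    CatcacheOidInvalidate(CatCacheEntry *chain, Oid tupleOid)
    {
        for (; chain != NULL; chain = chain->next)
            if (chain->tupleOid == tupleOid)
                chain->dead = true;
    }

As Tom notes in the thread, over-invalidating by OID costs at most an extra re-read of a still-valid tuple, which is why the same message shape can serve both the commit and the abort case.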
[
{
"msg_contents": "\nAdded to TODO. This will improve VACUUM ANALYZE performance, thought I\ndon't think we have btree comparison functions for all data types,\nthough we should:\n\n* change VACUUM ANALYZE to use btree comparison functions, not <,=,> calls\n\n> > > Also, I have idea about using '<' '>' in vacuum:\n> > > what if try to use btree BT_ORDER functions which allow\n> > > to compare vals for many data types (btXXXcmp functions in\n> > > nbtcompare.c).\n> > \n> > I see, use a btree index to tell use how selective the > or < is? An\n> > interesting idea. Isn't there a significant performance problem with\n> > this?\n> \n> Don't use btree index, but use btree functions to compare\n> two values of a datatype. You call\n> \tfunc_operator = oper(\"<\",...\n> \"=\"\n> \">\"\n> but this's not right way in common case: operators may be \n> overloaded. \n> \n> These functions are stored in catalog.\n> To get function for a datatype btree call\n> \n> proc = index_getprocid(rel, 1, BTORDER_PROC);\n> \n> Look @ nbtcompare.c:\n> \n> * These functions are stored in pg_amproc. For each operator class\n> * defined on btrees, they compute\n> *\n> * compare(a, b):\n> * < 0 if a < b,\n> * = 0 if a == b,\n> * > 0 if a > b.\n> \n> There are functions for INTs, FLOATs, ...\n> \n> ...But this is not so important thing...\n> \n> Vadim\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 16 Sep 1999 21:49:31 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: attdisbursion"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Added to TODO. This will improve VACUUM ANALYZE performance, thought I\n> don't think we have btree comparison functions for all data types,\n> though we should:\n\n> * change VACUUM ANALYZE to use btree comparison functions, not <,=,> calls\n\nThere are several places that know more than they should about the\nmeaning of \"<\" etc operators. For example, the parser assumes it\nshould use \"<\" and \">\" to implement ORDER BY [DESC]. Making VACUUM\nnot depend on specific names for the ordering operators will not\nimprove life unless we fix *all* of these places.\n\nRather than depending on btree to tell us which way is up, maybe the\npg_type row for a type ought to specify the standard ordering operators\nfor the type directly.\n\nWhile we are at it we could think about saying that there is just one\n\"standard ordering operator\" for a type and it yields a strcmp-like\nresult (minus, zero, plus) rather than several ops yielding booleans.\nBut that'd take a lot of changes in btree and everywhere else...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 17 Sep 1999 10:58:49 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: attdisbursion "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Added to TODO. This will improve VACUUM ANALYZE performance, thought I\n> > don't think we have btree comparison functions for all data types,\n> > though we should:\n> \n> > * change VACUUM ANALYZE to use btree comparison functions, not <,=,> calls\n> \n> There are several places that know more than they should about the\n> meaning of \"<\" etc operators. For example, the parser assumes it\n> should use \"<\" and \">\" to implement ORDER BY [DESC]. Making VACUUM\n> not depend on specific names for the ordering operators will not\n> improve life unless we fix *all* of these places.\n\nActually, I thought it would be good for performance reasons, not for\nportability. We would call one function per attribute instead of three.\n\n> \n> Rather than depending on btree to tell us which way is up, maybe the\n> pg_type row for a type ought to specify the standard ordering operators\n> for the type directly.\n> \n> While we are at it we could think about saying that there is just one\n> \"standard ordering operator\" for a type and it yields a strcmp-like\n> result (minus, zero, plus) rather than several ops yielding booleans.\n> But that'd take a lot of changes in btree and everywhere else...\n> \n\nThe btree comparison functions do just that, returning -1,0,1 like\nstrcmp, for each type btree supports.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 17 Sep 1999 11:47:59 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: attdisbursiont"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>>>> * change VACUUM ANALYZE to use btree comparison functions, not <,=,> calls\n>> \n>> There are several places that know more than they should about the\n>> meaning of \"<\" etc operators. For example, the parser assumes it\n>> should use \"<\" and \">\" to implement ORDER BY [DESC]. Making VACUUM\n>> not depend on specific names for the ordering operators will not\n>> improve life unless we fix *all* of these places.\n\n> Actually, I thought it would be good for performance reasons, not for\n> portability. We would call one function per attribute instead of three.\n\nNo such luck: what VACUUM wants to do is figure out whether the current\nvalue is less than the min-so-far (one \"<\" call), greater than the\nmax-so-far (one \">\") call, and/or equal to the candidate most-frequent\nvalues it has (one \"=\" call apiece). Same number of function calls if\nit's using a \"compare\" function.\n\nI suppose you'd save a little time by only looking up one operator\nfunction per column instead of three, but it's hard to think that'd\nbe measurable let alone significant. There's not going to be any\nper-tuple savings.\n\n>> While we are at it we could think about saying that there is just one\n>> \"standard ordering operator\" for a type and it yields a strcmp-like\n>> result (minus, zero, plus) rather than several ops yielding booleans.\n>> But that'd take a lot of changes in btree and everywhere else...\n\n> The btree comparison functions do just that, returning -1,0,1 like\n> strcmp, for each type btree supports.\n\nRight, and that's useful for btree because it saves compares, but it\ndoesn't really help VACUUM noticeably.\n\nAfter writing the above quote, I realized that you can't really define\na type's ordering just in terms of a strcmp-like operator with no other\nbaggage. That might be enough for building a btree index, but in order\nto *do* anything with the index, the optimizer and executor have to\nunderstand the relationship of the index ordering to the things that\na user would write in a query, such as \"WHERE A >= 12 AND A < 100\"\nor \"ORDER BY column USING >\". So there has to be information relating\nthese user-available operators to the type's ordering, as well.\n(We do have that, in the form of the pg_amop table entries. The point\nis that you can't get away with much less information than is contained\nin pg_amop.)\n\nAs far as I can see, the only thing that's really at stake here is not\nhardwiring the semantics of the operator names \"<\", \"=\", \">\" into the\nsystem. While that'd be nice from a cleanliness/data-type-independence\npoint of view, it's not clear that it has any real practical\nsignificance. Any data type designer who didn't make \"=\" mean equals\nought to be shot anyway ;-). So upon second thought I think I'd put\nthis *way* down the to-do list...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 17 Sep 1999 17:44:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: attdisbursiont "
},
{
"msg_contents": "> As far as I can see, the only thing that's really at stake here is not\n> hardwiring the semantics of the operator names \"<\", \"=\", \">\" into the\n> system. While that'd be nice from a cleanliness/data-type-independence\n> point of view, it's not clear that it has any real practical\n> significance. Any data type designer who didn't make \"=\" mean equals\n> ought to be shot anyway ;-). So upon second thought I think I'd put\n> this *way* down the to-do list...\n\nRemoved from TODO list.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 17 Sep 1999 21:34:11 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: attdisbursiont"
}
] |
[
{
"msg_contents": "1. Why did the new pgaccess get installed into REL6_5 branch but not\nmain development branch?\n\n2. New pgaccess no longer has a Makefile in src/bin/pgaccess, which\nis a problem because src/bin/Makefile tries to run a sub-make in that\ndirectory when configured --with-tcl. Lack of the sub-Makefile looks\nbogus to me; it may not need to do anything for \"make all\" but it sure\nought to do something for \"make install\", no?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 16 Sep 1999 21:53:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgaccess seems a tad confused"
},
{
"msg_contents": "> 1. Why did the new pgaccess get installed into REL6_5 branch but not\n> main development branch?\n> \n\nNo sense putting in development because there will probably be a newer\nversion by the time 6.6 is released, no?\n\n> 2. New pgaccess no longer has a Makefile in src/bin/pgaccess, which\n> is a problem because src/bin/Makefile tries to run a sub-make in that\n> directory when configured --with-tcl. Lack of the sub-Makefile looks\n> bogus to me; it may not need to do anything for \"make all\" but it sure\n> ought to do something for \"make install\", no?\n\nYes. I was not in favor of adding new pgaccess in 6.5.2, but was\nout-voted.\n\nI have re-added the Makefile that appeared in the development tree.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 17 Sep 1999 00:11:46 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pgaccess seems a tad confused"
},
{
"msg_contents": " > 1. Why did the new pgaccess get installed into REL6_5 branch but not\n > main development branch?\n > \n\n No sense putting in development because there will probably be a newer\n version by the time 6.6 is released, no?\n\nYes there will be, but it seems to serve two purposes. One is that\nanyone working with the development tree at least has some version of\npgaccess handy. More importantly, though, any bugs in the Makefile or\nwhatever will be noticed early by developers and won't wait until the\nlast moment. Since the basic installation scheme likely doesn't\ndepend much on the exact pgaccess release, there really isn't much to\nbe lost by keeping it in the development tree.\n\nCheers,\nBrook\n",
"msg_date": "Fri, 17 Sep 1999 09:48:03 -0600 (MDT)",
"msg_from": "Brook Milligan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pgaccess seems a tad confused"
},
{
"msg_contents": "> > 1. Why did the new pgaccess get installed into REL6_5 branch but not\n> > main development branch?\n> > \n> \n> No sense putting in development because there will probably be a newer\n> version by the time 6.6 is released, no?\n> \n> Yes there will be, but it seems to serve two purposes. One is that\n> anyone working with the development tree at least has some version of\n> pgaccess handy. More importantly, though, any bugs in the Makefile or\n> whatever will be noticed early by developers and won't wait until the\n> last moment. Since the basic installation scheme likely doesn't\n> depend much on the exact pgaccess release, there really isn't much to\n> be lost by keeping it in the development tree.\n\nIt is in the development tree, just not the most recent version.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 17 Sep 1999 12:44:17 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pgaccess seems a tad confused"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> More importantly, though, any bugs in the Makefile or\n>> whatever will be noticed early by developers and won't wait until the\n>> last moment. Since the basic installation scheme likely doesn't\n>> depend much on the exact pgaccess release, there really isn't much to\n>> be lost by keeping it in the development tree.\n\n> It is in the development tree, just not the most recent version.\n\nBut the point is, if the most recent version had been in the development\ntree, we'd have had a better shot at noticing that its makefile was\nmissing...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 17 Sep 1999 17:47:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] pgaccess seems a tad confused "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> More importantly, though, any bugs in the Makefile or\n> >> whatever will be noticed early by developers and won't wait until the\n> >> last moment. Since the basic installation scheme likely doesn't\n> >> depend much on the exact pgaccess release, there really isn't much to\n> >> be lost by keeping it in the development tree.\n> \n> > It is in the development tree, just not the most recent version.\n> \n> But the point is, if the most recent version had been in the development\n> tree, we'd have had a better shot at noticing that its makefile was\n> missing...\n> \n\nI guess. The new pgaccess release was so different than the current\none, I just cvs removed all files, and readded everything. That is how\nMakefile go lost.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 17 Sep 1999 21:35:00 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pgaccess seems a tad confused]"
},
{
"msg_contents": "On Fri, 17 Sep 1999, Bruce Momjian wrote:\n\n> > Bruce Momjian <[email protected]> writes:\n> > >> More importantly, though, any bugs in the Makefile or\n> > >> whatever will be noticed early by developers and won't wait until the\n> > >> last moment. Since the basic installation scheme likely doesn't\n> > >> depend much on the exact pgaccess release, there really isn't much to\n> > >> be lost by keeping it in the development tree.\n> > \n> > > It is in the development tree, just not the most recent version.\n> > \n> > But the point is, if the most recent version had been in the development\n> > tree, we'd have had a better shot at noticing that its makefile was\n> > missing...\n> > \n> \n> I guess. The new pgaccess release was so different than the current\n> one, I just cvs removed all files, and readded everything. That is how\n> Makefile go lost.\n\nEwwww...so, like, we lost the 'history' of the files that only changed vs\nwere new? *raised eyebrow*\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 20 Sep 1999 18:32:50 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pgaccess seems a tad confused]"
},
{
"msg_contents": "> > I guess. The new pgaccess release was so different than the current\n> > one, I just cvs removed all files, and readded everything. That is how\n> > Makefile go lost.\n> \n> Ewwww...so, like, we lost the 'history' of the files that only changed vs\n> were new? *raised eyebrow*\n\nYes, thought the pgaccess file was kept I think. I just checked, and\nsomehow the pgaccess files are not in the stable tree anymore, just the\ndirectories.\n\nI got the final version <24 hours from release. It was in my tree, but\nnow it isn't, and it isn't in 6.5.2 either.\n\nI asked for the author to verify my work. I am adding it to the tree\nnow. What do we do?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 20 Sep 1999 18:06:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pgaccess seems a tad confused]"
},
{
"msg_contents": "On Mon, 20 Sep 1999, Bruce Momjian wrote:\n\n> > > I guess. The new pgaccess release was so different than the current\n> > > one, I just cvs removed all files, and readded everything. That is how\n> > > Makefile go lost.\n> > \n> > Ewwww...so, like, we lost the 'history' of the files that only changed vs\n> > were new? *raised eyebrow*\n> \n> Yes, thought the pgaccess file was kept I think. I just checked, and\n> somehow the pgaccess files are not in the stable tree anymore, just the\n> directories.\n> \n> I got the final version <24 hours from release. It was in my tree, but\n> now it isn't, and it isn't in 6.5.2 either.\n> \n> I asked for the author to verify my work. I am adding it to the tree\n> now. What do we do?\n\nOkay, am very confused here...just did:\n\ncvs checkout -rREL6_5_PATCHES -P pgsql/src/bin/pgaccess\n\nit extracted 243 files...\n\n> cvs status Makefile\n===================================================================\nFile: Makefile Status: Up-to-date\n\n Working revision: 1.1.4.2\n Repository revision: 1.1.4.2 /usr/local/cvsroot/pgsql/src/bin/pgaccess/Makefile,v\n Sticky Tag: REL6_5_PATCHES (branch: 1.1.4)\n Sticky Date: (none)\n Sticky Options: (none)\n\nI just performed the same on the same on the non-'-r' tree, and now see\nwhat you mean :(\n\nI should be able to fix this momentarily...I hope...\n\nIts all a learning less...assuming you know what you've learnt? :)\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n\n",
"msg_date": "Mon, 20 Sep 1999 19:29:49 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pgaccess seems a tad confused]"
},
{
"msg_contents": "I am totally confused, but have added the missing files.\n\n\n> On Mon, 20 Sep 1999, Bruce Momjian wrote:\n> \n> > > > I guess. The new pgaccess release was so different than the current\n> > > > one, I just cvs removed all files, and readded everything. That is how\n> > > > Makefile go lost.\n> > > \n> > > Ewwww...so, like, we lost the 'history' of the files that only changed vs\n> > > were new? *raised eyebrow*\n> > \n> > Yes, thought the pgaccess file was kept I think. I just checked, and\n> > somehow the pgaccess files are not in the stable tree anymore, just the\n> > directories.\n> > \n> > I got the final version <24 hours from release. It was in my tree, but\n> > now it isn't, and it isn't in 6.5.2 either.\n> > \n> > I asked for the author to verify my work. I am adding it to the tree\n> > now. What do we do?\n> \n> Okay, am very confused here...just did:\n> \n> cvs checkout -rREL6_5_PATCHES -P pgsql/src/bin/pgaccess\n> \n> it extracted 243 files...\n> \n> > cvs status Makefile\n> ===================================================================\n> File: Makefile Status: Up-to-date\n> \n> Working revision: 1.1.4.2\n> Repository revision: 1.1.4.2 /usr/local/cvsroot/pgsql/src/bin/pgaccess/Makefile,v\n> Sticky Tag: REL6_5_PATCHES (branch: 1.1.4)\n> Sticky Date: (none)\n> Sticky Options: (none)\n> \n> I just performed the same on the same on the non-'-r' tree, and now see\n> what you mean :(\n> \n> I should be able to fix this momentarily...I hope...\n> \n> Its all a learning less...assuming you know what you've learnt? :)\n> \n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 20 Sep 1999 18:35:08 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pgaccess seems a tad confused]"
},
{
"msg_contents": "On Mon, 20 Sep 1999, Bruce Momjian wrote:\n> I got the final version <24 hours from release. It was in my tree, but\n> now it isn't, and it isn't in 6.5.2 either.\n> \n> I asked for the author to verify my work. I am adding it to the tree\n> now. What do we do?\n\nEither make a 6.5.3, inline 6.5.2 (6.5.2a, anyone??) or leave 6.5.2 as is. \nNone is ideal -- although a 6.5.3 is better than a badly broken 6.5.2. The\nshort term solution is for those using 6.5.2 to download the pgaccess-0.98\ntarball from flex.ro.\n\nI ran across the depopulated pgaccess tree this morning while starting the\nbuild cycle for the 6.5.2 rpms -- good thing I have already dealt with that\nissue with previous packages. For the RPM's, it has been practice for some time\nto include the very latest pgaccess as a separate tarball, then untarring it\nover top of the one in the main tarball during the package build. I was hoping\nto get away from that. ;-(\n-----------------------------------------------------------------------------\nLamar Owen \nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Mon, 20 Sep 1999 18:50:12 -0400",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pgaccess seems a tad confused]"
},
{
"msg_contents": "> On Mon, 20 Sep 1999, Bruce Momjian wrote:\n> > I got the final version <24 hours from release. It was in my tree, but\n> > now it isn't, and it isn't in 6.5.2 either.\n> > \n> > I asked for the author to verify my work. I am adding it to the tree\n> > now. What do we do?\n> \n> Either make a 6.5.3, inline 6.5.2 (6.5.2a, anyone??) or leave 6.5.2 as is. \n> None is ideal -- although a 6.5.3 is better than a badly broken 6.5.2. The\n> short term solution is for those using 6.5.2 to download the pgaccess-0.98\n> tarball from flex.ro.\n> \n> I ran across the depopulated pgaccess tree this morning while starting the\n> build cycle for the 6.5.2 rpms -- good thing I have already dealt with that\n> issue with previous packages. For the RPM's, it has been practice for some time\n> to include the very latest pgaccess as a separate tarball, then untarring it\n> over top of the one in the main tarball during the package build. I was hoping\n> to get away from that. ;-(\n\nYes, I have created a bad situation. pgaccess it very important for pgsql.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 20 Sep 1999 19:03:09 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pgaccess seems a tad confused]"
},
{
"msg_contents": "On Mon, 20 Sep 1999, Bruce Momjian wrote:\n> Lamar Owen wrote:\n> > I ran across the depopulated pgaccess tree this morning while starting the\n> > build cycle for the 6.5.2 rpms -- good thing I have already dealt with that\n> > issue with previous packages. For the RPM's, it has been practice for some time\n> > to include the very latest pgaccess as a separate tarball, then untarring it\n> > over top of the one in the main tarball during the package build. I was hoping\n> > to get away from that. ;-(\n> \n> Yes, I have created a bad situation. pgaccess it very important for pgsql.\n\nI wouldn't have even noticed had I not remembered that pgaccess-0.98 was one of\nthe enhancements in 6.5.2. I was looking to rid the RPM's of the extra tarball\nof pgaccess. Had I not noticed, I would have blissfully kept the pgaccess-0.98\ntarball in the RPM, and not gone rabbit-hunting. As it stands, the\npgacess-0.98 tarball is kept in the 6.5.2 RPM, just not blissfully. ;-)\n\nDon't punish yourself too hard -- an honest (if avoidable) mistake.\n\n-----------------------------------------------------------------------------\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Mon, 20 Sep 1999 20:40:31 -0400",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pgaccess seems a tad confused]"
}
] |
[
{
"msg_contents": "I may have missed some discussions, but I want to know if there are any\nplans on implementing something like the hierarchical query that Oracle\nhas?\n\nYou know, this would greatly simplify the task of writing the accounting\nsystem I'm planning :-)\n\n",
"msg_date": "Fri, 17 Sep 1999 08:54:43 +0200",
"msg_from": "Kaare Rasmussen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Hierarchical query?"
},
{
"msg_contents": "Kaare Rasmussen wrote:\n> \n> I may have missed some discussions, but I want to know if there are any\n> plans on implementing something like the hierarchical query that Oracle\n> has?\n\nIt should be in the TODO under some weird name derived from SQL3 docs ;)\n\n> You know, this would greatly simplify the task of writing the accounting\n> system I'm planning :-)\n\nOne more general approach would be to enable functions to return rowsets\nlike ordinary selects do, then it would be easy to write a function for\nthe above.\n\nCurrently it can be implemented using a function and temp tables, but it\ngets a bit convoluted .\n\n-------\nHannu\n",
"msg_date": "Fri, 17 Sep 1999 13:52:33 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Hierarchical query?"
}
] |
[
{
"msg_contents": "\n---------- Forwarded message ----------\nDate: Fri, 17 Sep 1999 17:52:44 +0400 (MSD)\nFrom: Artem Chuprina <[email protected]>\n\nran=> create table test_source (src text);\nCREATE\nran=> insert into test_source values('First distinct');\nINSERT 235913 1\nran=> insert into test_source values('First distinct');\nINSERT 235914 1\nran=> insert into test_source values('Second distinct');\nINSERT 235915 1\nran=> insert into test_source values('Second distinct');\nINSERT 235916 1\nran=> select src from test_source;\nsrc \n---------------\nFirst distinct \nFirst distinct \nSecond distinct\nSecond distinct\n(4 rows)\n\nran=> select distinct src from test_source;\nsrc \n---------------\nFirst distinct \nSecond distinct\n(2 rows)\n\nran=> create sequence seq_test;\nCREATE\nran=> create table test1 (n int default nextval('seq_test'), t text);\nCREATE\nran=> create table test2 (n int, t text);\nCREATE\nran=> insert into test2 (\"t\") select distinct src from test_source;\nINSERT 0 2\nran=> insert into test1 (\"t\") select distinct src from test_source;\nINSERT 0 4\n \nLook here^\n\nran=> select * from test2;\nn|t \n-+---------------\n |First distinct \n |Second distinct\n(2 rows)\n\nran=> select * from test1;\nn|t \n-+---------------\n1|First distinct \n2|First distinct \n3|Second distinct\n4|Second distinct\n(4 rows)\n\nPostgreSQL 6.4.2, PostgreSQL 6.5.1.\n\n-- \nArtem Chuprina E-mail: [email protected]\nNetwork Administrator FIDO: 2:5020/371.32\nPIRIT Corp. Phone: +7(095) 115-7101\n\n\n",
"msg_date": "Fri, 17 Sep 1999 14:38:45 +0400 (MSD)",
"msg_from": "Oleg Broytmann <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bug"
},
{
"msg_contents": "Oleg Broytmann <[email protected]> writes:\n> ran=> create table test1 (n int default nextval('seq_test'), t text);\n> ran=> insert into test1 (\"t\") select distinct src from test_source;\n> [ doesn't work right ]\n\nMy, that's an interesting case. I think that fits right in with my\nremark yesterday that the SELECT inside an INSERT ... SELECT needs\nto have a targetlist that's separate from the INSERT's list. As it\nstands, we form a targetlist representing the set of values that need\nto be inserted into the target table --- and then the DISTINCT pass\nruns on those tuples :-(, because there is nothing else for it to\nrun on.\n\nIn short, this is not a trivial thing to fix. We need multilevel\nquery trees...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 17 Sep 1999 10:18:20 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug "
},
{
"msg_contents": "> Oleg Broytmann <[email protected]> writes:\n> > ran=> create table test1 (n int default nextval('seq_test'), t text);\n> > ran=> insert into test1 (\"t\") select distinct src from test_source;\n> > [ doesn't work right ]\n> \n> My, that's an interesting case. I think that fits right in with my\n> remark yesterday that the SELECT inside an INSERT ... SELECT needs\n> to have a targetlist that's separate from the INSERT's list. As it\n> stands, we form a targetlist representing the set of values that need\n> to be inserted into the target table --- and then the DISTINCT pass\n> runs on those tuples :-(, because there is nothing else for it to\n> run on.\n> \n> In short, this is not a trivial thing to fix. We need multilevel\n> query trees...\n\nAdded to TODO:\n\n\t* Allow multi-level query trees for INSERT INTO ... SELECT\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 17 Sep 1999 11:43:17 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug"
},
{
"msg_contents": "\nIt appears this bug still exists.\n\n> \n> ---------- Forwarded message ----------\n> Date: Fri, 17 Sep 1999 17:52:44 +0400 (MSD)\n> From: Artem Chuprina <[email protected]>\n> \n> ran=> create table test_source (src text);\n> CREATE\n> ran=> insert into test_source values('First distinct');\n> INSERT 235913 1\n> ran=> insert into test_source values('First distinct');\n> INSERT 235914 1\n> ran=> insert into test_source values('Second distinct');\n> INSERT 235915 1\n> ran=> insert into test_source values('Second distinct');\n> INSERT 235916 1\n> ran=> select src from test_source;\n> src \n> ---------------\n> First distinct \n> First distinct \n> Second distinct\n> Second distinct\n> (4 rows)\n> \n> ran=> select distinct src from test_source;\n> src \n> ---------------\n> First distinct \n> Second distinct\n> (2 rows)\n> \n> ran=> create sequence seq_test;\n> CREATE\n> ran=> create table test1 (n int default nextval('seq_test'), t text);\n> CREATE\n> ran=> create table test2 (n int, t text);\n> CREATE\n> ran=> insert into test2 (\"t\") select distinct src from test_source;\n> INSERT 0 2\n> ran=> insert into test1 (\"t\") select distinct src from test_source;\n> INSERT 0 4\n> \n> Look here^\n> \n> ran=> select * from test2;\n> n|t \n> -+---------------\n> |First distinct \n> |Second distinct\n> (2 rows)\n> \n> ran=> select * from test1;\n> n|t \n> -+---------------\n> 1|First distinct \n> 2|First distinct \n> 3|Second distinct\n> 4|Second distinct\n> (4 rows)\n> \n> PostgreSQL 6.4.2, PostgreSQL 6.5.1.\n> \n> -- \n> Artem Chuprina E-mail: [email protected]\n> Network Administrator FIDO: 2:5020/371.32\n> PIRIT Corp. Phone: +7(095) 115-7101\n> \n> \n> \n> ************\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 29 Nov 1999 18:12:36 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> It appears this bug still exists.\n\nYes. I think this cannot be fixed without having a two-level querytree\nstructure for INSERT ... SELECT. The problem is basically that the\nDISTINCT processing is happening on the tuples that are ready to put\ninto the target table (after the 'n' column is added), rather than on\nthe tuples that are coming out of the source table. With only one\ntargetlist there is no way to represent the notion that the DISTINCT\nneeds to happen on just the 't' column.\n\nThis is one of a large number of things waiting for a redesign of\nquerytrees...\n\n\t\t\tregards, tom lane\n\n>> ran=> create table test1 (n int default nextval('seq_test'), t text);\n>>\n>> ran=> insert into test1 (\"t\") select distinct src from test_source;\n>> \n>> ran=> select * from test1;\n>> n|t \n>> -+---------------\n>> 1|First distinct \n>> 2|First distinct \n>> 3|Second distinct\n>> 4|Second distinct\n>> (4 rows)\n",
"msg_date": "Mon, 29 Nov 1999 21:28:49 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug "
},
{
"msg_contents": "A good while back, Oleg Broytmann <[email protected]> wrote:\n>> ran=> create table test1 (n int default nextval('seq_test'), t text);\n>> ran=> insert into test1 (\"t\") select distinct src from test_source;\n>> [ doesn't work right ]\n\n> My, that's an interesting case. I think that fits right in with my\n> remark yesterday that the SELECT inside an INSERT ... SELECT needs\n> to have a targetlist that's separate from the INSERT's list. As it\n> stands, we form a targetlist representing the set of values that need\n> to be inserted into the target table --- and then the DISTINCT pass\n> runs on those tuples :-(, because there is nothing else for it to\n> run on.\n\nFYI, this now works in current sources.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 06 Oct 2000 01:57:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bug "
}
] |
[
{
"msg_contents": "Ok, the Linux/Alpha patches for Postgresql 6.5.2 are attached to\nthis email (quite small after gzip). The procedure for use is the same as\nfor the 6.5.1 patches. Grab the 6.5.2 tarball, untar/ungzip it, then from\nthe top level directory of the source tree, run 'gzip -dc\n/path/to/patch/postgresql-6.5.2-alpah.patch | patch -p1', and it should\napply cleanly. Then just compile, install, and run as usual.\n\tThe only regression tests that are failing are the sames ones as\nfor 6.5.1, geometry (off by one in nth decimal place) and rules (with\nno-predefined sorting method, alpha's default sort is different than the\nrest). \n\tThe patches are also on my web site, though notice the new URL,\nhttp://www.rkirkpat.net/software/ to go directly to them. Also, if someone\ncould update my email address in the documentation (under Linux/Alpha\nperson, or where ever it shows up) to '[email protected]' (from\[email protected]) that would be great! Thanks.\n\tIf you have any trouble with this patch, feel free to email me,\nbut please try and provide as much detail as possible as what you were\ndoing and what happened. The more detail you give me, the quicker I can\nsolve the problem. TTYL.\n\n---------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n---------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | http://www.rkirkpat.net/ |\n---------------------------------------------------------------------------",
"msg_date": "Sat, 18 Sep 1999 10:32:48 -0500 (CDT)",
"msg_from": "Ryan Kirkpatrick <[email protected]>",
"msg_from_op": true,
"msg_subject": "Linux/Alpha patches for Postgresql 6.5.2"
}
] |
[
{
"msg_contents": "Ok, the Linux/Alpha patches for Postgresql 6.5.2 are attached to\nthis email (quite small after gzip). The procedure for use is the same as\nfor the 6.5.1 patches. Grab the 6.5.2 tarball, untar/ungzip it, then from\nthe top level directory of the source tree, run 'gzip -dc\n/path/to/patch/postgresql-6.5.2-alpah.patch | patch -p1', and it should\napply cleanly. Then just compile, install, and run as usual.\n\tThe only regression tests that are failing are the sames ones as\nfor 6.5.1, geometry (off by one in nth decimal place) and rules (with\nno-predefined sorting method, alpha's default sort is different than the\nrest). \n\tThe patches are also on my web site, though notice the new URL,\nhttp://www.rkirkpat.net/software/ to go directly to them. Also, if someone\ncould update my email address in the documentation (under Linux/Alpha\nperson, or where ever it shows up) to '[email protected]' (from\[email protected]) that would be great! Thanks.\n\tIf you have any trouble with this patch, feel free to email me,\nbut please try and provide as much detail as possible as what you were\ndoing and what happened. The more detail you give me, the quicker I can\nsolve the problem. TTYL.\n\n\tPS. Sorry for any duplicates of this message, but the lists did\nnot like my new email address until I had unsubscribed and resubscribed\nwith my new email address.\n\n---------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n---------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | http://www.rkirkpat.net/ |\n---------------------------------------------------------------------------",
"msg_date": "Sat, 18 Sep 1999 11:11:44 -0500 (CDT)",
"msg_from": "Ryan Kirkpatrick <[email protected]>",
"msg_from_op": true,
"msg_subject": "Linux/Alpha patches for Postgresql 6.5.2 "
},
{
"msg_contents": "Not its not. It is the 'cost' of the 'parsed' branch trees that are causing the final\ndisplay to be slightly different. If one were to change the size of an 'internal' 'C struct'\nto be the same size of an 'i386' 'C struct' then the results are the same ( or at least make\nthe sizeof() values the same ) then the results would be the same.\n\nas for the differences in the 'off by a small fraction' is due to i386 having 80bit float\noperations, and the alpha does 64bit float operations. ( Both under the same ieee mandate to\ndo the same algorithm)\ngat\n\n\n\n\nRyan Kirkpatrick wrote:\n\n> The only regression tests that are failing are the sames ones as\n> for 6.5.1, geometry (off by one in nth decimal place) and rules (with\n> no-predefined sorting method, alpha's default sort is different than the\n> rest).\n\n\n",
"msg_date": "Sun, 19 Sep 1999 06:34:10 -0400",
"msg_from": "Uncle George <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PORTS] Linux/Alpha patches for Postgresql 6.5.2"
},
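A toy C illustration of the point about intermediate precision (the values are hypothetical; what actually happens depends on the compiler and on whether intermediates stay in FPU registers):

    #include <stdio.h>

    int
    main(void)
    {
        /*
         * On i386 the FPU evaluates intermediates in 80-bit extended
         * precision by default, while alpha computes in the declared
         * 64-bit width throughout.  Both follow IEEE 754, yet the same
         * source can print a slightly different last digit on each.
         */
        double third = 1.0 / 3.0;
        printf("%.18f\n", third * 3.0 - 1.0);
        return 0;
    }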
{
"msg_contents": "On Sun, 19 Sep 1999, Uncle George wrote:\n\n> Not its not. It is the 'cost' of the 'parsed' branch trees that are\n> causing the final display to be slightly different. If one were to\n> change the size of an 'internal' 'C struct' to be the same size of an\n> 'i386' 'C struct' then the results are the same ( or at least make the\n> sizeof() values the same ) then the results would be the same.\n> \n> as for the differences in the 'off by a small fraction' is due to i386\n> having 80bit float operations, and the alpha does 64bit float\n> operations. ( Both under the same ieee mandate to do the same\n> algorithm) \n> gat\n\n\tThank you for those clarifications.\n\n---------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n---------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | http://www.rkirkpat.net/ |\n---------------------------------------------------------------------------\n\n",
"msg_date": "Mon, 20 Sep 1999 19:18:54 -0500 (CDT)",
"msg_from": "Ryan Kirkpatrick <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PORTS] Linux/Alpha patches for Postgresql 6.5.2"
}
] |
[
{
"msg_contents": ">From: Tatsuo Ishii <[email protected]>\n>\n>Following case statement is legal but fails in 6.5.1.\n>\n>drop table t1;\n>DROP\n>create table t1(i int);\n>CREATE\n>insert into t1 values(-1);\n>INSERT 4047465 1\n>insert into t1 values(0);\n>INSERT 4047466 1\n>insert into t1 values(1);\n>INSERT 4047467 1\n>\n>select i,\n> case\n> when i < 0 then 'minus'\n> when i = 0 then 'zero'\n> when i > 0 then 'plus'\n> else null\n> end\n>from t1;\n>ERROR: Unable to locate type oid 0 in catalog\n\nI'd kept this as an example of case usage and tried it on\nthe latest source, where it worked fine.\n\nThen I tried inserting a NULL into the table, which the\ncase statement then treated as 0 and not null.\n\ninsert into t1 values(null);\nINSERT 150412 1\nselect i,\n case\n when i < 0 then 'minus'\n when i = 0 then 'zero'\n when i > 0 then 'plus'\n else null\n end\nfrom t1;\n i|case\n--+-----\n-1|minus\n 0|zero\n 1|plus\n |zero\n(4 rows) \n\nWas this discussed?\n\nKeith.\n\n",
"msg_date": "Sat, 18 Sep 1999 17:39:46 +0100 (BST)",
"msg_from": "Keith Parks <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] case bug?"
},
{
"msg_contents": "Keith Parks <[email protected]> writes:\n> Then I tried inserting a NULL into the table, which the\n> case statement then treated as 0 and not null.\n\nThis is a bug: the test expressions i < 0 etc are actually returning\nNULL, but ExecEvalCase is failing to check for a NULL condition result.\nIt should treat a NULL as false, I expect, just as WHERE does.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 18 Sep 1999 15:37:17 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] case bug? "
},
{
"msg_contents": "Keith Parks <[email protected]> writes:\n>> Then I tried inserting a NULL into the table, which the\n>> case statement then treated as 0 and not null.\n\n> This is a bug: the test expressions i < 0 etc are actually returning\n> NULL, but ExecEvalCase is failing to check for a NULL condition result.\n> It should treat a NULL as false, I expect, just as WHERE does.\n\nFixed --- here is the patch for REL6_5.\n\n\t\t\tregards, tom lane\n\n\n*** src/backend/executor/execQual.c.orig\tSat Jun 12 15:22:40 1999\n--- src/backend/executor/execQual.c\tSat Sep 18 19:28:46 1999\n***************\n*** 1128,1136 ****\n \n \t\t/*\n \t\t * if we have a true test, then we return the result, since the\n! \t\t * case statement is satisfied.\n \t\t */\n! \t\tif (DatumGetInt32(const_value) != 0)\n \t\t{\n \t\t\tconst_value = ExecEvalExpr((Node *) wclause->result,\n \t\t\t\t\t\t\t\t\t econtext,\n--- 1128,1137 ----\n \n \t\t/*\n \t\t * if we have a true test, then we return the result, since the\n! \t\t * case statement is satisfied. A NULL result from the test is\n! \t\t * not considered true.\n \t\t */\n! \t\tif (DatumGetInt32(const_value) != 0 && ! *isNull)\n \t\t{\n \t\t\tconst_value = ExecEvalExpr((Node *) wclause->result,\n \t\t\t\t\t\t\t\t\t econtext,\n",
"msg_date": "Sat, 18 Sep 1999 19:31:27 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] case bug? "
},
{
"msg_contents": "> Fixed --- here is the patch for REL6_5.\n\nThanks. I'll keep poking at join syntax instead of looking at this...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Sun, 19 Sep 1999 05:39:54 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] case bug?"
}
] |
[
{
"msg_contents": "Needed the following patches to get it to compile on a DS20. It is an\nev6, so it wasn't recognised and one of the defines in s_lock.h was\nwrong.\n\nAdriaan",
"msg_date": "Sat, 18 Sep 1999 20:20:37 +0300",
"msg_from": "Adriaan Joubert <[email protected]>",
"msg_from_op": true,
"msg_subject": "Patches for alpha w. cc"
},
{
"msg_contents": "On Sat, 18 Sep 1999, Adriaan Joubert wrote:\n\n> Needed the following patches to get it to compile on a DS20. It is an\n> ev6, so it wasn't recognised and one of the defines in s_lock.h was\n> wrong.\n\n\tIs this for Tru64 (or OSF) for Alpha or for Linux/Alpha? I only\nhave an XLT Alpha running Linux, so there could be issues for pgsql on\nLinux/Alpha with the newer 21264 chips, hence the reason I ask. Thanks.\n\n---------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n---------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | http://www.rkirkpat.net/ |\n---------------------------------------------------------------------------\n\n",
"msg_date": "Sun, 19 Sep 1999 11:29:14 -0500 (CDT)",
"msg_from": "Ryan Kirkpatrick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Patches for alpha w. cc"
}
] |
[
{
"msg_contents": "I have just committed a ton of changes to make heap_open/heap_openr\ntake an additional argument specifying the kind of lock to grab on\nthe relation (or you can say \"NoLock\" to get the old, no-lock behavior).\nSimilarly, heap_close takes a new argument giving the kind of lock to\nrelease, or \"NoLock\" to release no lock. (If you don't release the\nlock you got at open time, it's implicitly held till transaction end.)\n\nThis should go a long way towards fixing problems caused by a concurrent\nVACUUM moving tuples in system relations. I also fixed several bugs\n(first spotted by Hiroshi) having to do with not getting a sufficient\nlock on a relation being DROPped, ALTERed, etc. There may be more of\nthose still lurking, though.\n\nThere are a couple of coding rules that ought to be pointed out:\n\n1. If you specify a lock type to heap_open/openr, then heap_open will\nelog(ERROR) if the relation cannot be found --- since it can't get the\nlock in that case, obviously. You must use NoLock (and then lock later,\nif appropriate) if you want to be able to recover from no-such-relation.\nThis allowed extra code to test for no-such-rel to be removed from many\ncall sites. There were a lot of other call sites that neglected to test\nfor failure return at all, so they're now a little more robust.\n\n2. I made most opens of system relations grab AccessShareLock if\nread-only, or RowExclusiveLock if read-write, on the theory that\nthese accesses correspond to an ordinary search or update of a user\nrelation. This maximizes concurrency of access to the system tables.\nIt should be sufficient to have AccessExclusiveLock on the user relation\nbeing modified in order to do most things like dropping/modifying\nrelations. Note however that we release these locks as soon as we\nclose the system relations, whereas the lock on the underlying relation\nneeds to be held till transaction commit to ensure that other users of\nthe relation will see whatever you did. There may be a few cases where\nwe need a stronger lock on system relations...\n\n3. If you are doing a SearchSysCache (any flavor) to find a tuple that\nyou intend to modify/delete, you must open and lock the containing\nrelation *before* the search, not after. Otherwise you may retrieve\na tuple containing a stale physical location pointer (because VACUUM\nhas just moved it), which will cause the heap_replace or heap_delete\nto crash and burn. There were a few places that did the heap_open\nafter getting the tuple. Naughty naughty.\n\n\nI did not change locking behavior for index_open/close --- for the\nmost part we just acquire AccessShareLock on an index during\nindex_beginscan. I am not sure if this is good or not. I did make\nindex_open do elog(ERROR) on failure, because most call sites weren't\nchecking for failure...\n\nThese changes do *not* include Hiroshi's recent proposal to add\ntime qual checking in SearchSysCache. I suspect he is right but\nwould like confirmation from someone who actually understands the\ntime qual code ;-)\n\n\t\t\tregards, tom lane\n\nPS: you should do \"make clean all\" after pulling these changes.\n",
"msg_date": "Sat, 18 Sep 1999 15:31:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Notice: heap_open/close changes committed"
},
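To make the new calling convention concrete, here is a minimal usage sketch following the rules above (catalog-name constants as in include/catalog/catname.h; treat the details as illustrative rather than the committed code):

    /* read-only access: lock taken at open, released at close */
    rel = heap_openr(TypeRelationName, AccessShareLock);
    /* ... scan and read tuples ... */
    heap_close(rel, AccessShareLock);

    /* read-write access: keep the lock until transaction end */
    rel = heap_openr(RelationRelationName, RowExclusiveLock);
    /* ... heap_replace() / heap_delete() on catalog tuples ... */
    heap_close(rel, NoLock);    /* lock implicitly held till commit */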
{
"msg_contents": "Tom Lane wrote:\n> \n> 2. I made most opens of system relations grab AccessShareLock if\n> read-only, or RowExclusiveLock if read-write, on the theory that\n ^^^^^^^^^^^^^^^^\n> these accesses correspond to an ordinary search or update of a user\n> relation. This maximizes concurrency of access to the system tables.\n\nThere are problems here. In the case of normal UPDATE/DELETE\n(when RowExclusiveLock is acquired) Executor takes care about\nthe-same-row writers, but other parts of system don't check\nis tuple read being updated concurrent transaction or not.\nThis is the old bug (pre-6.5.X released WRITE lock just after\nsystem table was modified). I had no time to fix it and so\njust changed old WRITE lock with new AccessExclusiveLock.\nBut we have to handle this in proper way (wait if t_xmax\nis id of an active transaction).\n\nVadim\n",
"msg_date": "Mon, 20 Sep 1999 09:24:31 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Notice: heap_open/close changes committed"
},
{
"msg_contents": "Vadim Mikheev <[email protected]> writes:\n> Tom Lane wrote:\n>> 2. I made most opens of system relations grab AccessShareLock if\n>> read-only, or RowExclusiveLock if read-write, on the theory that\n> ^^^^^^^^^^^^^^^^\n\n> There are problems here. In the case of normal UPDATE/DELETE\n> (when RowExclusiveLock is acquired) Executor takes care about\n> the-same-row writers, but other parts of system don't check\n> is tuple read being updated concurrent transaction or not.\n\nDrat. I was afraid I might be getting in over my head :-(\n\n> This is the old bug (pre-6.5.X released WRITE lock just after\n> system table was modified). I had no time to fix it and so\n> just changed old WRITE lock with new AccessExclusiveLock.\n\nI do not think changing RowExclusiveLock back to AccessExclusiveLock\nwill fix it unless we hold the lock till end of transaction, no?\nThat seems like much too high a price to pay.\n\n> But we have to handle this in proper way (wait if t_xmax\n> is id of an active transaction).\n\nYes. Where is the code that does this right in the regular executor?\nI will see what needs to be done to make the system table accesses\nact the same.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 20 Sep 1999 10:00:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Notice: heap_open/close changes committed "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> > This is the old bug (pre-6.5.X released WRITE lock just after\n> > system table was modified). I had no time to fix it and so\n> > just changed old WRITE lock with new AccessExclusiveLock.\n> \n> I do not think changing RowExclusiveLock back to AccessExclusiveLock\n> will fix it unless we hold the lock till end of transaction, no?\n\nYes.\n\n> That seems like much too high a price to pay.\n\nThat's why I proposed to use Exclusive lock (it doesn't conflict\nwith AccessShareLock used by readers).\n\n> > But we have to handle this in proper way (wait if t_xmax\n> > is id of an active transaction).\n> \n> Yes. Where is the code that does this right in the regular executor?\n> I will see what needs to be done to make the system table accesses\n> act the same.\n\nSorry - I messed things up: heap_replace/heap_delete wait for\nconcurrent update, but doesn't update/delete modified tuple.\nThey return result code (HeapTupleMayBeUpdated etc in utils/tqual.h)\nand it's up to caller decide what to do if tuple modified by\nconcurrent transaction.\nFor _updated_ tuple TID of new tuple version is also returned\n(if requested)...\n\nExamples of how this is handled/used by Executor are\nin execMain.c (just search for HeapTupleUpdated).\n\nVadim\n",
"msg_date": "Mon, 20 Sep 1999 23:39:58 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Notice: heap_open/close changes committed"
}
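A hedged sketch of the execMain.c pattern Vadim points to, for a caller of heap_delete (the result-code names are from utils/tqual.h as cited above; the exact function signature is recalled from memory, so treat this as illustrative):

    result = heap_delete(relation, &tuple->t_self, &ctid);
    switch (result)
    {
        case HeapTupleSelfUpdated:
        case HeapTupleMayBeUpdated:
            break;              /* delete succeeded */

        case HeapTupleUpdated:
            /*
             * A concurrent committed transaction updated the tuple
             * first; ctid now points at the new version.  Re-fetch
             * and re-check that version, or elog(ERROR) if giving up
             * is the appropriate policy.
             */
            break;

        default:
            elog(ERROR, "unrecognized heap_delete status: %d", result);
    }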
] |
[
{
"msg_contents": "It seems to me there's no fundamental reason why there couldn't be\ntwo VACUUMs running concurrently in a database. With the locking\nwe are doing now, it should be safe enough. So, I'd like to propose\nthat we get rid of the pg_vlock lock file. It doesn't have any useful\npurpose but it does force manual intervention by the dbadmin to recover\nif a VACUUM crashes :-(\n\nComments? Did I miss something about why we can't have more than one\nvacuum process?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 18 Sep 1999 15:58:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Why do we need pg_vlock?"
},
{
"msg_contents": "> It seems to me there's no fundamental reason why there couldn't be\n> two VACUUMs running concurrently in a database. With the locking\n> we are doing now, it should be safe enough. So, I'd like to propose\n> that we get rid of the pg_vlock lock file. It doesn't have any useful\n> purpose but it does force manual intervention by the dbadmin to recover\n> if a VACUUM crashes :-(\n> \n> Comments? Did I miss something about why we can't have more than one\n> vacuum process?\n\nI vote for removal. Lock files are hacks, usually.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 18 Sep 1999 16:25:40 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Why do we need pg_vlock?"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > It seems to me there's no fundamental reason why there couldn't be\n> > two VACUUMs running concurrently in a database. With the locking\n> > we are doing now, it should be safe enough. So, I'd like to propose\n> > that we get rid of the pg_vlock lock file. It doesn't have any useful\n> > purpose but it does force manual intervention by the dbadmin to recover\n> > if a VACUUM crashes :-(\n> >\n> > Comments? Did I miss something about why we can't have more than one\n> > vacuum process?\n> \n> I vote for removal. Lock files are hacks, usually.\n\nAgreed.\n\nVadim\n",
"msg_date": "Mon, 20 Sep 1999 09:25:47 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Why do we need pg_vlock?"
}
] |
[
{
"msg_contents": "I think we need to get rid of setheapoverride().\n\nAs far as I can tell, its purpose is to make tuples created in the\ncurrent transaction's current command be considered valid, whereas\nordinarily they'd not be considered valid until the next command starts.\nBut there is another way to make just-written tuples become valid:\nCommandCounterIncrement(). Looking around, I see that some code is\nusing CommandCounterIncrement() to achieve the same result that other\ncode is using setheapoverride() for.\n\nThe trouble with setheapoverride is that you can turn it off. For\nexample, execMain.c uses the following code to start a SELECT INTO:\n\n intoRelationId = heap_create_with_catalog(intoName,\n tupdesc, RELKIND_RELATION, parseTree->isTemp);\n\n setheapoverride(true);\n\n intoRelationDesc = heap_open(intoRelationId,\n AccessExclusiveLock);\n\n setheapoverride(false);\n\nThe pg_class tuple inserted by heap_create will not be valid in\nthe current command, so we have to do *something* to allow heap_open\nto see it. The problem with the above sequence is that once we do\nsetheapoverride(false), all of a sudden we can't see the tuples inserted\nby heap_create anymore. What happens if we need to see them again\nduring the current command?\n\nAn example where we will actually crash and burn (I believe; haven't\ntried to make it happen) is if an SI Reset message arrives later during\nthe startup of the SELECT INTO, say while we are acquiring read locks\non the source table(s). relcache.c will try to rebuild all the relcache\nentries, and will fail on the intoRelation because it can't see the\npg_class tuple for it.\n\nIt seems to me that a much cleaner and safer implementation is\n\n intoRelationId = heap_create_with_catalog(intoName,\n tupdesc, RELKIND_RELATION, parseTree->isTemp);\n\n /* Start a new command so that we see results of heap_create */\n CommandCounterIncrement();\n\n intoRelationDesc = heap_open(intoRelationId,\n AccessExclusiveLock);\n\nsince this way the tuples still look valid if we look at them again\nlater in the same command.\n\nComments? Anyone know a reason not to get rid of setheapoverride?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 18 Sep 1999 16:45:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "setheapoverride() considered harmful"
},
{
"msg_contents": "> I think we need to get rid of setheapoverride().\n\nI have always wondered what it did. It is in my personal TODO with a\nquestionmark. Never figured out its purpose.\n\n> since this way the tuples still look valid if we look at them again\n> later in the same command.\n> \n> Comments? Anyone know a reason not to get rid of setheapoverride?\n\nYes, please remove it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 18 Sep 1999 17:25:43 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] setheapoverride() considered harmful"
}
] |
[
{
"msg_contents": "\nJust tried to do a vacuum analyze on a new database, and the backend\nstarted spewing out:\n\nFATAL: pq_endmessage failed: errno=32\npq_flush: send() failed: Broken pipe\nFATAL: pq_endmessage failed: errno=32\npq_flush: send() failed: Broken pipe\nFATAL: pq_endmessage failed: errno=32\npq_flush: send() failed: Broken pipe\nFATAL: pq_endmessage failed: errno=32\n\nFresh build of the server, FreeBSD 3.3...\n\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sat, 18 Sep 1999 17:52:50 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "v6.5.2 vacuum...?"
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> Just tried to do a vacuum analyze on a new database, and the backend\n> started spewing out:\n\n> FATAL: pq_endmessage failed: errno=32\n> pq_flush: send() failed: Broken pipe\n> FATAL: pq_endmessage failed: errno=32\n> pq_flush: send() failed: Broken pipe\n> FATAL: pq_endmessage failed: errno=32\n> pq_flush: send() failed: Broken pipe\n> FATAL: pq_endmessage failed: errno=32\n\nI'm not seeing it here with a REL6_5 build from Thursday. It looks\nsuspiciously like a problem I thought I'd fixed a good while back,\nwherein the backend didn't behave too gracefully if the client\ndisconnected early.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 18 Sep 1999 18:18:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] v6.5.2 vacuum...? "
},
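For reference, errno 32 is EPIPE on most Unixes: the client end of the socket is already gone, and each repeated FATAL line is the backend trying to report the previous send() failure over the same dead connection. A hedged sketch of the failure mode only (not the actual pqcomm.c code):

    if (send(sock, bufptr, len, 0) < 0)
    {
        if (errno == EPIPE || errno == ECONNRESET)
        {
            /*
             * Client disconnected mid-command.  Further writes (and
             * any elog sent down the same socket) will fail the same
             * way, producing the message loop above, so the backend
             * should stop writing and clean up instead.
             */
        }
    }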
{
"msg_contents": "\nOkay, was able to do a subsequent vacuum and vacuum analyze...but, the\npsql was running onthe same machine as the server, so I'm curious as to\nwhy it would have disconnected...?\n\nAlso, side note...\\h vacuum shows that 'vacuum [verbose] analyze' should\nwork, but if you try and run it,it gives an error...error in psql's help,\nor the backend?\n\n\nOn Sat, 18 Sep 1999, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > Just tried to do a vacuum analyze on a new database, and the backend\n> > started spewing out:\n> \n> > FATAL: pq_endmessage failed: errno=32\n> > pq_flush: send() failed: Broken pipe\n> > FATAL: pq_endmessage failed: errno=32\n> > pq_flush: send() failed: Broken pipe\n> > FATAL: pq_endmessage failed: errno=32\n> > pq_flush: send() failed: Broken pipe\n> > FATAL: pq_endmessage failed: errno=32\n> \n> I'm not seeing it here with a REL6_5 build from Thursday. It looks\n> suspiciously like a problem I thought I'd fixed a good while back,\n> wherein the backend didn't behave too gracefully if the client\n> disconnected early.\n> \n> \t\t\tregards, tom lane\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sat, 18 Sep 1999 20:30:23 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] v6.5.2 vacuum...? "
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> Also, side note...\\h vacuum shows that 'vacuum [verbose] analyze' should\n> work, but if you try and run it,it gives an error...error in psql's help,\n> or the backend?\n\n??? Works for me ... in fact that's how I usually run vacuum:\n\nplay=> vacuum verbose analyze;\nNOTICE: --Relation pg_type--\nNOTICE: Pages 2: Changed 0, Reapped 1, Empty 0, New 0; Tup 114: Vac 1, Keep/VTL 0/0, Crash 0, UnUsed 0, MinLen 109, MaxLen 109; Re-using: Free/Avail. Space 3076/0; EndEmpty/Avail. Pages 0/0. Elapsed 0/0 sec.\nNOTICE: Index pg_type_typname_index: Pages 2; Tuples 114: Deleted 1. Elapsed 0/0 sec.\nNOTICE: Index pg_type_oid_index: Pages 2; Tuples 114: Deleted 1. Elapsed 0/0 sec.\nNOTICE: --Relation pg_attribute--\nNOTICE: Pages 6: Changed 0, Reapped 1, Empty 0, New 0; Tup 422: Vac 7, Keep/VTL 0/0, Crash 0, UnUsed 0, MinLen 97, MaxLen 97; Re-using: Free/Avail. Space 3080/0; EndEmpty/Avail. Pages 0/0. Elapsed 0/0 sec.\nNOTICE: Index pg_attribute_attrelid_index: Pages 4; Tuples 422: Deleted 7. Elapsed 0/0 sec.\n[ etc etc etc ]\n\nWhat error do you get exactly?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 18 Sep 1999 19:38:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] v6.5.2 vacuum...? "
},
{
"msg_contents": "\nOdd...I must have typ'd something wrong the last time, cause now it works\nfor me too *sigh* \n\nOn Sat, 18 Sep 1999, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > Also, side note...\\h vacuum shows that 'vacuum [verbose] analyze' should\n> > work, but if you try and run it,it gives an error...error in psql's help,\n> > or the backend?\n> \n> ??? Works for me ... in fact that's how I usually run vacuum:\n> \n> play=> vacuum verbose analyze;\n> NOTICE: --Relation pg_type--\n> NOTICE: Pages 2: Changed 0, Reapped 1, Empty 0, New 0; Tup 114: Vac 1, Keep/VTL 0/0, Crash 0, UnUsed 0, MinLen 109, MaxLen 109; Re-using: Free/Avail. Space 3076/0; EndEmpty/Avail. Pages 0/0. Elapsed 0/0 sec.\n> NOTICE: Index pg_type_typname_index: Pages 2; Tuples 114: Deleted 1. Elapsed 0/0 sec.\n> NOTICE: Index pg_type_oid_index: Pages 2; Tuples 114: Deleted 1. Elapsed 0/0 sec.\n> NOTICE: --Relation pg_attribute--\n> NOTICE: Pages 6: Changed 0, Reapped 1, Empty 0, New 0; Tup 422: Vac 7, Keep/VTL 0/0, Crash 0, UnUsed 0, MinLen 97, MaxLen 97; Re-using: Free/Avail. Space 3080/0; EndEmpty/Avail. Pages 0/0. Elapsed 0/0 sec.\n> NOTICE: Index pg_attribute_attrelid_index: Pages 4; Tuples 422: Deleted 7. Elapsed 0/0 sec.\n> [ etc etc etc ]\n> \n> What error do you get exactly?\n> \n> \t\t\tregards, tom lane\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sat, 18 Sep 1999 21:00:54 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] v6.5.2 vacuum...? "
},
{
"msg_contents": "> \n> Okay, was able to do a subsequent vacuum and vacuum analyze...but, the\n> psql was running onthe same machine as the server, so I'm curious as to\n> why it would have disconnected...?\n> \n> Also, side note...\\h vacuum shows that 'vacuum [verbose] analyze' should\n> work, but if you try and run it,it gives an error...error in psql's help,\n> or the backend?\n\nWorked here:\n\n\tvacuum verbose analyze pg_class;\n\n> \n> \n> On Sat, 18 Sep 1999, Tom Lane wrote:\n> \n> > The Hermit Hacker <[email protected]> writes:\n> > > Just tried to do a vacuum analyze on a new database, and the backend\n> > > started spewing out:\n> > \n> > > FATAL: pq_endmessage failed: errno=32\n> > > pq_flush: send() failed: Broken pipe\n> > > FATAL: pq_endmessage failed: errno=32\n> > > pq_flush: send() failed: Broken pipe\n> > > FATAL: pq_endmessage failed: errno=32\n> > > pq_flush: send() failed: Broken pipe\n> > > FATAL: pq_endmessage failed: errno=32\n> > \n> > I'm not seeing it here with a REL6_5 build from Thursday. It looks\n> > suspiciously like a problem I thought I'd fixed a good while back,\n> > wherein the backend didn't behave too gracefully if the client\n> > disconnected early.\n> > \n> > \t\t\tregards, tom lane\n> > \n> \n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n> \n> ************\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 18 Sep 1999 22:07:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] v6.5.2 vacuum...?"
}
] |
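For reference, a minimal sketch of the vacuum forms this thread exercises, written from memory of the 6.5-era syntax (VACUUM [VERBOSE] [ANALYZE] [table [(column, ...)]]) -- check \h vacuum in your own build before relying on it:

    -- whole database, with per-relation NOTICE output and a statistics refresh
    VACUUM VERBOSE ANALYZE;

    -- limited to one table, as in the pg_class example above
    VACUUM VERBOSE ANALYZE pg_class;

    -- statistics for selected columns only; if memory serves, the column
    -- list is only accepted together with ANALYZE
    VACUUM ANALYZE pg_class (relname, relpages);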
[
{
"msg_contents": "While I realize 6.5.2 is now an Official Release (TM), per RedHat's wishes I\nhave limited myself to 6.5.1 for my RPM series until a working upgradeable\nrelease. Hopefully, this is it.\n\nAnnouncing beta-quality, Version 6.5.1, release 0.7lo rpms with UPGRADING!!!\nalso featuring automatic database initialization after installation (if there's\nno database there already, that is). No more wishing for the days of the\npostgresql-data package.\n\nGet the scoop at http://www.ramifordistat.net/postgres. PLEASE PLEASE pound on\nthis release. I have successfully upgraded a virgin RedHat 6.0 machine (NOT\nthe one on which the RPM's were built) from the version 6.4.2 rpms that shipped\nwith RedHat 6.0. The sequence to complete an upgrade is found in\n/usr/doc/postgresql-6.5.1/README.rpm after installation/upgrade.\n\nBinary rpms are available built for RedHat 6.0/Intel. A SRPM is also available,\nas will be RedHat 5.2 RPM's for Intel after Monday or Tuesday. A tested build\non Alpha and Sparc would be most appreciated. Bang on it!\n\nMany many thanks to Oliver Elphick, who had already been down a similar (but\nnot identical) road with Debian, and had already conquered some of the issues --\nand a ready-built script that needed moderate modification for the RedHat\ncontext. Oliver, it is scary how much alike we think -- although I envy you:\nDebian has far fewer restrictions in package scripts than RedHat.\n\nSend bug reports to me, with a CC: to Thomas Lockhart and Jeff Johnson.\n\nLamar Owen\nWGCR Internet Radio\n\nChangelog:\n* Sat Sep 18 1999 Lamar Owen <[email protected]>\n- 0.7lo\n- First stab at integrating modified versions of the Debian migration scripts.\n-- Courtesy Oliver Elphick, Debian package maintainer, PostgreSQL Global\n-- Development Group.\n- /usr/lib/pgsql/backup/pg_dumpall_new -- a modifed pg_dumpall used in the\n-- migration -- modified to work with the older executables.\n- /usr/bin/postgresql-dump -- the migration script.\n- Upgrade strategy:\n--\t1.) %pre for main package saves old package's executables\n--\t2.) the postgresql init script in -server detects PGDATA existence\n--\t and version, notifies user if upgrade is necessary\n--\t3.) Rather than fully automating upgrade, the tools are provided:\n--\t a.) /usr/bin/postgresql_dump\n--\t b.) /usr/lib/pgsql/backup/pg_dumpall_new\n--\t c.) The executables backed up by %pre in /usr/lib/pgsql/backup\n--\t4.) Documentation on RPM differences and upgrades in README.rpm\n--\t5.) A fully automatic upgrade can be facilitated by some more code\n--\t in /etc/rc.d/init.d/postgresql, if desired.\n- added documentation for rpm setup, and upgrade (README.rpm)\n- added newer man pages from Thomas Lockhart\n- Put the PL's in the right place -- /usr/lib/pgsql, not /usr/lib. 
My error.\n- Added Requires: postgresql = %{version} for all subpackages.\n- Need to reorganize sources in next release, as the current number of source\n-- files is a little large.\n\n* Tue Sep 07 1999 Cristian Gafton <[email protected]>\n- upgraded pgaccess to the latest 0.98 stable version\n- fix braindead pgaccess installation and add pgaccess documentation to\n the package containing pgaccess rather than main package\n- add missing templates to the /usr/lib/pgsql directory\n- added back the PostgreSQL howto (I wish people would STOP removing\n documentation from this package!)\n- get rid of the perl handling overkill (what the hell was that needed for?)\n- \"chkconfig --del\" should be done in the server package, not the main\n package\n- make server package own only /etc/rc.d/init.d/postgresql, not the whole\n /etc/rc.d (doh!)\n- don't ship OS2 executable client as documentation...\n- if we have a -tcl subpackage, make sure that other packages don't need tcl\n anymore by moving tcl-dependent binaries in the -tcl package... [pltcl.so]\n- if we are using /sbin/chkconfig we don't need the /etc/rc.d/rc?.d symlinks\n\n* Sat Sep 4 1999 Jeff Johnson <[email protected]>\n- use _arch not (unknown!) buildarch macro (#4913).\n\n* Fri Aug 20 1999 Jeff Johnson <[email protected]>\n- obsolete postgres-clients (not conflicts).\n\n* Thu Aug 19 1999 Jeff Johnson <[email protected]>\n- add to Red Hat 6.1.\n\n* Wed Aug 11 1999 Lamar Owen <[email protected]>\n- Release 3lo\n- Picked up pgaccess README.\n- Built patch set for rpm versus tarball idiosyncrasies:\n-- munged some paths in the regression tests (_OBJWD_), trigger functions\n-- munged USER for regression tests.\n-- Added perl and python examples -- required patching the shebang to drop\n-- local in /usr/local/bin \n- Changed rc.d level from S99 to S75, as there are a few server daemons that\n-- might actually need to load AFTER pgsql -- AOLserver is an example.\n- config.guess included in server package by default -- used by regress tests.\n- Preliminary test subpackage, containing entire src/test tree.\n- Prebuild of binaries in the test subpackage.\n- Added pgaccess-0.97 beta as /usr/bin/pgaccess97 for testing\n- Removed the DATABASE-HOWTO; it was SO old, and the newer release of it\n-- is a stock part of the RedHat HOWTOS package.\n- Put in the RIGHT postgresql.init ('/etc/rc.d/init.d/postgresql')\n- Noted that the perl client is operational.\n\n* Fri Aug 6 1999 Lamar Owen <[email protected]>\n- Release 2lo\n- Added alpha patches courtesy Ryan Kirkpatrick and Uncle George\n- Renamed lamar owen series of RPMS with release of #lo\n- Put Ramifordistat as vendor and URL for lamar owen RPM series, until non-beta\n-- release coordinated with PGDG.\n\n* Mon Jul 19 1999 Lamar Owen <[email protected]>\n- Correct some file misappropriations:\n-- /usr/lib/pgsql was in wrong package\n-- createlang, destroylang, and vacuumdb now in main package\n-- ipcclean now in server subpackage\n-- The static libraries are now in the devel subpackage\n-- /usr/lib/plpgsql.so and /usr/lib/pltcl.so now in server \n- Cleaned up some historical artifacts for readability -- left references\n- to these artifacts in the changelog\n\n* Sat Jun 19 1999 Thomas Lockhart <[email protected]>\n- deprecate clients rpm, and define a server rpm for the backend\n- version 6.5\n- updated pgaccess to version 0.96\n- build ODBC interface library\n- split tcl and ODBC packages into separate binary rpms\n\n* Sat Apr 17 1999 Jeff Johnson <[email protected]>\n- exclude alpha for Red Hat 
6.0.\n\n* Sun Mar 21 1999 Cristian Gafton <[email protected]> \n- auto rebuild in the new build environment (release 2)\n\n* Wed Feb 03 1999 Cristian Gafton <[email protected]>\n- version 6.4.2\n- get rid of the -data package (shipping it was a BAD idea)\n\n* Sat Oct 10 1998 Cristian Gafton <[email protected]>\n- strip all binaries\n- use defattr in all packages\n- updated pgaccess to version 0.90\n- /var/lib/pgsql/pg_pwd should not be 666\n\n* Sun Jun 21 1998 Jeff Johnson <[email protected]>\n- create /usr/lib/pgsql (like /usr/include/pgsql)\n- resurrect libpq++.so*\n- fix name problem in startup-script (problem #533)\n\n* Fri Jun 19 1998 Jeff Johnson <[email protected]>\n- configure had \"--prefix=$RPM_BUILD_ROOT/usr\"\n- move all include files below /usr/include/pgsql.\n- resurrect perl client file lists.\n\n* Tue May 05 1998 Prospector System <[email protected]>\n- translations modified for de, fr, tr\n\n* Tue May 05 1998 Cristian Gafton <[email protected]>\n- build on alpha\n\n* Sat May 02 1998 Cristian Gafton <[email protected]>\n- enhanced initscript\n\n* Tue Apr 21 1998 Cristian Gafton <[email protected]>\n- finally v6.3.2 is here !\n\n* Wed Apr 15 1998 Cristian Gafton <[email protected]>\n- added the include files in the devel package\n\n* Wed Apr 01 1998 Cristian Gafton <[email protected]>\n- finally managed to get a patch for 6.3.1 to make it install correctly. Boy,\n what a mess ! ;-(\n\n* Tue Mar 03 1998 Cristian Gafton <[email protected]>\n- upgraded to 6.3 release\n\n* Sat Feb 28 1998 Cristian Gafton <[email protected]>\n- upgraded to the latest snapshot\n- split out yet one more subpackage: clients\n\n* Tue Jan 20 1998 Cristian Gafton <[email protected]>\n- the installed devel-library is no longer stripped (duh!)\n- added the 7 patches found on the ftp.postgresql.org site\n- corrected the -rh patch to patch configure.in rather than configure; we\n now use autoconf\n- added a patch to fix the broken psort function\n- build TCL and C++ libraries as well\n- updated pgaccess to version 0.76\n\n* Thu Oct 23 1997 Cristian Gafton <[email protected]>\n- cleaned up the spec file for version 6.2.1\n- split devel subpackage\n- added chkconfig support in %preun and %post\n- added optional data package\n\n* Mon Oct 13 1997 Elliot Lee <[email protected]> 6.2-3\n- Fixed lots of bung-ups in the spec file, made it FSSTND compliant, etc.\n- Removed jdbc package, jdk isn't stable yet as far as what goes where.\n- Updated to v 6.2.1\n\n* Thu Oct 9 1997 10:58:14 dan\n- on pre-installation script now the `data' dir is renamed to\n `data.rpmorig' (no more wild deletions!).\n- added `postgresql-jdbc' sub-package.\n- postgresql.sh script: defined function `add_to_path()' and\n changed the location of postgresql.jar in the CLASSPATH.\n\n* Sat Oct 4 1997 10:27:43 dan\n- updated to version 6.2.\n- added auto-installation scripts (pre, post, preun, postun)\n",
"msg_date": "Sat, 18 Sep 1999 22:41:43 -0400",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL-6.5.1-0.7lo RPMs available."
}
] |
[
{
"msg_contents": "This message was sent from Geocrawler.com by \"Kent Diskey\" <[email protected]>\nBe sure to reply to that address.\n\nREADING THIS COULD CHANGE YOUR LIFE! \n\n\nI found this \non a bulletin board and decided to try it. A \nlittle \n\nwhile back, I \nwas browsing through newsgroups and came across \nan article similar to this that said you could make \n\nthousands of dollars \nwithin weeks with only an initial investment of \n\n$6.00! So I \nthought, \"Yeah right, this must be a scam\", but \n\nlike most of \nus, I was curious, so I kept reading. Anyway, it \nsaid \n\nthat you send \n$1.00 to each of the 6 names and addresses stated \n\nin the \n\narticle. You then \nplace your name and address in the bottom of \n\nthe list at \n#6, and post the article in at least 200 newsgroups. \n\n\n(There are thousands) \nNo catch, that was it. So after thinking it \n\nover, and talking \nto a few people first, I thought about trying it. \nI \n\nfigured: \"what have \nI got to lose except 6 stamps and $6.00, \n\nright?\" Then I \ninvested the measly $6.00. Well GUESS WHAT!!... \n\nwithin 7 days, \nI started getting money in the mail! I was shocked? \n\nI \n\nfigured it would \nend soon, but the money just kept coming in. In \n\nmy \n\nfirst week, I \nmade about $25.00. By the end of the second week \nI \nhad \n\nmade a total \nof over $1,000.00! In the third week I had over \n\n\n$10,000.00 and it's \nstill growing. This is now my fourth week and I \n\n\nhave made a \ntotal of just over $42,000.00 and it's still coming in \n\n\nrapidly. It's certainly \nworth $6.00, and 6 stamps, I have spent \nmore \n\nthan that on \nthe lottery!! Let me tell you how this works and \nmost \n\nimportantly, why it \nworks... Also, make sure you print a copy of \nthis \n\narticle NOW, so \nyou can get the information off of it as you \nneed \nit. \n\nI promise you \nthat if you follow the directions exactly, that you \n\nwill start making \nmore money that you thought possible by doing \nsomething so easy! \n\n\nSuggestion: Read this \nentire message carefully! (print it out or \n\ndownload it.) Follow \nthe simple directions and watch the money \ncome in! \n\n\nIt's easy. It's \nlegal. And, your investment is only $6.00 (plus \npostage). \n\n\nIMPORTANT: This is \nnot a rip-off; it is not indecent; it is not \n\n\nillegal; and it \nis virtually no risk - it really works!!! \n\n\nIf all of \nthe following instructions are adhered to, you will receive \nextraordinary dividends. \n\nPLEASE NOTE: \n\nPlease follow these \ndirections EXACTLY, and $50,000 or more can \nbe \n\nyours in 20 \nto 60 days. This program remains successful because \nof \n\nthe honesty and \nintegrity of the participants. Please continue its \nsuccess by carefully adhering to the instructions. \n\n\nYou will now \nbecome part of the Mail Order business. In this \nbusiness \n\nyour product is \nnot solid and tangible, it's a service. You are in \n\n\nthe business of \ndeveloping Mailing Lists. Many large corporations \nare \n\nhappy to pay \nbig bucks for quality lists. However, the money made \n\nfrom the mailing \nlists is secondary to the income which is made \nfrom \n\npeople like you \nand me asking to be included in that list. 
\n\nHere are the 4 easy steps to success: \n\n\nSTEP 1: Get \n6 separate pieces of paper and write the following on \n\n\neach piece of \npaper \"PLEASE PUT ME ON YOUR MAILING LIST.\" \nNow \n\nget 6 US \n$1.00 bills and place ONE inside EACH of the 6 \npieces of \n\npaper so the \nbill will not be seen through the envelope (to prevent \n\n\nthievery). Next, place \none paper in each of the 6 envelopes and \nseal \n\nthem. You should \nnow have 6 sealed envelopes, each with a piece \nof \n\npaper stating the \nabove phrase, your name and address, and a \n$1.00 \n\nbill. What you \nare doing is creating a service. This is absolutely \n\nlegal? You are \nrequesting a legitimate service and you are paying \nfor \n\nit! Like most \nof us I was a little skeptical and a little \nworried \n\nabout the legal \naspects of it all. So I checked it out with \nthe U.S. \n\nPost Office (1-800-725-2161) \nand they confirmed that it is indeed \n\nlegal! Mail the \n6 envelopes to the following addresses: \n\n1.) Bill Lijewski \n807 Keast \nHutchinson, KS 67501, USA \n\n2.) Rodney Fadler \nP.O. Box 244 \nHerculaneum, MO 63048, USA \n\n3.) Kent Diskey \n2114 North tull ave apt#4\nFayetteville, Ar 72704 usa \n\n4.) Robert Sham \n1039 Lamplighter Rd. \nNiskayuna, NY 12309, USA \n\n5.) Vl�ntoiu Gheorghe Serban \nC.P. 72-21 \nBucharest, ROMANIA \n\n6.) Justin Amoyen \n235 Oakridge Dr. \nDaly City, CA 94014, USA \n\n\nSTEP 2: Now \ntake the #1 name off the list that you see \nabove, \nmove \n\nthe other names \nup (6 becomes 5, 5 becomes 4, etc. ...) and \nadd \nYOUR \nname as number 6 on the list. \n\n\nSTEP 3: Change \nanything you need to, but try to keep this article \n\nas \n\nclose to the \noriginal as possible. Now, post your amended article \nto \n\nat least 200 \nnewsgroups. (I think there are close to 24,000 \ngroups.) \n\nAll you need \nis 200, but remember, the more you post, the more \n\nmoney you make! \n\n\nThis is perfectly \nlegal! If you have any doubts, refer to Title 18 \n\n\nSec. 1302 & \n1241 of the Postal lottery laws. Keep a copy of \nthese \nsteps \n\nfor yourself and, \nwhenever you need money, you can use it again, \nand \nagain. \n\n\nPLEASE REMEMBER that \nthis program remains successful because \nof \n\nthe honesty and \nintegrity of the participants and by their \n\ncarefully adhering to \nthe directions. Look at it this way, if \n\nyou are of \nintegrity, the program will continue and the money \n\nthat so many \nothers have received will come your way, too. \n\n\nNOTE: You may \nwant to retain every name and address sent to \nyou, \n\neither on a \ncomputer or hard copy and keep the notes people \nsend \n\nyou. This VERIFIES \nthat you are truly providing a service. (Also, \n\nit might be \na good idea to wrap the $1 bill in dark \npaper to reduce \nthe risk of mail theft.) \n\n\nSo, as each \npost is downloaded and the directions carefully \nfollowed, \n\nsix members will \nbe reimbursed for their participation as a List \n\nDeveloper with one \ndollar each. Your name will move up the list \n\ngeometrically so that \nwhen your name reached the #1 position \nyou \n\nwill be receiving \nthousands of dollars in CASH!!! What an \nopportunity \n\nfor only $6.00 \n($1.00 for each of the first six people listed above). \n\n\nSend it now, \nadd your own name to the list and you're in \nbusiness! \n\n---DIRECTIONS----FOR HOW TO POST TO \nNEWSGROUPS------------------ \n\nSTEP 1: \n\nYou do not \nneed to re-type letter to your own posting. 
Simply \n\nput your cursor \nat the beginning of this letter and drag your \n\ncursor to the \nbottom of this document, and select 'copy' from \n\nthe edit menu. \nThis will copy the entire letter into the \ncomputer's memory. \n\nSTEP 2: \n\nOpen a blank \n'notepad' file and place your cursor at the top of \n\n\nthe blank page. \n>From the 'edit' menu select 'PASTE'. This will \n\npaste a copy \nof the letter into notepad so that you can add \nyour \n\nname to the \nbottom of the list and to change the numbers of \nthe \nlist. \n\nSTEP 3: \n\nSave your new \nnotepad file as a '.txt' file. If you want to \ndo \n\nyour postings in \ndifferent settings, you'll always have this \nfile to go back to. \n\nSTEP 4: \n\nUse Netscape or \nInternet Explorer and try search for various \n\nnewsgroups (on-line forums, \nmessage boards, chat sites, \ndiscussions). \n\nSTEP 5: \n\nVisit these message \nboards and post this article as a new \nmessage \n\nby highlighting the \ntext of this letter and selecting 'PASTE' \n\nfrom the edit \nmenu. Fill in the Subject, this will be the header \n\n\nthat everyone sees \nas they scroll through the list of postings \n\nin a particular \ngroup, click the post message button. You're done \n\nwith your first \none! Congratulations... That is it! All you \n\nhave to do \nis jump to different newsgroups and post away, after \nyou \n\nget the hang \nof it, it will take about 30 seconds for each \n\nnewsgroup! ** REMEMBER, THE MORE NEWSGROUPS YOU POST \nIN, THE MORE \n\nMONEY YOU WILL \nMAKE!! BUT YOU HAVE TO POST A MINIMUM OF \n200** \n\nThat is it! \nYou will begin receiving money form around the world \nwithin \ndays! You may eventually want to rent a P.O. \nBox due to the large amount \n\nof mail you \nwill receive. If you wish to stay anonymous, you can \n\ninvent \n\na name to \nuse, as long as the postman will deliver it. \n\n**JUST MAKE SURE ALL THE ADDRESSES ARE CORRECT.** \n\nNow the WHY part: \n\nOut of 200 \npostings, say I receive only 5 replies (a very low \n\n\nexample). So then \nI made $5.00 with my name at #6 on the \n\nletter. \n\nNow, each of \nthe 5 persons who just sent me $1.00 make the \n\nMINIMUM 200 \n\npostings, each with \nmy name at #5 and only 5 persons respond to \n\n\neach of the \noriginal 5, this is an additional $25.00 for me. \n\nNow those 25 \neach make 200 MININUM posts with my name at #4 \n\nand only 5 \n\nreplies each. This \nbrings in an additional $125.00. Now, those 125 \n\npersons turn around \nand post the MINIMUM 200 with my name at \n#3 \n\nand receive 5 \nreplies each, I will make an additional $625.00. \n\nOk, now here \nis the fun part, each of those 625 people post \na \nMINIMUM \n\n200 letters with \nmy name at #2 and they receive 5 replies each. \n\nThat \n\njust made me \n$3,125.00!!! Those 3,125 persons will all deliver this \n\nmessage to 200 \nnewsgroups with my name at #1 and if still 5 \n\npersons \n\nper 200 react, \nI will receive an additional $15,625.00!! With an \n\ninvestment of only \n$6.00! AMAZING! When your name is no longer \n\non the list, \nyou just take the latest posting in the newsgroups, \n\nand send out \nanother $6.00 to names on the list, putting your \nname at \n\nnumber 6 again. \nAnd start posting again. The thing to remeber \n\nis: do you \nrealize that thousands of people all over the world are \n\n\njoining the internet \nand reading these articles everyday? JUST \nLIKE YOU \n\nare now!! So, \ncan you afford $6.00 and see if it really works?? \nI \n\nthink so... 
People \nhave said, \"what if the plan is played out and \nno \n\none sends you \nthe money? So what! What are the chances of \nthat \n\nHappening when there \nare tons of new honest users and new \nhonest \n\npeople who are \njoining the internet and newsgroups everyday and \n\nare willing to \ngive it a try? Anyway, it is only $6.00 for \na chance \n\nat thousands. Estimates \nare at 20,000 to 50,000 new users, every \n\nday, with thousands \nof those joining the actual internet. \n\nRemember, play FAIRLY \nand HONESTLY and this will really work. \nYou \n\nwouldn't want someone \nto cheat you the same way you may be \ncheating!\n\n\n\t\t\t\n\nGeocrawler.com - The Knowledge Archive\n",
"msg_date": "Sat, 18 Sep 1999 22:04:14 -0500",
"msg_from": "\"Geocrawler.com\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "money;money;money"
},
{
"msg_contents": "I really don't like people who spam to my mailing lists.\n\n1. Kent Diskey of Northwest Arkansas, you're a fool. Thousands of people\n post this crap every day and nobody ever gets rich from it. Do the\n arithmetic yourself, do you think there are an infinite number of\n people in the world or what? Go to http://ga.to and poke around a\n little.\n2. Geocrawler.com has already been notified that you abused their service.\n3. This note is being copied to Chris at CNCI (which is nwark.net) and I\n hope they don't like spam, either.\n4. You didn't even copy your name into the right slot on this chain\n letter. Did you think by skipping some steps you'd get more money?\n\nChristopher Corke, you might want to look into posting a Terms of Service\non your site somewhere so that we, as well as your customers, can\nunderstand your spam policy. Your name was the first of three on your\ncontact page, which is why you're getting this. Please do something about\nit, OK?\n\nEverybody else, sorry you had to witness this sordid exchange.\n\nBy the way, even though my mailer doesn't want to include headers, the\nheaders do indicate use of Geocrawler's Web interface to post this.\nSheesh, Kent, didn't you notice the text right under the submit button\nthat said they know who you are and not to abuse their service?\n\nOn Sat, 18 Sep 1999, Geocrawler.com wrote:\n\n> This message was sent from Geocrawler.com by \"Kent Diskey\" <[email protected]>\n> Be sure to reply to that address.\n> \n> READING THIS COULD CHANGE YOUR LIFE! \n> \n> \n> I found this \n> on a bulletin board and decided to try it. A \n> little \n> \n> while back, I \n> was browsing through newsgroups and came across \n> an article similar to this that said you could make \n> \n> thousands of dollars \n> within weeks with only an initial investment of \n> \n> $6.00! So I \n> thought, \"Yeah right, this must be a scam\", but \n> \n> like most of \n> us, I was curious, so I kept reading. Anyway, it \n> said \n> \n> that you send \n> $1.00 to each of the 6 names and addresses stated \n> \n> in the \n> \n> article. You then \n> place your name and address in the bottom of \n> \n> the list at \n> #6, and post the article in at least 200 newsgroups. \n> \n> \n> (There are thousands) \n> No catch, that was it. So after thinking it \n> \n> over, and talking \n> to a few people first, I thought about trying it. \n> I \n> \n> figured: \"what have \n> I got to lose except 6 stamps and $6.00, \n> \n> right?\" Then I \n> invested the measly $6.00. Well GUESS WHAT!!... \n> \n> within 7 days, \n> I started getting money in the mail! I was shocked? \n> \n> I \n> \n> figured it would \n> end soon, but the money just kept coming in. In \n> \n> my \n> \n> first week, I \n> made about $25.00. By the end of the second week \n> I \n> had \n> \n> made a total \n> of over $1,000.00! In the third week I had over \n> \n> \n> $10,000.00 and it's \n> still growing. This is now my fourth week and I \n> \n> \n> have made a \n> total of just over $42,000.00 and it's still coming in \n> \n> \n> rapidly. It's certainly \n> worth $6.00, and 6 stamps, I have spent \n> more \n> \n> than that on \n> the lottery!! Let me tell you how this works and \n> most \n> \n> importantly, why it \n> works... Also, make sure you print a copy of \n> this \n> \n> article NOW, so \n> you can get the information off of it as you \n> need \n> it. 
\n> \n> I promise you \n> that if you follow the directions exactly, that you \n> \n> will start making \n> more money that you thought possible by doing \n> something so easy! \n> \n> \n> Suggestion: Read this \n> entire message carefully! (print it out or \n> \n> download it.) Follow \n> the simple directions and watch the money \n> come in! \n> \n> \n> It's easy. It's \n> legal. And, your investment is only $6.00 (plus \n> postage). \n> \n> \n> IMPORTANT: This is \n> not a rip-off; it is not indecent; it is not \n> \n> \n> illegal; and it \n> is virtually no risk - it really works!!! \n> \n> \n> If all of \n> the following instructions are adhered to, you will receive \n> extraordinary dividends. \n> \n> PLEASE NOTE: \n> \n> Please follow these \n> directions EXACTLY, and $50,000 or more can \n> be \n> \n> yours in 20 \n> to 60 days. This program remains successful because \n> of \n> \n> the honesty and \n> integrity of the participants. Please continue its \n> success by carefully adhering to the instructions. \n> \n> \n> You will now \n> become part of the Mail Order business. In this \n> business \n> \n> your product is \n> not solid and tangible, it's a service. You are in \n> \n> \n> the business of \n> developing Mailing Lists. Many large corporations \n> are \n> \n> happy to pay \n> big bucks for quality lists. However, the money made \n> \n> from the mailing \n> lists is secondary to the income which is made \n> from \n> \n> people like you \n> and me asking to be included in that list. \n> \n> Here are the 4 easy steps to success: \n> \n> \n> STEP 1: Get \n> 6 separate pieces of paper and write the following on \n> \n> \n> each piece of \n> paper \"PLEASE PUT ME ON YOUR MAILING LIST.\" \n> Now \n> \n> get 6 US \n> $1.00 bills and place ONE inside EACH of the 6 \n> pieces of \n> \n> paper so the \n> bill will not be seen through the envelope (to prevent \n> \n> \n> thievery). Next, place \n> one paper in each of the 6 envelopes and \n> seal \n> \n> them. You should \n> now have 6 sealed envelopes, each with a piece \n> of \n> \n> paper stating the \n> above phrase, your name and address, and a \n> $1.00 \n> \n> bill. What you \n> are doing is creating a service. This is absolutely \n> \n> legal? You are \n> requesting a legitimate service and you are paying \n> for \n> \n> it! Like most \n> of us I was a little skeptical and a little \n> worried \n> \n> about the legal \n> aspects of it all. So I checked it out with \n> the U.S. \n> \n> Post Office (1-800-725-2161) \n> and they confirmed that it is indeed \n> \n> legal! Mail the \n> 6 envelopes to the following addresses: \n> \n> 1.) Bill Lijewski \n> 807 Keast \n> Hutchinson, KS 67501, USA \n> \n> 2.) Rodney Fadler \n> P.O. Box 244 \n> Herculaneum, MO 63048, USA \n> \n> 3.) Kent Diskey \n> 2114 North tull ave apt#4\n> Fayetteville, Ar 72704 usa \n> \n> 4.) Robert Sham \n> 1039 Lamplighter Rd. \n> Niskayuna, NY 12309, USA \n> \n> 5.) Vl�ntoiu Gheorghe Serban \n> C.P. 72-21 \n> Bucharest, ROMANIA \n> \n> 6.) Justin Amoyen \n> 235 Oakridge Dr. \n> Daly City, CA 94014, USA \n> \n> \n> STEP 2: Now \n> take the #1 name off the list that you see \n> above, \n> move \n> \n> the other names \n> up (6 becomes 5, 5 becomes 4, etc. ...) and \n> add \n> YOUR \n> name as number 6 on the list. \n> \n> \n> STEP 3: Change \n> anything you need to, but try to keep this article \n> \n> as \n> \n> close to the \n> original as possible. Now, post your amended article \n> to \n> \n> at least 200 \n> newsgroups. 
(I think there are close to 24,000 \n> groups.) \n> \n> All you need \n> is 200, but remember, the more you post, the more \n> \n> money you make! \n> \n> \n> This is perfectly \n> legal! If you have any doubts, refer to Title 18 \n> \n> \n> Sec. 1302 & \n> 1241 of the Postal lottery laws. Keep a copy of \n> these \n> steps \n> \n> for yourself and, \n> whenever you need money, you can use it again, \n> and \n> again. \n> \n> \n> PLEASE REMEMBER that \n> this program remains successful because \n> of \n> \n> the honesty and \n> integrity of the participants and by their \n> \n> carefully adhering to \n> the directions. Look at it this way, if \n> \n> you are of \n> integrity, the program will continue and the money \n> \n> that so many \n> others have received will come your way, too. \n> \n> \n> NOTE: You may \n> want to retain every name and address sent to \n> you, \n> \n> either on a \n> computer or hard copy and keep the notes people \n> send \n> \n> you. This VERIFIES \n> that you are truly providing a service. (Also, \n> \n> it might be \n> a good idea to wrap the $1 bill in dark \n> paper to reduce \n> the risk of mail theft.) \n> \n> \n> So, as each \n> post is downloaded and the directions carefully \n> followed, \n> \n> six members will \n> be reimbursed for their participation as a List \n> \n> Developer with one \n> dollar each. Your name will move up the list \n> \n> geometrically so that \n> when your name reached the #1 position \n> you \n> \n> will be receiving \n> thousands of dollars in CASH!!! What an \n> opportunity \n> \n> for only $6.00 \n> ($1.00 for each of the first six people listed above). \n> \n> \n> Send it now, \n> add your own name to the list and you're in \n> business! \n> \n> ---DIRECTIONS----FOR HOW TO POST TO \n> NEWSGROUPS------------------ \n> \n> STEP 1: \n> \n> You do not \n> need to re-type letter to your own posting. Simply \n> \n> put your cursor \n> at the beginning of this letter and drag your \n> \n> cursor to the \n> bottom of this document, and select 'copy' from \n> \n> the edit menu. \n> This will copy the entire letter into the \n> computer's memory. \n> \n> STEP 2: \n> \n> Open a blank \n> 'notepad' file and place your cursor at the top of \n> \n> \n> the blank page. \n> >From the 'edit' menu select 'PASTE'. This will \n> \n> paste a copy \n> of the letter into notepad so that you can add \n> your \n> \n> name to the \n> bottom of the list and to change the numbers of \n> the \n> list. \n> \n> STEP 3: \n> \n> Save your new \n> notepad file as a '.txt' file. If you want to \n> do \n> \n> your postings in \n> different settings, you'll always have this \n> file to go back to. \n> \n> STEP 4: \n> \n> Use Netscape or \n> Internet Explorer and try search for various \n> \n> newsgroups (on-line forums, \n> message boards, chat sites, \n> discussions). \n> \n> STEP 5: \n> \n> Visit these message \n> boards and post this article as a new \n> message \n> \n> by highlighting the \n> text of this letter and selecting 'PASTE' \n> \n> from the edit \n> menu. Fill in the Subject, this will be the header \n> \n> \n> that everyone sees \n> as they scroll through the list of postings \n> \n> in a particular \n> group, click the post message button. You're done \n> \n> with your first \n> one! Congratulations... That is it! All you \n> \n> have to do \n> is jump to different newsgroups and post away, after \n> you \n> \n> get the hang \n> of it, it will take about 30 seconds for each \n> \n> newsgroup! 
** REMEMBER, THE MORE NEWSGROUPS YOU POST \n> IN, THE MORE \n> \n> MONEY YOU WILL \n> MAKE!! BUT YOU HAVE TO POST A MINIMUM OF \n> 200** \n> \n> That is it! \n> You will begin receiving money form around the world \n> within \n> days! You may eventually want to rent a P.O. \n> Box due to the large amount \n> \n> of mail you \n> will receive. If you wish to stay anonymous, you can \n> \n> invent \n> \n> a name to \n> use, as long as the postman will deliver it. \n> \n> **JUST MAKE SURE ALL THE ADDRESSES ARE CORRECT.** \n> \n> Now the WHY part: \n> \n> Out of 200 \n> postings, say I receive only 5 replies (a very low \n> \n> \n> example). So then \n> I made $5.00 with my name at #6 on the \n> \n> letter. \n> \n> Now, each of \n> the 5 persons who just sent me $1.00 make the \n> \n> MINIMUM 200 \n> \n> postings, each with \n> my name at #5 and only 5 persons respond to \n> \n> \n> each of the \n> original 5, this is an additional $25.00 for me. \n> \n> Now those 25 \n> each make 200 MININUM posts with my name at #4 \n> \n> and only 5 \n> \n> replies each. This \n> brings in an additional $125.00. Now, those 125 \n> \n> persons turn around \n> and post the MINIMUM 200 with my name at \n> #3 \n> \n> and receive 5 \n> replies each, I will make an additional $625.00. \n> \n> Ok, now here \n> is the fun part, each of those 625 people post \n> a \n> MINIMUM \n> \n> 200 letters with \n> my name at #2 and they receive 5 replies each. \n> \n> That \n> \n> just made me \n> $3,125.00!!! Those 3,125 persons will all deliver this \n> \n> message to 200 \n> newsgroups with my name at #1 and if still 5 \n> \n> persons \n> \n> per 200 react, \n> I will receive an additional $15,625.00!! With an \n> \n> investment of only \n> $6.00! AMAZING! When your name is no longer \n> \n> on the list, \n> you just take the latest posting in the newsgroups, \n> \n> and send out \n> another $6.00 to names on the list, putting your \n> name at \n> \n> number 6 again. \n> And start posting again. The thing to remeber \n> \n> is: do you \n> realize that thousands of people all over the world are \n> \n> \n> joining the internet \n> and reading these articles everyday? JUST \n> LIKE YOU \n> \n> are now!! So, \n> can you afford $6.00 and see if it really works?? \n> I \n> \n> think so... People \n> have said, \"what if the plan is played out and \n> no \n> \n> one sends you \n> the money? So what! What are the chances of \n> that \n> \n> Happening when there \n> are tons of new honest users and new \n> honest \n> \n> people who are \n> joining the internet and newsgroups everyday and \n> \n> are willing to \n> give it a try? Anyway, it is only $6.00 for \n> a chance \n> \n> at thousands. Estimates \n> are at 20,000 to 50,000 new users, every \n> \n> day, with thousands \n> of those joining the actual internet. \n> \n> Remember, play FAIRLY \n> and HONESTLY and this will really work. \n> You \n> \n> wouldn't want someone \n> to cheat you the same way you may be \n> cheating!\n> \n> \n> \t\t\t\n> \n> Geocrawler.com - The Knowledge Archive\n> \n> ************\n> \n> \n\n",
"msg_date": "Sat, 18 Sep 1999 22:38:40 -0500 (EST)",
"msg_from": "\"J. Michael Roberts\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] money;money;money -- spam;spam;spam"
}
] |
[
{
"msg_contents": "\nMorning...\n\n\tThis weekend, up at a clients site working with them on improving\ndatabase performance. They are currently running MySQL and I'm trying to\nconvince them to switch over to PostgreSQL, for various features that they\njust don't have with MySQL...\n\n\tOne of the 'queries' that they are currently doing with MySQL\nconsists of two queries that I can reduce down to one using subqueries,\nbut its so slow that its ridiculous...so I figured I'd throw it out as a\nproblem to hopefully solve?\n\n\tThe query I'm starting out with is works out as:\n\n SELECT id, name, url \\\n FROM aecCategory \\\n WHERE ppid='$indid' \\\n AND pid='$divid'\";\n\n\tThe results of this get fed into a while look that takes the id\nreturned and pushes them into:\n\n\t SELECT distinct b.indid, b.divid, b.catid, a.id, a.mid \\\n FROM aecEntMain a, aecWebEntry b \\\n WHERE (a.id=b.id AND a.mid=b.mid) \\\n AND (a.status like 'active%' and b.status like 'active%')\n AND (a.status like '%active:ALL%' and b.status like '%active:ALL%')\n AND (a.representation like '%:ALL%')\n AND (b.indid='$indid' and b.divid='$divid' and b.catid='$catid')\";\n\n\tNow, I can/have rewritten this as:\n\nSELECT id, name, url \n FROM aecCategory \n WHERE ppid='$indid' \n AND pid='$divid' \n AND id IN ( \nSELECT distinct c.id \n FROM aecEntMain a, aecWebEntry b, aecCategory c \n WHERE (a.id=b.id AND a.mid=b.mid and b.catid=c.id) \n AND (a.status like 'active%' and b.status like 'active%') \n AND (a.status like '%active:ALL%' and b.status like '%active:ALL%') \n AND (a.representation like '%:ALL%') \n AND (b.indid='$indid' and b.divid='$divid' and b.catid IN ( \n SELECT id FROM aecCategory WHERE ppid='$indid' AND pid='$divid' ) \n ));\";\n\n\tAn explain of the above shows:\n\nIndex Scan using aeccategory_primary on aeccategory (cost=8.28 rows=1 width=36)\n SubPlan\n -> Unique (cost=1283.70 rows=21 width=72)\n -> Sort (cost=1283.70 rows=21 width=72)\n -> Nested Loop (cost=1283.70 rows=21 width=72)\n -> Nested Loop (cost=1280.70 rows=1 width=60)\n -> Index Scan using aecwebentry_primary on aecwebentry b \n (cost=1278.63 rows=1 width=36)\n SubPlan\n -> Index Scan using aeccategory_primary on aeccategory \n (cost=8.28 rows=1 width=12)\n -> Index Scan using aecentmain_primary on aecentmain a \n (cost=2.07 rows=348 width=24)\n -> Index Scan using aeccategory_id on aeccategory c \n (cost=3.00 rows=1170 width=12)\n\n\tNow, a few things bother me with the above explain output, based on me \nhopefully reading this right...\n\n\tThe innermost SubPlan reports an estimated rows returned of 1...the \nactual query returns 59 rows...slightly off?\n\n\tThe one that bothers me is the one that reports 1170 rows returned...if you\nlook at the query, the only thing that would/should use aeccategory_id is the\nline that goes \"SELECT distinct c.id\"...if I run just that section of the \nquery, it yields a result of 55 rows...way off??\n\n\tAll of my queries are currently on static data, after a vacuum analyze has \nbeen performed...everything is faster if I split things up and do a SELECT\non a per id basis on return values, but, as the list of 'ids' grow, the\nnumber of iterations of the while loop required will slow down the query...\n\n\tI'm not sure what else to look at towards optimizing the query further,\nor is this something that we still are/need to look at in the server itself?\n\n\tThe machine we are working off of right now is an idle Dual-PIII 450Mhz with\n512Meg of RAM, very fast SCSI hard drives on a UW controller...and 
that query\nis the only thing running while we test things...so we aren't under-powered :)\n\n\tideas? \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sun, 19 Sep 1999 02:29:57 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "[6.5.2] join problems ..."
}
] |
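Marc's complaint about the row estimates (1 estimated vs 59 actual, 1170 vs 55) can be narrowed down by comparing EXPLAIN's guess for each subquery against a real count; a sketch reusing the table names from the post, with $indid/$divid standing in for literal values as they do above:

    -- what the planner expects from the innermost subquery
    EXPLAIN SELECT id FROM aecCategory WHERE ppid='$indid' AND pid='$divid';

    -- what that subquery actually returns
    SELECT count(*) FROM aecCategory WHERE ppid='$indid' AND pid='$divid';

    -- refresh the optimizer statistics first if the table has changed
    -- since the last vacuum analyze
    VACUUM ANALYZE aecCategory;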
[
{
"msg_contents": "One idea, which takes into account the thought that moving the admin commands\nout of /usr/bin is a good thing, but moving them into /usr/sbin is bad, and\nwe want to keep it simple for new people.\n\nHows about the commands are stored in ~postgres (or whateber you are using\nas an admin account). This is obviously configurable with --admin-dir in the\nconfigure script.\n\nThis would probably work as:\n\na ) new admins that arent familiar with a system will likely have . in the\nuser paths, thus the commands will work\n\nb ) experienced admins can just choose where to install things\n\n\t\t\t\t\t\t~Michael\n",
"msg_date": "Sun, 19 Sep 1999 07:36:08 +0100 (BST)",
"msg_from": "Michael Simms <[email protected]>",
"msg_from_op": true,
"msg_subject": "Command Locations (was Re: HISTORY for 6.5....)"
},
{
"msg_contents": "On Sun, 19 Sep 1999, Michael Simms wrote:\n> One idea, which takes into account the thought that moving the admin commands\n> out of /usr/bin is a good thing, but moving them into /usr/sbin is bad, and\n> we want to keep it simple for new people.\n> \n> Hows about the commands are stored in ~postgres (or whateber you are using\n> as an admin account). This is obviously configurable with --admin-dir in the\n> configure script.\n\nUnder RedHat, ~postgres is /var/lib/pgsql, not the _obvious_ /home/postgres. \nNo, there needs to be a particular place for such commands. And /usr/sbin is\nTHE FSSTND-mandated place (now called the FHS -- www.pathname.com/fhs). Quoting\nFHS 2.0:\n---------------------------------------\nFilesystem Hierarchy Standard\n\n4.7 /usr/sbin : Non-essential standard system binaries\n\nThis directory contains any non-essential binaries used exclusively by the system \nadministrator. System administration programs that are required for system\nrepair, system recovery, mounting /usr, or other essential functions should be\nplaced in /sbin instead. \n\nTypically, /usr/sbin contains networking daemons, any non-essential administration t\nools, and binaries for non-critical server programs. These server programs are used \nwhen entering the System V states known as \"run level 2\" (multi-user state)\nand \"run level 3\" (networked state) or the BSD state known as \"multi-user\nmode\". At this point the system is making services available to users (e.g.,\nprinter support) and to other hosts (e.g., NFS exports). \n\n-----------------------\nNow, looking into my /usr/sbin, I find two owners -- root, and uucp. That's\nright -- most of the uucp stuff that is not executed except during daemon-time\n(uucico, uuxqt, and friends) is in /usr/sbin. Making the database service\navailable in \"multi-user\" mode is a good job for a binary in /usr/sbin.\n\nNow, this is only if PostgreSQL is being installed in an FHS-compliant manner. \nOtherwise, make a /usr/local/pgsql/sbin.\n\nIt might be useful to provide an FHS-compliant configure option (hey, it would\nmake it easier for us packagers ;-)).\n\n> a ) new admins that arent familiar with a system will likely have . in the\n> user paths, thus the commands will work\n\nWhoa. Hold on. Having '.' in PATH is a _major_ security hole. It is almost\nnever a good idea for '.' to be on the PATH. If you want to go the ~postgres\nroute, make a bin or sbin dir under ~postgres, and add '~postgres/bin' to PATH\nin .profile.\n\nIMHO.\n\nLamar Owen\nWGCR Internet Radio\n",
"msg_date": "Sun, 19 Sep 1999 15:17:29 -0400",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Command Locations (was Re: HISTORY for 6.5....)"
},
{
"msg_contents": "> One idea, which takes into account the thought that moving the admin commands\n> out of /usr/bin is a good thing, but moving them into /usr/sbin is bad, and\n> we want to keep it simple for new people.\n\nI assume this is only an RPM discussion. All third party stuff should\ngo in /usr/local I think.\n\n\n> \n> Hows about the commands are stored in ~postgres (or whateber you are using\n> as an admin account). This is obviously configurable with --admin-dir in the\n> configure script.\n> \n> This would probably work as:\n> \n> a ) new admins that arent familiar with a system will likely have . in the\n> user paths, thus the commands will work\n> \n> b ) experienced admins can just choose where to install things\n> \n> \t\t\t\t\t\t~Michael\n> \n> ************\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 19 Sep 1999 16:54:28 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Command Locations (was Re: HISTORY for 6.5....)"
},
{
"msg_contents": "On Sun, 19 Sep 1999, Bruce Momjian wrote:\n> > One idea, which takes into account the thought that moving the admin commands\n> > out of /usr/bin is a good thing, but moving them into /usr/sbin is bad, and\n> > we want to keep it simple for new people.\n> \n> I assume this is only an RPM discussion. All third party stuff should\n> go in /usr/local I think.\n\nYou assume partially correctly -- it started out that way. In the meantime,\nthe general issue of administrative commands versus user commands arose, and\nwith it, the administrative man page thingy. \n\nI state things the way I do because RedHat is shipping PostgreSQL as a piece of\nbona-fide Systems software -- fundamentally part of their distribution. This\nchanges all the rules -- turns them on end, in reality. While a FHS-compliant\nlinux distribution that does not ship PostgreSQL would need PostgreSQL in\n/usr/local (after all, an OS upgrade could destroy it if it's installed\nelsewhere), RedHat needs it in the FHS-mandated locations, because PostgreSQL\nis part of RedHat's OS. And, I am attempting to maintain a peice that is\nshipping as part of their OS -- which gives me a slightly different point of\nview from other PostgreSQL developers/maintainers.\n\nMore information about the RedHat-ized PostgreSQL install locations is at:\nhttp://www.ramifordistat.net/postgres/build-it/README.rpm.postgresql-6.5.1\n\nLamar Owen\nWGCR Internet Radio\n",
"msg_date": "Sun, 19 Sep 1999 17:17:18 -0400",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Command Locations (was Re: HISTORY for 6.5....)"
},
{
"msg_contents": "> I state things the way I do because RedHat is shipping PostgreSQL as a piece of\n> bona-fide Systems software -- fundamentally part of their distribution. This\n> changes all the rules -- turns them on end, in reality. While a FHS-compliant\n> linux distribution that does not ship PostgreSQL would need PostgreSQL in\n> /usr/local (after all, an OS upgrade could destroy it if it's installed\n> elsewhere), RedHat needs it in the FHS-mandated locations, because PostgreSQL\n> is part of RedHat's OS. And, I am attempting to maintain a peice that is\n> shipping as part of their OS -- which gives me a slightly different point of\n> view from other PostgreSQL developers/maintainers.\n\nSure, makes sense.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 19 Sep 1999 20:02:35 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Command Locations (was Re: HISTORY for 6.5....)"
},
{
"msg_contents": "On Sun, 19 Sep 1999, Bruce Momjian wrote:\n\n> > One idea, which takes into account the thought that moving the admin commands\n> > out of /usr/bin is a good thing, but moving them into /usr/sbin is bad, and\n> > we want to keep it simple for new people.\n> \n> I assume this is only an RPM discussion. All third party stuff should\n> go in /usr/local I think.\n\nI can't agree more...I like things going into /usr/local/pgsql by\ndefault...\n\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 20 Sep 1999 18:40:45 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Command Locations (was Re: HISTORY for 6.5....)"
}
] |
[
{
"msg_contents": "The query you've presented is rather convoluted, but\nif I'm reading your query correctly, it should reduce\nto a simple, three-way join:\n\nSELECT c.id, c.name, c.url \nFROM aecEntMain a, aecWebEntry b, aecCategory c\nWHERE a.status LIKE 'active:ALL%'\nAND a.representation LIKE '%:ALL%'\nAND b.status LIKE 'active:ALL%'\nAND b.indid='$indid' \nAND b.divid='$divid' \nAND (a.id,a.mid = b.id,b.mid) \nAND (b.catid,b.indid,b.divid = c.id,c.ppid,c.pid);\n\nwith the following indexes:\naecEntMain: (status) and (id,mid)\naecWebEntry: (status), (indid), (divid), and\n (catid,indid,divid)\naecCategory: (id,ppid,pid)\n\nNow, there are some differences between the above and\nwhat you wrote. For example, the above requires that \nthe status begins with 'active:ALL'. Your query \nrequires the status begin with 'active' and must also\ncontain the pattern 'active:ALL'. So for the above \nto be equivalent, you can't have a status such as\n'active <some stuff> active:ALL'.\n\nWith respect to subqueries and PostgreSQL, as you \nknow, the IN clause requires a nested scan. If you\nare going to use subqueries, correlated subqueries\nusing EXISTS clauses can use indexes:\n\nSELECT c.id, c.name, c.url \nFROM aecCategory c\nWHERE EXISTS (\nSELECT a.status \nFROM aecEntMain a, aecWebEntry b\nWHERE a.status LIKE 'active:ALL%'\nAND a.representation LIKE '%:ALL%'\nAND b.status LIKE 'active:ALL%'\nAND b.indid='$indid' \nAND b.divid='$divid' \nAND (a.id,a.mid = b.id,b.mid) \nAND (b.catid,b.indid,b.divid = c.id,c.ppid,c.pid));\n\nUnfortunately, the lack of index support in IN\nsubqueries affects more than just the IN subquery \nclause, since INTERSECT/EXCEPT uses the rewriter to\nrewrite such queries as UNIONS of two queries with\nan IN/NOT IN subquery, respectively. This makes the\nINTERSECT/EXCEPT feature functionally useless except\non very small tables.\n\nHope that helps (and is equivalent),\n\nMike Mascari ([email protected])\n\n--- The Hermit Hacker <[email protected]> wrote:\n> \n> Morning...\n> \n> \tThis weekend, up at a clients site working with\n> them on improving\n> database performance. 
They are currently running\n> MySQL and I'm trying to\n> convince them to switch over to PostgreSQL, for\n> various features that they\n> just don't have with MySQL...\n> \n> \tOne of the 'queries' that they are currently doing\n> with MySQL\n> consists of two queries that I can reduce down to\n> one using subqueries,\n> but its so slow that its ridiculous...so I figured\n> I'd throw it out as a\n> problem to hopefully solve?\n> \n> \tThe query I'm starting out with is works out as:\n> \n> SELECT id, name, url \\\n> FROM aecCategory \\\n> WHERE ppid='$indid' \\\n> AND pid='$divid'\";\n> \n> \tThe results of this get fed into a while look that\n> takes the id\n> returned and pushes them into:\n> \n> \t SELECT distinct b.indid, b.divid, b.catid,\n> a.id, a.mid \\\n> FROM aecEntMain a, aecWebEntry b \\\n> WHERE (a.id=b.id AND a.mid=b.mid) \\\n> AND (a.status like 'active%' and b.status\n> like 'active%')\n> AND (a.status like '%active:ALL%' and\n> b.status like '%active:ALL%')\n> AND (a.representation like '%:ALL%')\n> AND (b.indid='$indid' and\n> b.divid='$divid' and b.catid='$catid')\";\n> \n> \tNow, I can/have rewritten this as:\n> \n> SELECT id, name, url \n> FROM aecCategory \n> WHERE ppid='$indid' \n> AND pid='$divid' \n> AND id IN ( \n> SELECT distinct c.id \n> FROM aecEntMain a, aecWebEntry b, aecCategory c \n> WHERE (a.id=b.id AND a.mid=b.mid and b.catid=c.id) \n> AND (a.status like 'active%' and b.status like\n> 'active%') \n> AND (a.status like '%active:ALL%' and b.status\n> like '%active:ALL%') \n> AND (a.representation like '%:ALL%') \n> AND (b.indid='$indid' and b.divid='$divid' and\n> b.catid IN ( \n> SELECT id FROM aecCategory WHERE\n> ppid='$indid' AND pid='$divid' ) \n> ));\";\n> \n> \tAn explain of the above shows:\n> \n> Index Scan using aeccategory_primary on aeccategory \n> (cost=8.28 rows=1 width=36)\n> SubPlan\n> -> Unique (cost=1283.70 rows=21 width=72)\n> -> Sort (cost=1283.70 rows=21 width=72)\n> -> Nested Loop (cost=1283.70 rows=21\n> width=72)\n> -> Nested Loop (cost=1280.70 rows=1\n> width=60)\n> -> Index Scan using aecwebentry_primary\n> on aecwebentry b \n> (cost=1278.63 rows=1 width=36)\n> SubPlan\n> -> Index Scan using\n> aeccategory_primary on aeccategory \n> (cost=8.28 rows=1\n> width=12)\n> -> Index Scan using aecentmain_primary on\n> aecentmain a \n> (cost=2.07 rows=348 width=24)\n> -> Index Scan using aeccategory_id on\n> aeccategory c \n> (cost=3.00 rows=1170 width=12)\n> \n> \tNow, a few things bother me with the above explain\n> output, based on me \n> hopefully reading this right...\n> \n> \tThe innermost SubPlan reports an estimated rows\n> returned of 1...the \n> actual query returns 59 rows...slightly off?\n> \n> \tThe one that bothers me is the one that reports\n> 1170 rows returned...if you\n> look at the query, the only thing that would/should\n> use aeccategory_id is the\n> line that goes \"SELECT distinct c.id\"...if I run\n> just that section of the \n> query, it yields a result of 55 rows...way off??\n> \n> \tAll of my queries are currently on static data,\n> after a vacuum analyze has \n> been performed...everything is faster if I split\n> things up and do a SELECT\n> on a per id basis on return values, but, as the list\n> of 'ids' grow, the\n> number of iterations of the while loop required will\n> slow down the query...\n> \n> \tI'm not sure what else to look at towards\n> optimizing the query further,\n> or is this something that we still are/need to look\n> at in the server itself?\n> \n> \tThe machine we are working off of right now 
is an\n> idle Dual-PIII 450Mhz with\n> 512Meg of RAM, very fast SCSI hard drives on a UW\n> controller...and that query\n> is the only thing running while we test things...so\n> we aren't under-powered :)\n> \n> \tideas? \n> \n> Marc G. Fournier ICQ#7615664 \n> IRC Nick: Scrappy\n> Systems Administrator @ hub.org \n> primary: [email protected] secondary:\n> scrappy@{freebsd|postgresql}.org \n\n__________________________________________________\nDo You Yahoo!?\nBid and sell for free at http://auctions.yahoo.com\n",
"msg_date": "Sun, 19 Sep 1999 00:17:24 -0700 (PDT)",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] [6.5.2] join problems ..."
},
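Mike's suggested indexes, spelled out as DDL; the index names here are invented for illustration, since his note gives only the column lists:

    -- aecEntMain: (status) and (id,mid)
    CREATE INDEX aecentmain_status_idx ON aecEntMain (status);
    CREATE INDEX aecentmain_id_mid_idx ON aecEntMain (id, mid);

    -- aecWebEntry: (status), (indid), (divid), and (catid,indid,divid)
    CREATE INDEX aecwebentry_status_idx ON aecWebEntry (status);
    CREATE INDEX aecwebentry_indid_idx ON aecWebEntry (indid);
    CREATE INDEX aecwebentry_divid_idx ON aecWebEntry (divid);
    CREATE INDEX aecwebentry_cid_idx ON aecWebEntry (catid, indid, divid);

    -- aecCategory: (id,ppid,pid)
    CREATE INDEX aeccategory_id_ppid_pid_idx ON aecCategory (id, ppid, pid);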
{
"msg_contents": "On Sun, 19 Sep 1999, Mike Mascari wrote:\n\n> SELECT c.id, c.name, c.url \n> FROM aecEntMain a, aecWebEntry b, aecCategory c\n> WHERE a.status LIKE 'active:ALL%'\n> AND a.representation LIKE '%:ALL%'\n> AND b.status LIKE 'active:ALL%'\n> AND b.indid='$indid' \n> AND b.divid='$divid' \n> AND (a.id,a.mid = b.id,b.mid) \n> AND (b.catid,b.indid,b.divid = c.id,c.ppid,c.pid);\n\nOnly point I'd like to make (thanks for all the details, gives me alot to\nwork with...) is that the above is not valid in PostgreSQL, it seems...I\nchanged the last two AND lines to be:\n\nAND (a.id=b.id AND a.mid=b.mid) \nAND (b.catid=c.id AND b.indid=c.ppid AND b.divid=c.pid)\n\nand it eliminiated the error, but gave me zero results...\n\nPlease note, in my own defence...I'm working on cleaning up a mess created\nby someone else using MySQL...what has to be done is a cleanup of the\ntables themselves, but trying to fix some of the SQL first appears to be\nthe \"route of least resistance\" :( Or, at least, it appeared to\nbe...starting to change my mind on that one heavily :)\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sun, 19 Sep 1999 10:49:10 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [6.5.2] join problems ..."
},
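One way to tell whether the zero rows come from the tightened status pattern (Mike's caveat about 'active <some stuff> active:ALL') rather than from the join itself is to count matches under both predicate styles; a sketch against the same column:

    -- the original, looser pair of patterns
    SELECT count(*) FROM aecEntMain a
     WHERE a.status LIKE 'active%' AND a.status LIKE '%active:ALL%';

    -- the tightened single pattern from the rewritten join
    SELECT count(*) FROM aecEntMain a
     WHERE a.status LIKE 'active:ALL%';

If the two counts differ, the patterns are not equivalent for this data and the looser pair belongs back in the rewritten query.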
{
"msg_contents": "<snip>\n> > AND (a.id,a.mid = b.id,b.mid)\n> > AND (b.catid,b.indid,b.divid = c.id,c.ppid,c.pid);\n> ... the above is not valid in PostgreSQL, it seems...\n\nI have to resort to looking at gram.y for this, since I currently have\nthe Postgres parser in bits and pieces all over the garage floor ;)\n\nThe expressions are *almost* valid for Postgres. The difference is\nthat you need to put parens around each side of the \"row expression\":\n\n | '(' row_descriptor ')' row_op '(' row_descriptor ')'\n {\n $$ = makeRowExpr($4, $2, $6);\n }\n ;\n\nI had implemented this using Date and Darwen as a reference, and afaik\nthe SQL standard (and any sensible parser) *requires* parens around\nthe row expression, referred to in gram.y as a \"row descriptor\".\n\nSo, the following should work:\n\n AND ((a.id,a.mid) = (b.id,b.mid))\n AND ((b.catid,b.indid,b.divid) = (c.id,c.ppid,c.pid));\n\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Sun, 19 Sep 1999 14:54:47 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [6.5.2] join problems ..."
},
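Folding Thomas's parenthesized row descriptors back into Mike's three-way join gives something along these lines; same tables and predicates as above, with only the last two AND clauses changed, and per the gram.y rule quoted it should now get past the parser:

    SELECT c.id, c.name, c.url
      FROM aecEntMain a, aecWebEntry b, aecCategory c
     WHERE a.status LIKE 'active:ALL%'
       AND a.representation LIKE '%:ALL%'
       AND b.status LIKE 'active:ALL%'
       AND b.indid='$indid'
       AND b.divid='$divid'
       AND ((a.id, a.mid) = (b.id, b.mid))
       AND ((b.catid, b.indid, b.divid) = (c.id, c.ppid, c.pid));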
{
"msg_contents": "> With respect to subqueries and PostgreSQL, as you \n> know, the IN clause requires a nested scan. If you\n> are going to use subqueries, correlated subqueries\n> using EXISTS clauses can use indexes:\n> \n> SELECT c.id, c.name, c.url \n> FROM aecCategory c\n> WHERE EXISTS (\n> SELECT a.status \n> FROM aecEntMain a, aecWebEntry b\n> WHERE a.status LIKE 'active:ALL%'\n> AND a.representation LIKE '%:ALL%'\n> AND b.status LIKE 'active:ALL%'\n> AND b.indid='$indid' \n> AND b.divid='$divid' \n> AND (a.id,a.mid = b.id,b.mid) \n> AND (b.catid,b.indid,b.divid = c.id,c.ppid,c.pid));\n> \n> Unfortunately, the lack of index support in IN\n> subqueries affects more than just the IN subquery \n> clause, since INTERSECT/EXCEPT uses the rewriter to\n> rewrite such queries as UNIONS of two queries with\n> an IN/NOT IN subquery, respectively. This makes the\n> INTERSECT/EXCEPT feature functionally useless except\n> on very small tables.\n\nYes, we are aware of that IN limitation, and I keep trying to get it\nfixed.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 19 Sep 1999 16:55:38 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [6.5.2] join problems ..."
},
{
"msg_contents": "\nComparing my original query against yours, idle machine:\n\nMine: 0.000u 0.023s 0:07.78 0.2% 48+132k 0+0io 0pf+0w (55 rows)\nYour: 0.006u 0.018s 0:12.16 0.0% 408+904k 0+0io 0pf+0w (55 rows)\n\nTakes longer to run, less CPU resources, but, if I'm reading this right,\nmore memory resources?\n\nOn Sun, 19 Sep 1999, Mike Mascari wrote:\n\n> The query you've presented is rather convoluted, but\n> if I'm reading your query correctly, it should reduce\n> to a simple, three-way join:\n> \n> SELECT c.id, c.name, c.url \n> FROM aecEntMain a, aecWebEntry b, aecCategory c\n> WHERE a.status LIKE 'active:ALL%'\n> AND a.representation LIKE '%:ALL%'\n> AND b.status LIKE 'active:ALL%'\n> AND b.indid='$indid' \n> AND b.divid='$divid' \n> AND (a.id,a.mid = b.id,b.mid) \n> AND (b.catid,b.indid,b.divid = c.id,c.ppid,c.pid);\n> \n> with the following indexes:\n> aecEntMain: (status) and (id,mid)\n> aecWebEntry: (status), (indid), (divid), and\n> (catid,indid,divid)\n> aecCategory: (id,ppid,pid)\n> \n> Now, there are some differences between the above and\n> what you wrote. For example, the above requires that \n> the status begins with 'active:ALL'. Your query \n> requires the status begin with 'active' and must also\n> contain the pattern 'active:ALL'. So for the above \n> to be equivalent, you can't have a status such as\n> 'active <some stuff> active:ALL'.\n> \n> With respect to subqueries and PostgreSQL, as you \n> know, the IN clause requires a nested scan. If you\n> are going to use subqueries, correlated subqueries\n> using EXISTS clauses can use indexes:\n> \n> SELECT c.id, c.name, c.url \n> FROM aecCategory c\n> WHERE EXISTS (\n> SELECT a.status \n> FROM aecEntMain a, aecWebEntry b\n> WHERE a.status LIKE 'active:ALL%'\n> AND a.representation LIKE '%:ALL%'\n> AND b.status LIKE 'active:ALL%'\n> AND b.indid='$indid' \n> AND b.divid='$divid' \n> AND (a.id,a.mid = b.id,b.mid) \n> AND (b.catid,b.indid,b.divid = c.id,c.ppid,c.pid));\n> \n> Unfortunately, the lack of index support in IN\n> subqueries affects more than just the IN subquery \n> clause, since INTERSECT/EXCEPT uses the rewriter to\n> rewrite such queries as UNIONS of two queries with\n> an IN/NOT IN subquery, respectively. This makes the\n> INTERSECT/EXCEPT feature functionally useless except\n> on very small tables.\n> \n> Hope that helps (and is equivalent),\n> \n> Mike Mascari ([email protected])\n> \n> --- The Hermit Hacker <[email protected]> wrote:\n> > \n> > Morning...\n> > \n> > \tThis weekend, up at a clients site working with\n> > them on improving\n> > database performance. 
They are currently running\n> > MySQL and I'm trying to\n> > convince them to switch over to PostgreSQL, for\n> > various features that they\n> > just don't have with MySQL...\n> > \n> > \tOne of the 'queries' that they are currently doing\n> > with MySQL\n> > consists of two queries that I can reduce down to\n> > one using subqueries,\n> > but its so slow that its ridiculous...so I figured\n> > I'd throw it out as a\n> > problem to hopefully solve?\n> > \n> > \tThe query I'm starting out with is works out as:\n> > \n> > SELECT id, name, url \\\n> > FROM aecCategory \\\n> > WHERE ppid='$indid' \\\n> > AND pid='$divid'\";\n> > \n> > \tThe results of this get fed into a while look that\n> > takes the id\n> > returned and pushes them into:\n> > \n> > \t SELECT distinct b.indid, b.divid, b.catid,\n> > a.id, a.mid \\\n> > FROM aecEntMain a, aecWebEntry b \\\n> > WHERE (a.id=b.id AND a.mid=b.mid) \\\n> > AND (a.status like 'active%' and b.status\n> > like 'active%')\n> > AND (a.status like '%active:ALL%' and\n> > b.status like '%active:ALL%')\n> > AND (a.representation like '%:ALL%')\n> > AND (b.indid='$indid' and\n> > b.divid='$divid' and b.catid='$catid')\";\n> > \n> > \tNow, I can/have rewritten this as:\n> > \n> > SELECT id, name, url \n> > FROM aecCategory \n> > WHERE ppid='$indid' \n> > AND pid='$divid' \n> > AND id IN ( \n> > SELECT distinct c.id \n> > FROM aecEntMain a, aecWebEntry b, aecCategory c \n> > WHERE (a.id=b.id AND a.mid=b.mid and b.catid=c.id) \n> > AND (a.status like 'active%' and b.status like\n> > 'active%') \n> > AND (a.status like '%active:ALL%' and b.status\n> > like '%active:ALL%') \n> > AND (a.representation like '%:ALL%') \n> > AND (b.indid='$indid' and b.divid='$divid' and\n> > b.catid IN ( \n> > SELECT id FROM aecCategory WHERE\n> > ppid='$indid' AND pid='$divid' ) \n> > ));\";\n> > \n> > \tAn explain of the above shows:\n> > \n> > Index Scan using aeccategory_primary on aeccategory \n> > (cost=8.28 rows=1 width=36)\n> > SubPlan\n> > -> Unique (cost=1283.70 rows=21 width=72)\n> > -> Sort (cost=1283.70 rows=21 width=72)\n> > -> Nested Loop (cost=1283.70 rows=21\n> > width=72)\n> > -> Nested Loop (cost=1280.70 rows=1\n> > width=60)\n> > -> Index Scan using aecwebentry_primary\n> > on aecwebentry b \n> > (cost=1278.63 rows=1 width=36)\n> > SubPlan\n> > -> Index Scan using\n> > aeccategory_primary on aeccategory \n> > (cost=8.28 rows=1\n> > width=12)\n> > -> Index Scan using aecentmain_primary on\n> > aecentmain a \n> > (cost=2.07 rows=348 width=24)\n> > -> Index Scan using aeccategory_id on\n> > aeccategory c \n> > (cost=3.00 rows=1170 width=12)\n> > \n> > \tNow, a few things bother me with the above explain\n> > output, based on me \n> > hopefully reading this right...\n> > \n> > \tThe innermost SubPlan reports an estimated rows\n> > returned of 1...the \n> > actual query returns 59 rows...slightly off?\n> > \n> > \tThe one that bothers me is the one that reports\n> > 1170 rows returned...if you\n> > look at the query, the only thing that would/should\n> > use aeccategory_id is the\n> > line that goes \"SELECT distinct c.id\"...if I run\n> > just that section of the \n> > query, it yields a result of 55 rows...way off??\n> > \n> > \tAll of my queries are currently on static data,\n> > after a vacuum analyze has \n> > been performed...everything is faster if I split\n> > things up and do a SELECT\n> > on a per id basis on return values, but, as the list\n> > of 'ids' grow, the\n> > number of iterations of the while loop required will\n> > slow down the query...\n> > 
\n> > \tI'm not sure what else to look at towards\n> > optimizing the query further,\n> > or is this something that we still are/need to look\n> > at in the server itself?\n> > \n> > \tThe machine we are working off of right now is an\n> > idle Dual-PIII 450Mhz with\n> > 512Meg of RAM, very fast SCSI hard drives on a UW\n> > controller...and that query\n> > is the only thing running while we test things...so\n> > we aren't under-powered :)\n> > \n> > \tideas? \n> > \n> > Marc G. Fournier ICQ#7615664 \n> > IRC Nick: Scrappy\n> > Systems Administrator @ hub.org \n> > primary: [email protected] secondary:\n> > scrappy@{freebsd|postgresql}.org \n> \n> __________________________________________________\n> Do You Yahoo!?\n> Bid and sell for free at http://auctions.yahoo.com\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 20 Sep 1999 18:27:09 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [6.5.2] join problems ..."
}
] |
[
{
"msg_contents": "Thanks guys, that was swift work.\n\nKeith.\n\n>From: Thomas Lockhart <[email protected]>\n\n>\n>> Fixed --- here is the patch for REL6_5.\n>\n>Thanks. I'll keep poking at join syntax instead of looking at this...\n>\n> - Thomas\n>\n\n",
"msg_date": "Sun, 19 Sep 1999 10:43:17 +0100 (BST)",
"msg_from": "Keith Parks <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] case bug?"
}
] |
[
{
"msg_contents": "This message was sent from Geocrawler.com by \"Tim Perdue\" <[email protected]>\nBe sure to reply to that address.\n\nSorry about the spam that came from my site, Geocrawler. I have revoked the user's rights.\n\nHere is the info that Kent registered with, if anyone wishes to take this further with his ISP:\n\nKent Diskey \[email protected]\n208.136.254.134\nspringdale-058.nwark.net\n19990918215911pm\n\nI hate spammers and go to great lengths to protect email addresses and mail lists from it, but nothing is foolproof when you are dealing with \"these types of people\".\n\nTim Perdue\[email protected]\t\n\nGeocrawler.com - The Knowledge Archive\n",
"msg_date": "Sun, 19 Sep 1999 08:04:53 -0500",
"msg_from": "\"Geocrawler.com\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Spam"
}
] |
[
{
"msg_contents": "\nUsing the exact same data, and the exact same queries (dbi is cool):\n\nMySQL: 0.498u 0.150s 0:02.50 25.6% 10+1652k 0+0io 0pf+0w\nPgSQL: 0.494u 0.061s 0:19.78 2.7% 10+1532k 0+0io 0pf+0w\n\nThe main query that appears to be \"dog slow\" is:\n\n SELECT distinct b.indid, b.divid, b.catid, a.id, a.mid \\\n FROM aecEntMain a, aecWebEntry b \\\n WHERE (a.id=b.id AND a.mid=b.mid) \\\n AND (a.status like 'active%' and b.status like 'active%')\n AND (a.status like '%active:ALL%' and b.status like '%active:ALL%')\n AND (a.representation like '%:ALL%')\n AND (b.indid=? and b.divid=? and b.catid=?)\";\n\nWhere, unfortunately, getting rid of those LIKE comparisons will be next to\nimpossible in the short time...\n\n>From the 'time' numbers, MySQL is running ~17sec faster, but uses up 23%\nmore CPU to do this...so where is our slowdown? Obviously it isn't a lack\nof CPU...all else is equal...hardware wise, both are running on the same \nmachine.\n\nIf I get rid of the three lines above that deal with LIKE, the results\nare:\n\nMySQL: 0.497u 0.168s 0:01.48 43.9% 9+1519k 0+0io 0pf+0w\nPgSQL: 0.504u 0.052s 0:17.81 3.0% 10+1608k 0+0io 0pf+0w\n\nSo, blaming things on the LIKE conditions is totally inappropriate...\n\nAnd looking at the EXPLAIN of the above, I have enough indices:\n\nNOTICE: QUERY PLAN:\n\nUnique (cost=1271.15 rows=5 width=84)\n -> Sort (cost=1271.15 rows=5 width=84)\n -> Nested Loop (cost=1271.15 rows=5 width=84)\n -> Index Scan using aecwebentry_primary on aecwebentry b (cost=1269.08 rows=1 width=60)\n -> Index Scan using aecentmain_primary on aecentmain a (cost=2.07 rows=16560 width=24)\n\nEXPLAIN\n\nI'm starting the server as:\n\n#!/bin/tcsh\nsetenv POSTMASTER /usr/local/db/pgsql/bin/postmaster\nrm /tmp/.s.P*\n${POSTMASTER} -o \"-F -o /usr/local/db/pgsql/errout -S 32768\" \\\n -i -p 5432 -D/usr/local/db/pgsql/data -B 256 &\n\nSo I think I'm dedicating *more* then enough resources to the server, no?\n\nAgain, this data is static...hasn't changed for either database since we \nloaded it yesterday...a vacuum analyze has been done on the PostgreSQL \ndatabase, but we haven't done anything with the MySQL one (no vacuum, no\nspecial run parameters)\n\nI'm going to be working with this company towards cleaning up the table\nstructures over the next little while, with an eye towards moving it to\nPostgreSQL, but, all things considered equal except for the DB software\nitself...how is it that we are *so* much slower?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sun, 19 Sep 1999 11:39:56 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "All things equal, we are still alot slower then MySQL?"
},
{
"msg_contents": "> Using the exact same data, and the exact same queries (dbi is cool):\n> MySQL: 0.498u 0.150s 0:02.50 25.6% 10+1652k 0+0io 0pf+0w\n> PgSQL: 0.494u 0.061s 0:19.78 2.7% 10+1532k 0+0io 0pf+0w\n> >From the 'time' numbers, MySQL is running ~17sec faster, but uses up 23%\n> more CPU to do this...so where is our slowdown?\n\nI don't remember if you gave details on the sizes of tables, but in\nany case I'm going to guess that you are spending almost all of your\ntime in the optimizer. Try manipulating the parameters to force the\ngenetic optimizer and see if it helps. Lots of quals but only two\ntables gives you a non-optimal case for the default exhaustive\noptimizer.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Sun, 19 Sep 1999 15:07:42 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] All things equal, we are still alot slower then MySQL?"
},
{
"msg_contents": "On Sun, 19 Sep 1999, Thomas Lockhart wrote:\n\n> > Using the exact same data, and the exact same queries (dbi is cool):\n> > MySQL: 0.498u 0.150s 0:02.50 25.6% 10+1652k 0+0io 0pf+0w\n> > PgSQL: 0.494u 0.061s 0:19.78 2.7% 10+1532k 0+0io 0pf+0w\n> > >From the 'time' numbers, MySQL is running ~17sec faster, but uses up 23%\n> > more CPU to do this...so where is our slowdown?\n> \n> I don't remember if you gave details on the sizes of tables, but in\n> any case I'm going to guess that you are spending almost all of your\n> time in the optimizer. Try manipulating the parameters to force the\n> genetic optimizer and see if it helps. Lots of quals but only two\n> tables gives you a non-optimal case for the default exhaustive\n> optimizer.\n\nWith default GEQO == 11 relations:\n\t0.506u 0.045s 0:19.51 2.7% 10+1596k 0+0io 0pf+0w\nWith GEQO == 2 relations:\n\t0.522u 0.032s 0:19.47 2.8% 9+1385k 0+0io 0pf+0w\n\nIf I use that big SUBSELECT that I posted earlier, with GEQO==2:\n\t0.005u 0.020s 0:07.84 0.2% 120+486k 0+0io 0pf+0w\nAnd with GEQO==11:\n\t0.008u 0.016s 0:07.83 0.1% 144+556k 0+0io 0pf+0w\n\nSo, going with one large SELECT call with two SUBSELECTs in it cuts off\n12secs, but its a web application, and we're still talking 5 seconds\nresponse slower...and alot less CPU being used, which is nice...\n\nBut I'm trying to compare apples->apples as much as possible, and MySQL\nwon't allow us to do that large SUBSELECT call...gives errors, so I'm\nguessing its unsupported...\n\nOther ideas, or am I stuck with accepting 7secs? (Realizing that as each\nnew release comes out, that 7secs tends to have a habit of dropping with\nall the optimizations and cleans up we do to the server itself) If so,\nthen I'm going to have to spend time trying to fix the tables themselves\nbefore delving into switching over to PostgreSQL...which hurts :(\n\t\nOkay, table sizes for the data are:\n\naecCategory == 1170\nTable = aeccategory\n+----------------------------------+----------------------------------+-------+\n| Field | Type | Length|\n+----------------------------------+----------------------------------+-------+\n| ppid | varchar() not null default '' | 6 |\n| pid | varchar() not null default '' | 6 |\n| id | varchar() not null default '' | 6 |\n| name | varchar() not null default '' | 255 |\n| description | varchar() | 255 |\n| url | varchar() | 255 |\n| comidsrc | int4 | 4 |\n| datelast | timestamp | 4 |\n+----------------------------------+----------------------------------+-------+\nIndices: aeccategory_id\n aeccategory_primary\n\naecEntMain == 16560\nTable = aecentmain\n+----------------------------------+----------------------------------+-------+\n| Field | Type | Length|\n+----------------------------------+----------------------------------+-------+\n| id | varchar() not null default '' | 6 |\n| mid | char() not null default '' | 2 |\n| name | varchar() not null default '' | 200 |\n| description | text | var |\n| url | varchar() | 255 |\n| street | varchar() | 255 |\n| city | varchar() | 255 |\n| state | varchar() | 255 |\n| postal | varchar() | 255 |\n| country | varchar() | 255 |\n| servarea | varchar() | 255 |\n| business | varchar() | 255 |\n| representation | varchar() | 255 |\n| status | varchar() | 255 |\n| datecreate | varchar() | 14 |\n| whocreate | varchar() | 255 |\n| datelast | timestamp | 4 |\n| wholast | varchar() | 255 |\n+----------------------------------+----------------------------------+-------+\nIndices: aecentmain_entityname\n aecentmain_primary\n\naecWebEntry == 
58316\nTable = aecwebentry\n+----------------------------------+----------------------------------+-------+\n| Field | Type | Length|\n+----------------------------------+----------------------------------+-------+\n| indid | varchar() not null default '' | 6 |\n| divid | varchar() not null default '' | 6 |\n| catid | varchar() not null default '' | 6 |\n| id | varchar() not null default '' | 6 |\n| mid | char() not null default '' | 2 |\n| webdetid | int4 | 4 |\n| status | varchar() | 255 |\n| datecreate | varchar() | 14 |\n| whocreate | varchar() | 255 |\n| datelast | timestamp | 4 |\n| wholast | varchar() | 255 |\n+----------------------------------+----------------------------------+-------+\nIndex: aecwebentry_primary\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sun, 19 Sep 1999 12:48:06 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] All things equal, we are still alot slower then MySQL?"
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> MySQL: 0.498u 0.150s 0:02.50 25.6% 10+1652k 0+0io 0pf+0w\n> PgSQL: 0.494u 0.061s 0:19.78 2.7% 10+1532k 0+0io 0pf+0w\n> From the 'time' numbers, MySQL is running ~17sec faster, but uses up 23%\n> more CPU to do this...so where is our slowdown?\n\nIt's gotta be going into I/O, obviously. (I hate profilers that can't\ncount disk accesses...) My guess is that the index scans are losing\nbecause they wind up touching too many disk pages. You show\n\n> NOTICE: QUERY PLAN:\n> \n> Unique (cost=1271.15 rows=5 width=84)\n> -> Sort (cost=1271.15 rows=5 width=84)\n> -> Nested Loop (cost=1271.15 rows=5 width=84)\n> -> Index Scan using aecwebentry_primary on aecwebentry b (cost=1269.08 rows=1 width=60)\n> -> Index Scan using aecentmain_primary on aecentmain a (cost=2.07 rows=16560 width=24)\n> \n> EXPLAIN\n\nwhich means this should be a great plan if the optimizer is guessing\nright about the selectivity of the index scans: it's estimating only\none tuple returned from the aecwebentry scan, hence only one iteration\nof the nested scan over aecentmain, which it is estimating will yield\nonly five output tuples to be sorted and uniquified.\n\nI am betting these estimates are off rather badly :-(. The indexscans\nare probably hitting way more pages than the optimizer guessed they will.\n\nIt may just be that I have optimizer on the brain from having spent too\nmuch time looking at it, but this smells to me like bad-plan-resulting-\nfrom-bad-selectivity-estimation syndrome. Perhaps I can fix it for 6.6\nas a part of the optimizer cleanups I am doing. I'd like to get as much\ninfo as I can about the test case.\n\nHow many tuples *does* your test query produce, anyway? If you\neliminate all the joining WHERE-clauses and just consider the\nrestriction clauses for each of the tables, how many tuples?\nIn other words, what do you get from\n\n SELECT count(*)\n FROM aecEntMain a\n WHERE (a.id=??? AND a.mid=???)\n AND (a.status like 'active%')\n AND (a.status like '%active:ALL%')\n AND (a.representation like '%:ALL%');\n\n SELECT count(*)\n FROM aecWebEntry b\n WHERE (b.status like 'active%')\n AND (b.status like '%active:ALL%')\n AND (b.indid=? and b.divid=? and b.catid=?);\n\n(In the first of these, substitute a representative id/mid pair from\ntable b for the ???, to simulate what will happen in any one iteration\nof the inner scan over table a.) Also, how many rows in each table?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 19 Sep 1999 11:53:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] All things equal, we are still alot slower then MySQL? "
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> Using the exact same data, and the exact same queries (dbi is cool):\n>> MySQL: 0.498u 0.150s 0:02.50 25.6% 10+1652k 0+0io 0pf+0w\n>> PgSQL: 0.494u 0.061s 0:19.78 2.7% 10+1532k 0+0io 0pf+0w\n>>>> From the 'time' numbers, MySQL is running ~17sec faster, but uses up 23%\n>> more CPU to do this...so where is our slowdown?\n\n> I don't remember if you gave details on the sizes of tables, but in\n> any case I'm going to guess that you are spending almost all of your\n> time in the optimizer.\n\nNo --- if he were, it'd be all CPU time, not 2.7% CPU usage. The time's\ngot to be going into disk accesses. I'm perfectly prepared to blame\nthe optimizer, but I think it's because of a bad plan not too much time\nspent making the plan...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 19 Sep 1999 12:03:00 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] All things equal, we are still alot slower then MySQL? "
},
{
"msg_contents": ">>> MySQL: 0.498u 0.150s 0:02.50 25.6% 10+1652k 0+0io 0pf+0w\n>>> PgSQL: 0.494u 0.061s 0:19.78 2.7% 10+1532k 0+0io 0pf+0w\n\n> No --- if he were, it'd be all CPU time, not 2.7% CPU usage.\n\nEr, wait a second. Are we measuring backend-process runtime here,\nor is that the result of 'time' applied to a *client* ?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 19 Sep 1999 13:34:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] All things equal, we are still alot slower then MySQL? "
},
{
"msg_contents": "> >>> MySQL: 0.498u 0.150s 0:02.50 25.6% 10+1652k 0+0io 0pf+0w\n> >>> PgSQL: 0.494u 0.061s 0:19.78 2.7% 10+1532k 0+0io 0pf+0w\n> > No --- if he were, it'd be all CPU time, not 2.7% CPU usage.\n> Er, wait a second. Are we measuring backend-process runtime here,\n> or is that the result of 'time' applied to a *client* ?\n\nRight. That was my point; unless he is firing up the backend using\n\"time\", or running it standalone, it is hard to measure real CPU\ntime...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Mon, 20 Sep 1999 03:04:38 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] All things equal, we are still alot slower then MySQL?"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> The Hermit Hacker <[email protected]> writes:\n> > MySQL: 0.498u 0.150s 0:02.50 25.6% 10+1652k 0+0io 0pf+0w\n> > PgSQL: 0.494u 0.061s 0:19.78 2.7% 10+1532k 0+0io 0pf+0w\n> > From the 'time' numbers, MySQL is running ~17sec faster, but uses up 23%\n> > more CPU to do this...so where is our slowdown?\n> \n> It's gotta be going into I/O, obviously. (I hate profilers that can't\n> count disk accesses...) My guess is that the index scans are losing\n> because they wind up touching too many disk pages. You show\n> \n\nOn that particular machine that can be verified easily, I hope.\n(there seems to be enough RAM). You can simply issue 10 to 100 such\nqueries in a row. Hopefully after the first query all needed info \nwill be in a disk cache, so the rest queries will not draw info from\ndisk. That will be a clean experiment.\n\n-- \nLeon.\n-------\nHe knows he'll never have to answer for any of his theories actually \nbeing put to test. If they were, they would be contaminated by reality.\n\n",
"msg_date": "Mon, 20 Sep 1999 18:50:12 +0500",
"msg_from": "Leon <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] All things equal, we are still alot slower then MySQL?"
},
{
"msg_contents": "On Mon, 20 Sep 1999, Leon wrote:\n\n> Tom Lane wrote:\n> > \n> > The Hermit Hacker <[email protected]> writes:\n> > > MySQL: 0.498u 0.150s 0:02.50 25.6% 10+1652k 0+0io 0pf+0w\n> > > PgSQL: 0.494u 0.061s 0:19.78 2.7% 10+1532k 0+0io 0pf+0w\n> > > From the 'time' numbers, MySQL is running ~17sec faster, but uses up 23%\n> > > more CPU to do this...so where is our slowdown?\n> > \n> > It's gotta be going into I/O, obviously. (I hate profilers that can't\n> > count disk accesses...) My guess is that the index scans are losing\n> > because they wind up touching too many disk pages. You show\n> > \n> \n> On that particular machine that can be verified easily, I hope.\n> (there seems to be enough RAM). You can simply issue 10 to 100 such\n> queries in a row. Hopefully after the first query all needed info \n> will be in a disk cache, so the rest queries will not draw info from\n> disk. That will be a clean experiment.\n\nWith the server started as:\n\n${POSTMASTER} -o \"-F -o /usr/local/db/pgsql/errout -S 32768\" \\\n -i -p 5432 -D/usr/local/db/pgsql/data -B 256 &\n\nAnd with me being the only person on that system running against the\nPostgreSQL database (ie. I don't believe the SI invalidation stuff comes\ninto play?), the time to run is the exact same each time:\n\n1st run: 0.488u 0.056s 0:16.34 3.2% 10+1423k 0+0io 0pf+0w\n2nd run: 0.500u 0.046s 0:16.34 3.3% 10+1517k 0+0io 0pf+0w\n3rd run: 0.496u 0.049s 0:16.33 3.2% 9+1349k 0+0io 0pf+0w\n4th run: 0.487u 0.056s 0:16.32 3.2% 14+1376k 0+0io 0pf+0w\n\nNote that the results fed back are *exactly* the same each time...the\ndata is static, as its purely a test database...\n\nI believe that I have the buffers set \"Abnormally high\", as well as have\nprovided more then sufficient sort buffer space...\n\nUsing the 'optimized' query, that uses subselects, the runs are similar:\n\n1st run: 0.467u 0.031s 0:08.26 5.9% 15+1345k 0+0io 0pf+0w\n2nd run: 0.475u 0.023s 0:08.29 5.9% 15+1384k 0+0io 0pf+0w\n3rd run: 0.468u 0.031s 0:08.28 5.9% 10+1325k 0+0io 0pf+0w\n4th run: 0.461u 0.031s 0:08.31 5.8% 10+1362k 0+0io 0pf+0w\n\nTime is cut in half, CPU usage goes up a bit...but all runs are pretty\nmuch the same...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 20 Sep 1999 17:27:15 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] All things equal, we are still alot slower then MySQL?"
},
{
"msg_contents": "On Sun, 19 Sep 1999, Tom Lane wrote:\n\n> How many tuples *does* your test query produce, anyway? If you\n\nDepends on what it is fed...could be 270 records returned, could be\n5...depends on the values of catid, indid and divid...\n\n> eliminate all the joining WHERE-clauses and just consider the\n> restriction clauses for each of the tables, how many tuples?\n> In other words, what do you get from\n> \n> SELECT count(*)\n> FROM aecEntMain a\n> WHERE (a.id=??? AND a.mid=???)\n> AND (a.status like 'active%')\n> AND (a.status like '%active:ALL%')\n> AND (a.representation like '%:ALL%');\n\nReturns 1 ...\n\n> SELECT count(*)\n> FROM aecWebEntry b\n> WHERE (b.status like 'active%')\n> AND (b.status like '%active:ALL%')\n> AND (b.indid=? and b.divid=? and b.catid=?);\n\nThis one I get 39 ...\n\n> (In the first of these, substitute a representative id/mid pair from\n> table b for the ???, to simulate what will happen in any one iteration\n> of the inner scan over table a.) Also, how many rows in each table?\n\naec=> select count(*) from aecEntMain;\ncount\n-----\n16560\n(1 row)\n\naec=> select count(*) from aecWebEntry;\ncount\n-----\n58316\n(1 row)\n\nBy doing a 'select distinct id from aecWebEntry', there are 16416 distinct\nid's in aecWebEntry, and 16493 distinct id's in aecEntMain, so I'm\nguessing that its supposed to be a 1->N relationship between the two\ntables...therefore, again, I'm guessing, but the first query above shoudl\nnever return more then 1 record...\n\nIf I run both queries together, as:\n SELECT distinct b.indid, b.divid, b.catid, a.id, a.mid\n FROM aecEntMain a, aecWebEntry b\n WHERE (a.id=b.id AND a.mid=b.mid)\n AND (a.status like 'active%' and b.status like 'active%')\n AND (a.status like '%active:ALL%' and b.status like '%active:ALL%')\n AND (a.representation like '%:ALL%')\n AND (b.indid='000001' and b.divid='100016' and b.catid='100300');\n\nThe result, in this case, is 39 records...if I change b.catid to be '100400',\nits only 35 records, etc...\n\nDoes this help? The server isn't live, so if you want me to enable some\ndebugging, or play with something, its not going to affect anything...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n\n\n\n\n",
"msg_date": "Mon, 20 Sep 1999 18:16:40 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] All things equal, we are still alot slower then MySQL?"
},
{
"msg_contents": "\nAnyone get a chance to look into this?\n\nOn Sun, 19 Sep 1999, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > MySQL: 0.498u 0.150s 0:02.50 25.6% 10+1652k 0+0io 0pf+0w\n> > PgSQL: 0.494u 0.061s 0:19.78 2.7% 10+1532k 0+0io 0pf+0w\n> > From the 'time' numbers, MySQL is running ~17sec faster, but uses up 23%\n> > more CPU to do this...so where is our slowdown?\n> \n> It's gotta be going into I/O, obviously. (I hate profilers that can't\n> count disk accesses...) My guess is that the index scans are losing\n> because they wind up touching too many disk pages. You show\n> \n> > NOTICE: QUERY PLAN:\n> > \n> > Unique (cost=1271.15 rows=5 width=84)\n> > -> Sort (cost=1271.15 rows=5 width=84)\n> > -> Nested Loop (cost=1271.15 rows=5 width=84)\n> > -> Index Scan using aecwebentry_primary on aecwebentry b (cost=1269.08 rows=1 width=60)\n> > -> Index Scan using aecentmain_primary on aecentmain a (cost=2.07 rows=16560 width=24)\n> > \n> > EXPLAIN\n> \n> which means this should be a great plan if the optimizer is guessing\n> right about the selectivity of the index scans: it's estimating only\n> one tuple returned from the aecwebentry scan, hence only one iteration\n> of the nested scan over aecentmain, which it is estimating will yield\n> only five output tuples to be sorted and uniquified.\n> \n> I am betting these estimates are off rather badly :-(. The indexscans\n> are probably hitting way more pages than the optimizer guessed they will.\n> \n> It may just be that I have optimizer on the brain from having spent too\n> much time looking at it, but this smells to me like bad-plan-resulting-\n> from-bad-selectivity-estimation syndrome. Perhaps I can fix it for 6.6\n> as a part of the optimizer cleanups I am doing. I'd like to get as much\n> info as I can about the test case.\n> \n> How many tuples *does* your test query produce, anyway? If you\n> eliminate all the joining WHERE-clauses and just consider the\n> restriction clauses for each of the tables, how many tuples?\n> In other words, what do you get from\n> \n> SELECT count(*)\n> FROM aecEntMain a\n> WHERE (a.id=??? AND a.mid=???)\n> AND (a.status like 'active%')\n> AND (a.status like '%active:ALL%')\n> AND (a.representation like '%:ALL%');\n> \n> SELECT count(*)\n> FROM aecWebEntry b\n> WHERE (b.status like 'active%')\n> AND (b.status like '%active:ALL%')\n> AND (b.indid=? and b.divid=? and b.catid=?);\n> \n> (In the first of these, substitute a representative id/mid pair from\n> table b for the ???, to simulate what will happen in any one iteration\n> of the inner scan over table a.) Also, how many rows in each table?\n> \n> \t\t\tregards, tom lane\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Wed, 22 Sep 1999 17:54:32 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] All things equal, we are still alot slower then MySQL?"
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> Anyone get a chance to look into this?\n\nOnly just now, but I do have a couple of thoughts.\n\nFor the query\n\n SELECT distinct b.indid, b.divid, b.catid, a.id, a.mid \\\n FROM aecEntMain a, aecWebEntry b \\\n WHERE (a.id=b.id AND a.mid=b.mid) \\\n AND (a.status like 'active%' and b.status like 'active%')\n AND (a.status like '%active:ALL%' and b.status like '%active:ALL%')\n AND (a.representation like '%:ALL%')\n AND (b.indid=? and b.divid=? and b.catid=?)\";\n\nyou're showing a plan of \n\nUnique (cost=1271.15 rows=5 width=84)\n -> Sort (cost=1271.15 rows=5 width=84)\n -> Nested Loop (cost=1271.15 rows=5 width=84)\n -> Index Scan using aecwebentry_primary on aecwebentry b (cost=1269.08 rows=1 width=60)\n -> Index Scan using aecentmain_primary on aecentmain a (cost=2.07 rows=16560 width=24)\n\nwhich indicates that the optimizer is guessing only one match in\naecwebentry and is therefore putting it on the outside of the nested\nloop (so that the inner scan over aecentmain would only have to be\ndone once, if it's guessing right). But in a later message you\nsay that the actual number of hits is more like 39 for aecwebentry\nand one for aecentmain. Which means that the nested loop would go\nfaster if it were done the other way round, aecentmain on the outside.\nI'm not sure of a way to force the system to try it that way, though.\n\nThe other question is why is it using a nested loop at all, rather\nthan something more intelligent like merge or hash join. Presumably\nthe optimizer thinks those would be more expensive, but it might be\nwrong.\n\nYou could try forcing selection of merge and hash joins for this\nquery and see (a) what kind of plan do you get, (b) how long does\nit really take? To do that, start psql with PGOPTIONS environment\nvariable set:\n\nPGOPTIONS=\"-fn -fh\"\t# forbid nestloop and hash, ie, force mergejoin\n\nPGOPTIONS=\"-fn -fm\"\t# forbid nestloop and merge, ie, force hashjoin\n\nAlso, I don't think you ever mentioned exactly what the available\nindexes are on these tables?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 22 Sep 1999 18:29:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] All things equal, we are still alot slower then MySQL? "
},
{
"msg_contents": "\nOkay, after playing around with this some more tonight, and playing with\nthe PGOPTIONS you've presented...I've gotten the query to be faster then\nwith mysql :) The error of my ways: not enough indices *sigh* I created a\nfew more on the fields that were being used on the query, and have:\n\nSELECT c.id, c.name, c.url\nFROM aecCategory c\nWHERE EXISTS (\nSELECT a.status\nFROM aecEntMain a, aecWebEntry b\nWHERE a.status LIKE 'active:ALL%'\nAND a.representation LIKE '%:ALL%'\nAND b.status LIKE 'active:ALL%'\nAND b.indid='000001'\nAND b.divid='100016'\nAND ((a.id,a.mid) = (b.id,b.mid))\nAND ((b.catid,b.indid,b.divid) = (c.id,c.ppid,c.pid)));\n\n==========\nSeq Scan on aeccategory c (cost=69.61 rows=1170 width=36)\n SubPlan\n -> Nested Loop (cost=4.10 rows=1 width=60)\n -> Index Scan using aecwebentry_divid on aecwebentry b (cost=2.03 rows=1 width=24)\n -> Index Scan using aecentmain_primary on aecentmain a (cost=2.07 rows=480 width=36)\n===========\n\nproducing the results I need in 1.26seconds, using 1.5% of the CPU.\n\nNow, something does bother me here, and I'm not sure if its a problem we\nneed to address, or if its expected, but if I remove the index\naecwebentry_divid, it reverts to using aecwebentry_primary and increases\nthe query time to 12secs, which is:\n\ncreate unique index aecWebEntry_primary on aecWebEntry ( indid,divid,catid,id,mid);\n\nShould it do that?\n\nOn Wed, 22 Sep 1999, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > Anyone get a chance to look into this?\n> \n> Only just now, but I do have a couple of thoughts.\n> \n> For the query\n> \n> SELECT distinct b.indid, b.divid, b.catid, a.id, a.mid \\\n> FROM aecEntMain a, aecWebEntry b \\\n> WHERE (a.id=b.id AND a.mid=b.mid) \\\n> AND (a.status like 'active%' and b.status like 'active%')\n> AND (a.status like '%active:ALL%' and b.status like '%active:ALL%')\n> AND (a.representation like '%:ALL%')\n> AND (b.indid=? and b.divid=? and b.catid=?)\";\n> \n> you're showing a plan of \n> \n> Unique (cost=1271.15 rows=5 width=84)\n> -> Sort (cost=1271.15 rows=5 width=84)\n> -> Nested Loop (cost=1271.15 rows=5 width=84)\n> -> Index Scan using aecwebentry_primary on aecwebentry b (cost=1269.08 rows=1 width=60)\n> -> Index Scan using aecentmain_primary on aecentmain a (cost=2.07 rows=16560 width=24)\n> \n> which indicates that the optimizer is guessing only one match in\n> aecwebentry and is therefore putting it on the outside of the nested\n> loop (so that the inner scan over aecentmain would only have to be\n> done once, if it's guessing right). But in a later message you\n> say that the actual number of hits is more like 39 for aecwebentry\n> and one for aecentmain. Which means that the nested loop would go\n> faster if it were done the other way round, aecentmain on the outside.\n> I'm not sure of a way to force the system to try it that way, though.\n> \n> The other question is why is it using a nested loop at all, rather\n> than something more intelligent like merge or hash join. Presumably\n> the optimizer thinks those would be more expensive, but it might be\n> wrong.\n> \n> You could try forcing selection of merge and hash joins for this\n> query and see (a) what kind of plan do you get, (b) how long does\n> it really take? 
To do that, start psql with PGOPTIONS environment\n> variable set:\n> \n> PGOPTIONS=\"-fn -fh\"\t# forbid nestloop and hash, ie, force mergejoin\n> \n> PGOPTIONS=\"-fn -fm\"\t# forbid nestloop and merge, ie, force hashjoin\n> \n> Also, I don't think you ever mentioned exactly what the available\n> indexes are on these tables?\n> \n> \t\t\tregards, tom lane\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
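The new indexes themselves aren't listed above, but from the plan they were presumably single-column indexes on the restricted fields, something along these lines (column lists guessed, not confirmed in the thread):

CREATE INDEX aecwebentry_divid ON aecWebEntry (divid);
CREATE INDEX aecwebentry_indid ON aecWebEntry (indid);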
"msg_date": "Wed, 22 Sep 1999 21:25:03 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] All things equal, we are still alot slower then MySQL?"
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> Now, something does bother me here, and I'm not sure if its a problem we\n> need to address, or if its expected, but if I remove the index\n> aecwebentry_divid, it reverts to using aecwebentry_primary and increases\n> the query time to 12secs, which is:\n> create unique index aecWebEntry_primary on aecWebEntry ( indid,divid,catid,id,mid);\n> Should it do that?\n\nYeah, that does seem odd. The other way is presumably visiting the\naecwebentry tuples in a different order (the one induced by the other\nindex), but I don't see why that should produce a 10:1 difference in\nruntime.\n\nCan you send me the EXPLAIN VERBOSE output for the query with and\nwithout the extra index? (Preferably the prettyprinted version from\nthe postmaster log file, not what comes out as a NOTICE...)\n\nAlso, I assume you found that merge or hash join wasn't any better?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 23 Sep 1999 09:46:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] All things equal, we are still alot slower then MySQL? "
}
] |
[
{
"msg_contents": "Tom, can you check to see if\n\n insert into <tablename> default values;\n\nworks on an unmodified current tree? I've got things ripped apart, but\nI would have expected this to work, and am suspecting that it is a\nproblem introduced in your rewrite of this area a month or two ago.\n(On a table without explicit default values, it should fill with\nNULLs, but on my system I get an Assert failure because the target\nlist is never filled in.)\n\nAs usual, there is no coverage of this in the regression tests, so\nthere is no reason we should have caught this earlier... :(\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Sun, 19 Sep 1999 15:24:22 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "INSERT/DEFAULT VALUES broken?"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> Tom, can you check to see if\n> insert into <tablename> default values;\n\n> works on an unmodified current tree? I've got things ripped apart, but\n> I would have expected this to work, and am suspecting that it is a\n> problem introduced in your rewrite of this area a month or two ago.\n\nIt bombs for me too, so I suspect you are right that I broke it when\nI rearranged the analysis of INSERT. It's probably a minor oversight\nsomeplace in there.\n\nDo you want me to fix it in current tree, or would it be wasted work\nconsidering that you are busy doing major parser rearrangements\nyourself?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 19 Sep 1999 11:59:10 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: INSERT/DEFAULT VALUES broken? "
},
{
"msg_contents": "> > Tom, can you check to see if\n> > insert into <tablename> default values;\n> > works on an unmodified current tree? I've got things ripped apart, but\n> > I would have expected this to work, and am suspecting that it is a\n> > problem introduced in your rewrite of this area a month or two ago.\n> It bombs for me too, so I suspect you are right that I broke it when\n> I rearranged the analysis of INSERT. It's probably a minor oversight\n> someplace in there.\n> Do you want me to fix it in current tree, or would it be wasted work\n> considering that you are busy doing major parser rearrangements\n> yourself?\n\nIf it isn't too much trouble, it would be great if you could look at\nit. I'd like to stay focused on the join syntax (for inner joins at\nthe moment), but need to regression test things like this to make sure\nnew stuff hasn't introduced breakage...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Sun, 19 Sep 1999 16:26:18 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: INSERT/DEFAULT VALUES broken?"
},
{
"msg_contents": ">>>> insert into <tablename> default values;\n\n>> It bombs for me too, so I suspect you are right that I broke it when\n>> I rearranged the analysis of INSERT. It's probably a minor oversight\n>> someplace in there.\n\nNope, not a parser problem at all; rewriter brain damage. It's been\nbroken at least since 6.4, but only if you do INSERT ... DEFAULT VALUES\ninto a table that has no columns with default values, and only if you\nhave Asserts turned on (so the average user wouldn't see it anyway).\nFix is to dike out incorrect Assert...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 19 Sep 1999 13:25:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: INSERT/DEFAULT VALUES broken? "
},
{
"msg_contents": "> >>>> insert into <tablename> default values;\n> Nope, not a parser problem at all; rewriter brain damage. It's been\n> broken at least since 6.4, but only if you do INSERT ... DEFAULT VALUES\n> into a table that has no columns with default values, and only if you\n> have Asserts turned on (so the average user wouldn't see it anyway).\n> Fix is to dike out incorrect Assert...\n\nSorry about that; I usually don't run with Assert enabled, so wouldn't\nhave caught it earlier. Thanks for looking into it.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Mon, 20 Sep 1999 03:06:48 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: INSERT/DEFAULT VALUES broken?"
}
] |
[
{
"msg_contents": "\nI noticed that indexes are not used sometimes when they could speed up\nqueries:\n\nexplain select * from auth where uid=30;\n Index Scan using auth_uid_key on auth (cost=2.05 rows=1 width=40)\n\nexplain select * from auth where uid<30;\n Seq Scan on auth (cost=2.06 rows=11 width=40)\n\nexplain select * from auth order by uid; \n Sort (cost=2.06 rows=32 width=40)\n -> Seq Scan on auth (cost=2.06 rows=32 width=40)\n\nare there any ways to speed up queries like these?\nthe exact usage alg. of indexes is documented somewhere?\nwhen is this going to be fixed?\n\n\nfinally some enhancement ideas:\n\npersistent views: like select into, but the view gets updated every time\nthe table(s) it was created from change. (gives no further functionality\nover views, but when used wisely, can speed up things)\n\npertable fsync behaviour\n\ninmemory tables: table data should not be saved to disk (maybe except\nfor swapping), because contains rapidly changing data, which would\nexpire before restarting the backend\n\nps: sorry for my bad english \n\n--\nInfraRED of aurora-borealis/Veres Tibor\nE-Mail: [email protected]\n",
"msg_date": "Sun, 19 Sep 1999 17:29:57 +0200 (CEST)",
"msg_from": "InfraRED <[email protected]>",
"msg_from_op": true,
"msg_subject": "when are indexes used?"
},
{
"msg_contents": "InfraRED <[email protected]> writes:\n> I noticed that indexes are not used sometimes when they could speed up\n> queries:\n\n> explain select * from auth where uid=30;\n> Index Scan using auth_uid_key on auth (cost=2.05 rows=1 width=40)\n\n> explain select * from auth where uid<30;\n> Seq Scan on auth (cost=2.06 rows=11 width=40)\n\n> explain select * from auth order by uid; \n> Sort (cost=2.06 rows=32 width=40)\n> -> Seq Scan on auth (cost=2.06 rows=32 width=40)\n\nWith only 32 rows in the table, I suspect the machine is making the\nright choices here. (If you actually have more than 32 rows then you\nneed to vacuum to update the stats...) Index scans are not some sort of\nfree magic solution; they cost a lot more per row scanned than\nsequential scans. They aren't necessarily cheaper than a sequential\nscan plus in-memory sort, either.\n\nThe system uses an index scan when it's possible and apparently cheaper\nthan a sequential scan. There are some problems with its estimation\nof the relative costs, which I'm hoping to fix for 6.6. However, the\nproblems seem to be that it's *under* estimating the cost of indexscans,\nnot overestimating them.\n\n> persistent views: like select into, but the view gets updated every time\n> the table(s) it was created from change. (gives no further functionality\n> over views, but when used wisely, can speed up things)\n\nThink you can do this already with rules and/or triggers. It takes some\nthought though. Maybe some documentation with a worked-out example\nwould be a good idea.\n\n> inmemory tables: table data should not be saved to disk (maybe except\n> for swapping), because contains rapidly changing data, which would\n> expire before restarting the backend\n\nYou can get pretty close to this already with fsync off: if you're\ntouching the table constantly then all its pages will remain in buffer\ncache. A typical Unix system won't bother to write out modified\npages oftener than once every 30 sec, which is hardly worth worrying\nabout.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 22 Sep 1999 10:59:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [GENERAL] when are indexes used? "
},
{
"msg_contents": "\n>> explain select * from auth order by uid; \n>> Sort (cost=2.06 rows=32 width=40)\n>> -> Seq Scan on auth (cost=2.06 rows=32 width=40)\n> \n> With only 32 rows in the table, I suspect the machine is making the\n> right choices here. (If you actually have more than 32 rows then you\n> need to vacuum to update the stats...) Index scans are not some sort of\n> free magic solution; they cost a lot more per row scanned than\n> sequential scans. They aren't necessarily cheaper than a sequential\n> scan plus in-memory sort, either.\n\n\nI did't know about these guesses, and I provided these only for example.. My\nreal problem is with a ~6000 row database and a select * ... order by query\nwhich takes more than 5 sec. The same query runs for less than 0.1 sec on mssql\n:-((\n\n--\nInfraRED of aurora-borealis/Veres Tibor\nE-Mail: [email protected]\n",
"msg_date": "Thu, 23 Sep 1999 17:45:01 +0200 (CEST)",
"msg_from": "InfraRED/Veres Tibor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [GENERAL] when are indexes used?"
}
] |
[
{
"msg_contents": "Yes, sorry. Typo. I MEANT to put the parens around\nthe field list, but... \n\nMike Mascari\n\n--- Thomas Lockhart <[email protected]>\nwrote:\n> <snip>\n> > > AND (a.id,a.mid = b.id,b.mid)\n> > > AND (b.catid,b.indid,b.divid =\n> c.id,c.ppid,c.pid);\n> > ... the above is not valid in PostgreSQL, it\n> seems...\n> \n> I have to resort to looking at gram.y for this,\n> since I currently have\n> the Postgres parser in bits and pieces all over the\n> garage floor ;)\n> \n> The expressions are *almost* valid for Postgres. The\n> difference is\n> that you need to put parens around each side of the\n> \"row expression\":\n> \n> | '(' row_descriptor ')' row_op '('\n> row_descriptor ')'\n> {\n> $$ = makeRowExpr($4, $2, $6);\n> }\n> ;\n> \n> I had implemented this using Date and Darwen as a\n> reference, and afaik\n> the SQL standard (and any sensible parser)\n> *requires* parens around\n> the row expression, referred to in gram.y as a \"row\n> descriptor\".\n> \n> So, the following should work:\n> \n> AND ((a.id,a.mid) = (b.id,b.mid))\n> AND ((b.catid,b.indid,b.divid) =\n> (c.id,c.ppid,c.pid));\n> \n> \n> - Thomas\n\n__________________________________________________\nDo You Yahoo!?\nBid and sell for free at http://auctions.yahoo.com\n",
"msg_date": "Sun, 19 Sep 1999 10:21:46 -0700 (PDT)",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] [6.5.2] join problems ..."
}
] |
[
{
"msg_contents": "You may recall I was complaining a while back of \"out of free buffers:\ntime to abort !\" errors when running the regression tests with\nnonstandard optimizer flags. Those are still there, but still hard to\nreproduce. Also, I have been trying to fix ALTER TABLE RENAME so that\nit flushes buffers for the target table before renaming the underlying\nfiles (otherwise subsequent mdblindwrt will fail), and have been seeing\nthat code fail because of buffers being left pinned (refcount > 0) when\nthe only running backend claims that it does not have them pinned\n(PrivateRefCount & LastRefCount are 0). So I am pretty sure that\nsomething is rotten in the buffer refcount accounting.\n\nIn trying to understand what the code is doing, I am confused by the\nbuffer refcount save/restore mechanism. Why does the executor want\nto save/restore buffer refcounts? I can sort of see that that might\nbe a way to clean up buffers that have been pinned and need to be\nunpinned, but it seems like it's a kluge around failure to unpin in\nthe code that did the pinning, if so. If it *is* a way to do that,\nshouldn't BufferRefCountRestore unpin the buffer completely if it\nrestores PrivateRefCount & LastRefCount to 0? I am not sure that this\nis where the refcount is getting leaked, but it looks like a possibility.\n\nAlso, it bothers me that there is a separation between PrivateRefCount\nand LastRefCount. Why not just have PrivateRefCount and let the\nsave/restore mechanisms save/restore those values, without zeroing out\nPrivateRefCount during BufferRefCountReset? The zeroing seems to have\nthe effect of having BufferValid claim in the inner executor context\nthat buffers pinned in the outer executor context aren't pinned ---\nwhich is weird at best.\n\nIf anyone understands why this mechanism is designed this way,\nplease tell me about it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 19 Sep 1999 14:48:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Anyone understand shared buffer refcount mechanism?"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> In trying to understand what the code is doing, I am confused by the\n> buffer refcount save/restore mechanism. Why does the executor want\n> to save/restore buffer refcounts? I can sort of see that that might\n\n...\n\n> If anyone understands why this mechanism is designed this way,\n> please tell me about it.\n\nThis bothered me for long time too.\nThe only explanation I see in execMain.c:\n\n/*\n * reset buffer refcount. the current refcounts are saved and will be\n * restored when ExecutorEnd is called\n *\n * this makes sure that when ExecutorRun's are called recursively as for\n * postquel functions, the buffers pinned by one ExecutorRun will not\n * be unpinned by another ExecutorRun.\n */\n\nBut buffers pinned by one Executor invocation SHOULDN'T\nbe unpinned by another one (if there are no bugs in code,\nbut this is another story).\n\nSo, try to remove this save/restore mechanism and let's see...\n\nVadim\n",
"msg_date": "Mon, 20 Sep 1999 09:56:40 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Anyone understand shared buffer refcount mechanism?"
},
{
"msg_contents": "Vadim Mikheev <[email protected]> writes:\n> Tom Lane wrote:\n>> In trying to understand what the code is doing, I am confused by the\n>> buffer refcount save/restore mechanism. Why does the executor want\n>> to save/restore buffer refcounts? I can sort of see that that might\n\n> This bothered me for long time too.\n> The only explanation I see in execMain.c:\n\n> * this makes sure that when ExecutorRun's are called recursively as for\n> * postquel functions, the buffers pinned by one ExecutorRun will not\n> * be unpinned by another ExecutorRun.\n\nThe case that is currently failing for me is postquel function calls\n(the \"misc\" regress test contains some, and it's spewing Buffer Leak\nnotices like crazy, now that I fixed BufferLeakCheck to notice nonzero\nLastRefCount as well as nonzero PrivateRefCount). So there's something\nrotten here. I will keep looking at it.\n\n> So, try to remove this save/restore mechanism and let's see...\n\nIt does seem that BufferRefCountRestore is actually unpinning some\nthings (things got much better after I fixed it to really do the\nunpin when restoring a nonzero refcount to zero). So I don't\nthink I want to try to take out the save/restore entirely. What\nit looks like right now is that a few specific paths through the\nexecutor restore the wrong counts...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 20 Sep 1999 10:07:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Anyone understand shared buffer refcount mechanism? "
}
] |
[
{
"msg_contents": "Exec-on-startup was removed by Bruce long time ago.\nWhy we still attach to shmem after fork?\nOr shmem inheritance is not portable?\nAlso, all this ShmemIndex stuff seems to be useless\n(except of backend PID lookup but it's for sure\nshould be in separate hash table).\nAnd why separate shmem segment (!!!) is used for \nSlocks (ipc.c:CreateAndInitSLockMemory(), etc) - they\nuse so small amount of memory!\n\nJust wondering...\nI'm going to use old shmem init code for WAL but would like\nto denote that shmem stuff need in cleanup.\n\nVadim\n",
"msg_date": "Mon, 20 Sep 1999 11:35:25 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": true,
"msg_subject": "why do shmem attach?"
},
{
"msg_contents": "[Charset koi8-r unsupported, filtering to ASCII...]\n> Exec-on-startup was removed by Bruce long time ago.\n> Why we still attach to shmem after fork?\n\nNo idea. I know the shared memory stuff is not copy-on-write for forked\nchildren, so I am not sure why you would have to attach to it.\n\n\n> Or shmem inheritance is not portable?\n\nIf it works on your machine with it removed, commit the change and I can\ntest it here. I don't know of any portability problems with shared\nmemory children.\n\n> Also, all this ShmemIndex stuff seems to be useless\n> (except of backend PID lookup but it's for sure\n> should be in separate hash table).\n> And why separate shmem segment (!!!) is used for \n> Slocks (ipc.c:CreateAndInitSLockMemory(), etc) - they\n> use so small amount of memory!\n\nNo idea.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 20 Sep 1999 00:04:02 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] why do shmem attach?"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > Or shmem inheritance is not portable?\n> \n> If it works on your machine with it removed, commit the change and I can\n> test it here. I don't know of any portability problems with shared\n> memory children.\n\nI wrote simple test program and it works under FreeBSD and Solaris\n(on Ultra). Currently I'm not able to do more. Actually, I worry\ndoes this work under MS Windows.\n\nVadim\n",
"msg_date": "Mon, 20 Sep 1999 13:33:39 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] why do shmem attach?"
},
{
"msg_contents": "\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Vadim Mikheev\n> Sent: Monday, September 20, 1999 2:34 PM\n> To: Bruce Momjian\n> Cc: PostgreSQL Developers List\n> Subject: Re: [HACKERS] why do shmem attach?\n>\n>\n> Bruce Momjian wrote:\n> >\n> > > Or shmem inheritance is not portable?\n> >\n> > If it works on your machine with it removed, commit the change and I can\n> > test it here. I don't know of any portability problems with shared\n> > memory children.\n>\n> I wrote simple test program and it works under FreeBSD and Solaris\n> (on Ultra). Currently I'm not able to do more. Actually, I worry\n> does this work under MS Windows.\n>\n\nWhere do we attach to shmem after fork() ?\nI couldn't find the place.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Mon, 20 Sep 1999 15:27:48 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] why do shmem attach?"
},
{
"msg_contents": "Hiroshi Inoue wrote:\n> \n> Where do we attach to shmem after fork() ?\n> I couldn't find the place.\n\nOps, sorry, you're right - postinit.c:InitCommunication():\n\n if (!IsUnderPostmaster) /* postmaster already did this */\n {\n PostgresIpcKey = key;\n AttachSharedMemoryAndSemaphores(key);\n }\n\nThough, AttachSharedMemoryAndSemaphores():\n\n if (key == PrivateIPCKey)\n { \n CreateSharedMemoryAndSemaphores(key, 16);\n return;\n }\n\n... and useless shmem attachment stuff follows after this ...\n\nCleanup is still required, but subj is closed, thanks -:)\n\nVadim\n",
"msg_date": "Mon, 20 Sep 1999 14:58:03 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] why do shmem attach?"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> > \n> > > Or shmem inheritance is not portable?\n> > \n> > If it works on your machine with it removed, commit the change and I can\n> > test it here. I don't know of any portability problems with shared\n> > memory children.\n> \n> I wrote simple test program and it works under FreeBSD and Solaris\n> (on Ultra). Currently I'm not able to do more. Actually, I worry\n> does this work under MS Windows.\n\nThis code was not added for MS Windows, so it is anyone's guess whether\nit needs it. Let's remove it and see.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 20 Sep 1999 09:21:04 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] why do shmem attach?"
},
{
"msg_contents": "> Hiroshi Inoue wrote:\n> > \n> > Where do we attach to shmem after fork() ?\n> > I couldn't find the place.\n> \n> Ops, sorry, you're right - postinit.c:InitCommunication():\n> \n> if (!IsUnderPostmaster) /* postmaster already did this */\n> {\n> PostgresIpcKey = key;\n> AttachSharedMemoryAndSemaphores(key);\n> }\n> \n> Though, AttachSharedMemoryAndSemaphores():\n> \n> if (key == PrivateIPCKey)\n> { \n> CreateSharedMemoryAndSemaphores(key, 16);\n> return;\n> }\n> \n> ... and useless shmem attachment stuff follows after this ...\n> \n> Cleanup is still required, but subj is closed, thanks -:)\n\nMy guess is that this is something I missed when removing the exec().\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 20 Sep 1999 09:23:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] why do shmem attach?"
},
{
"msg_contents": "Vadim Mikheev <[email protected]> writes:\n> Though, AttachSharedMemoryAndSemaphores():\n> if (key == PrivateIPCKey)\n> { \n> CreateSharedMemoryAndSemaphores(key, 16);\n> return;\n> }\n> ... and useless shmem attachment stuff follows after this ...\n\nThat path is used for a standalone backend. Is that useless?\n\n> Cleanup is still required, but subj is closed, thanks -:)\n\nI don't think it's worth messing with either. It'd be nice for code\nbeautification purposes to (a) combine the three shared-mem segments\nwe currently have into one, and (b) rely on the postmaster's having\nattached the segment, so that all backends will see it at the same\nlocation in their address space, which would let us get rid of the\nMAKE_OFFSET/MAKE_PTR cruft. But getting the full benefit would\nrequire cleaning up a lot of code, and it just doesn't seem like\na high-priority task. I'm also a little worried that we'd be\nsacrificing portability --- some day we might be glad that we can\nmove those segments around...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 20 Sep 1999 09:44:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] why do shmem attach? "
},
{
"msg_contents": "Vadim Mikheev <[email protected]> writes:\n> Also, all this ShmemIndex stuff seems to be useless\n> (except of backend PID lookup but it's for sure\n> should be in separate hash table).\n\nHave I got a deal for you ;-). I have uncommitted changes that add\na pointer (SHMEM_OFFSET that is) to each backend's PROC struct into\nthe per-backend info array that already existed in shmem.c. The\nroutines in shmem.c that searched for PROC structures are now in\nsinval.c, and just do a simple scan of the ProcState array to find\nthe PROC structs. They should be a whole lot faster --- which is\ngood since these things run with spinlocks held...\n\nThese changes are intermixed with other things that are currently\ntriggering a lot of NOTICE: Buffer Leak messages in the regress tests,\nso I don't want to commit until I've puzzled out the buffer refcount\nissue. But I've got 'em and they seem to work fine.\n\n> And why separate shmem segment (!!!) is used for \n> Slocks (ipc.c:CreateAndInitSLockMemory(), etc) - they\n> use so small amount of memory!\n\nHistorical reasons I suppose. shmem.c does assume that spinlocks\nare already up and running when it is initialized, so combining\neverything into one segment would require care. But it's surely\ndoable if someone wants to take the time. (I don't.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 20 Sep 1999 09:55:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] why do shmem attach? "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Vadim Mikheev <[email protected]> writes:\n> > Though, AttachSharedMemoryAndSemaphores():\n> > if (key == PrivateIPCKey)\n> > {\n> > CreateSharedMemoryAndSemaphores(key, 16);\n> > return;\n> > }\n> > ... and useless shmem attachment stuff follows after this ...\n> \n> That path is used for a standalone backend. Is that useless?\n\nIsn't key equal to PrivateIPCKey for standalone backend?\n\n> \n> > Cleanup is still required, but subj is closed, thanks -:)\n> \n> I don't think it's worth messing with either. It'd be nice for code\n> beautification purposes to (a) combine the three shared-mem segments\n> we currently have into one, and (b) rely on the postmaster's having\n\nI would try to use more than one segment for buffer pool if\nmax seg size is not enough for all buffers.\n\n> attached the segment, so that all backends will see it at the same\n> location in their address space, which would let us get rid of the\n> MAKE_OFFSET/MAKE_PTR cruft. But getting the full benefit would\n> require cleaning up a lot of code, and it just doesn't seem like\n> a high-priority task. I'm also a little worried that we'd be\n> sacrificing portability --- some day we might be glad that we can\n> move those segments around...\n\nWe can't. MAKE_OFFSET/MAKE_PTR was used because of after\nfork/exec/shmat backend' ShmemBase was different from\npostmaster' one. But we can't move *BufferDescriptors \nif some running backend already uses BufferDescriptors.\nBut I agreed - this is not high-priority task -:)\n\nVadim\n",
"msg_date": "Mon, 20 Sep 1999 22:09:20 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] why do shmem attach?"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Vadim Mikheev <[email protected]> writes:\n> > Also, all this ShmemIndex stuff seems to be useless\n> > (except of backend PID lookup but it's for sure\n> > should be in separate hash table).\n> \n> Have I got a deal for you ;-). I have uncommitted changes that add\n> a pointer (SHMEM_OFFSET that is) to each backend's PROC struct into\n> the per-backend info array that already existed in shmem.c. The\n> routines in shmem.c that searched for PROC structures are now in\n> sinval.c, and just do a simple scan of the ProcState array to find\n> the PROC structs. They should be a whole lot faster --- which is\n> good since these things run with spinlocks held...\n\nNice. I have new member for PROC that should be searched\nsometime -:)\n\nVadim\n",
"msg_date": "Mon, 20 Sep 1999 22:12:28 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] why do shmem attach?"
},
{
"msg_contents": "> I don't think it's worth messing with either. It'd be nice for code\n> beautification purposes to (a) combine the three shared-mem segments\n> we currently have into one, and (b) rely on the postmaster's having\n> attached the segment, so that all backends will see it at the same\n> location in their address space, which would let us get rid of the\n> MAKE_OFFSET/MAKE_PTR cruft. But getting the full benefit would\n> require cleaning up a lot of code, and it just doesn't seem like\n> a high-priority task. I'm also a little worried that we'd be\n> sacrificing portability --- some day we might be glad that we can\n> move those segments around...\n\nMy opinion is that this code is complex enough without additional\ncomplexity. If something can be removed/cleaned, why not do it? It is\nusually very easy to do and doesn't take much time. The next person who\nhas to mess with it will thank us.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 20 Sep 1999 10:13:48 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] why do shmem attach?"
},
{
"msg_contents": "Vadim Mikheev <[email protected]> writes:\n>> I don't think it's worth messing with either. It'd be nice for code\n>> beautification purposes to (a) combine the three shared-mem segments\n>> we currently have into one, and (b) rely on the postmaster's having\n\n> I would try to use more than one segment for buffer pool if\n> max seg size is not enough for all buffers.\n\nAh, that would be a nice end-run around kernels with small SHMMAX,\nwouldn't it?\n\n>> I'm also a little worried that we'd be\n>> sacrificing portability --- some day we might be glad that we can\n>> move those segments around...\n\n> We can't. MAKE_OFFSET/MAKE_PTR was used because of after\n> fork/exec/shmat backend' ShmemBase was different from\n> postmaster' one. But we can't move *BufferDescriptors \n> if some running backend already uses BufferDescriptors.\n\nRight, we can't relocate a segment within the address space of\nan already-running backend. What I meant was that being able\nto put it at different addresses in different backends might be\nneeded again someday, even though right now we don't need it.\n\n> But I agreed - this is not high-priority task -:)\n\nYup. Plenty of high-priority ones, too...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 20 Sep 1999 10:36:31 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] why do shmem attach? "
},
{
"msg_contents": "Vadim Mikheev <[email protected]> writes:\n> Tom Lane wrote:\n>> Have I got a deal for you ;-). I have uncommitted changes that add\n>> a pointer (SHMEM_OFFSET that is) to each backend's PROC struct into\n>> the per-backend info array that already existed in shmem.c.\n\n> Nice. I have new member for PROC that should be searched\n> sometime -:)\n\nOK, cool. Easy enough to add now. The reason I did this was that\nI added to PROC the OID of the database the backend is attached to,\nso that I could make a routine to tell whether any running backends\nare connected to a given database. I couldn't quite stomach adding\nyet another ShmemIndex-traverser to shmem.c, so...\n\n(I'm sure you can see already where I'm going with that: DESTROY\nDATABASE now refuses to destroy a database that has running backends.\nI got burnt that way once too often. The interlock against\nhalfway-started backends was a tad tricky, but I think it works.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 20 Sep 1999 10:49:10 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] why do shmem attach? "
}
] |
[
{
"msg_contents": "Tom Lane <[email protected]> writes:\n>>>> MySQL: 0.498u 0.150s 0:02.50 25.6% 10+1652k 0+0io 0pf+0w\n>>>> PgSQL: 0.494u 0.061s 0:19.78 2.7% 10+1532k 0+0io 0pf+0w\n>\n>> No --- if he were, it'd be all CPU time, not 2.7% CPU usage.\n>\n>Er, wait a second. Are we measuring backend-process runtime here,\n>or is that the result of 'time' applied to a *client* ?\n\nYeah, that would explain a lot. When I first saw the numbers, I was so\nexcited because they showed that PostgreSQL is *faster* than MySQL (with\nmore memory, and better I/O).\n\nThat didn't make any sense, though. MySQL is faster than every real DBMS,\nbecause it doesn't have transactions, triggers, locking, or any other sort\nof useful features to slow it down.\n\nThe question should always be, is PostgreSQL faster than Oracle, Informix,\nor Sybase?\n\n\t-Michael\n\n",
"msg_date": "Mon, 20 Sep 1999 11:44:55 +0800 (CST)",
"msg_from": "Michael Robinson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] All things equal, we are still alot slower then MySQL?"
},
{
"msg_contents": "> Tom Lane <[email protected]> writes:\n> >>>> MySQL: 0.498u 0.150s 0:02.50 25.6% 10+1652k 0+0io 0pf+0w\n> >>>> PgSQL: 0.494u 0.061s 0:19.78 2.7% 10+1532k 0+0io 0pf+0w\n> >\n> >> No --- if he were, it'd be all CPU time, not 2.7% CPU usage.\n> >\n> >Er, wait a second. Are we measuring backend-process runtime here,\n> >or is that the result of 'time' applied to a *client* ?\n> \n> Yeah, that would explain a lot. When I first saw the numbers, I was so\n> excited because they showed that PostgreSQL is *faster* than MySQL (with\n> more memory, and better I/O).\n> \n> That didn't make any sense, though. MySQL is faster than every real DBMS,\n> because it doesn't have transactions, triggers, locking, or any other sort\n> of useful features to slow it down.\n> \n> The question should always be, is PostgreSQL faster than Oracle, Informix,\n> or Sybase?\n\nI am told we are the same as Ingres, and slower than Oracle with no -F,\nand faster than Oracle with -F.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 20 Sep 1999 00:06:36 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] All things equal, we are still alot slower then MySQL?"
},
{
"msg_contents": "> \n> > I am told we are the same as Ingres, and slower than Oracle with no -F,\n> > and faster than Oracle with -F.\n> \n> What is \"-F\"?\n> \n\n-F is postgres option for no-fsync.\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 20 Sep 1999 09:27:10 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] All things equal, we are still alot slower then MySQL?"
},
{
"msg_contents": "In article <[email protected]>,\nBruce Momjian <[email protected]> wrote:\n>> \n>> > I am told we are the same as Ingres, and slower than Oracle with no -F,\n>> > and faster than Oracle with -F.\n>> \n>> What is \"-F\"?\n>> \n>\n>-F is postgres option for no-fsync.\n\nDoes that matter on read-only selects?\n\nMight some future version of postgresql have an option to turn\noff transaction support to match mysql speed for the situations\nwhere that is more important than the ability to roll back?\n\n Les Mikesell\n [email protected]\n",
"msg_date": "20 Sep 1999 21:59:26 -0500",
"msg_from": "[email protected] (Leslie Mikesell)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] All things equal, we are still alot slower then MySQL?"
},
{
"msg_contents": "\n> I am told we are the same as Ingres, and slower than Oracle with no -F,\n> and faster than Oracle with -F.\n\nWhat is \"-F\"?\n\n-- \nChris Bitmead\nmailto:[email protected]\n",
"msg_date": "Sat, 01 Jan 2000 10:42:56 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] All things equal, we are still alot slower then MySQL?"
}
] |
[
{
"msg_contents": ">\n> Hi , Jan\n>\n> my name is Max .\n\nHi Max,\n\n>\n> I have contributed to SPI interface ,\n> that with external Trigger try to make\n> a referential integrity.\n>\n> If I can Help , in something ,\n> I'm here .\n>\n\n You're welcome.\n\n I've CC'd the hackers list because we might get some ideas\n from there too (and to surface once in a while - Bruce\n already missed me).\n\n Currently I'm very busy for serious work so I don't find\n enough spare time to start on such a big change to\n PostgreSQL. But I'd like to give you an overview of what I\n have in mind so far so you can decide if you're able to help.\n\n Referential integrity (RI) is based on constraints defined in\n the schema of a database. There are some different types of\n constraints:\n\n 1. Uniqueness constraints.\n\n 2. Foreign key constraints that ensure that a key value used\n in an attribute exists in another relation. One\n constraint must ensure you're unable to INSERT/UPDATE to\n a value that doesn't exist, another one must prevent\n DELETE on a referenced key item or that it is changed\n during UPDATE.\n\n 3. Cascading deletes that let rows referring to a key follow\n on DELETE silently.\n\n Even if not defined in the standard (AFAIK) there could be\n others like letting references automatically follow on UPDATE\n to a key value.\n\n All constraints can be enabled and/or default to be deferred.\n That means, that the RI checks aren't performed when they are\n triggerd. Instead, they're checked at transaction end or if\n explicitly invoked by some special statement. This is really\n important because someone must be able to setup cyclic RI\n checks that could never be satisfied if the checks would be\n performed immediately. The major problem on this is the\n amount of data affected until the checks must be performed.\n The number of statements executed, that trigger such deferred\n constraints, shouldn't be limited. And one single\n INSERT/UPDATE/DELETE could affect thousands of rows.\n\n Due to these problems I thought, it might not be such a good\n idea to remember CTID's or the like to get back OLD/NEW rows\n at the time the constraints are checked. Instead I planned to\n misuse the rule system for it. Unfortunately, the rule system\n has damned tricky problems itself when it comes to having-,\n distinct and other clauses and extremely on aggregates and\n subselects. These problems would have to get fixed first. So\n it's a solution that cannot be implemented right now.\n\n Fallback to CTID remembering though. There are problems too\n :-(. Let's enhance the trigger mechanism with a deferred\n feature. First this requires two additional bool attributes\n in the pg_trigger relation that tell if this trigger is\n deferrable and if it is deferred by default. While at it we\n should add another bool that tells if the trigger is enabled\n (ALTER TRIGGER {ENABLE|DISABLE} trigger).\n\n Second we need an internal list of triggers, that are\n currently DEFINED AS DEFERRED. Either because they default to\n it, or the user explicitly asked to deferr it.\n\n Third we need an internal list of triggers that must be\n invoked later because at the time an event occured where they\n should have been triggered, they appeared in the other list\n and their execution is delayed until transaction end or\n explicit execution. 
This list must remember the OID of the\n trigger to invoke (to identify the procedure and the\n arguments), the relation that caused the trigger and the\n CTID's of the OLD and NEW row.\n\n That last list could grow extremely! Think of a trigger\n that's executing commands over SPI which in turn activate\n deferred triggers. Since the order of trigger execution is\n very important for RI, I can't see any chance to\n simplify/condense this information. Thus it is 16 bytes at\n least per deferred trigger call (2 OID's plus 2 CTID's). I\n think one or more temp files would fit best for this.\n\n A last tricky point is if one of a bunch of deferred triggers\n is explicitly called for execution. At this time, the entries\n for it in the temp file(s) must get processed and marked\n executed (maybe by overwriting the triggers OID with the\n invalid OID) while other trigger events still have to get\n recorded.\n\n Needless to say that reading thousands of those entries just\n to find a few isn't good on performance. But better have this\n special case slow that dealing with hundreds of temp files or\n other overhead slowing down the usual case where ALL deferred\n triggers get called at transaction end.\n\n Trigger invocation is simple now - fetch the OLD and NEW rows\n by CTID and execute the trigger as done by the trigger\n manager. Oh - well - vacuum shouldn't touch relations where\n deferred triggers are outstanding. Might require some\n special lock entry - Vadim?\n\n Did I miss something?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
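The per-event record Jan describes could be as simple as the following sketch -- all names are illustrative, and with a 6-byte CTID representation the record comes to about 20 bytes rather than the 16 estimated above:

    #include <stdio.h>

    typedef unsigned int Oid;
    #define InvalidOid ((Oid) 0)

    typedef struct ItemPointerData    /* a CTID: block number + line number */
    {
        unsigned short ip_blkid_hi;
        unsigned short ip_blkid_lo;
        unsigned short ip_posid;
    } ItemPointerData;

    typedef struct DeferredTriggerEvent
    {
        Oid             triggerOid;   /* which trigger to fire */
        Oid             relationOid;  /* relation that caused the event */
        ItemPointerData oldCtid;      /* OLD row, refetched at fire time */
        ItemPointerData newCtid;      /* NEW row, likewise */
    } DeferredTriggerEvent;

    /* Append one event; file order preserves firing order, which matters
     * for RI as noted above. */
    void
    DeferredTriggerSaveEvent(FILE *evfile, const DeferredTriggerEvent *ev)
    {
        fwrite(ev, sizeof(DeferredTriggerEvent), 1, evfile);
    }

    /* "Mark executed" by overwriting the trigger OID with the invalid OID,
     * so later passes over the file skip the entry. */
    void
    DeferredTriggerMarkDone(FILE *evfile, long off, DeferredTriggerEvent *ev)
    {
        ev->triggerOid = InvalidOid;
        fseek(evfile, off, SEEK_SET);
        fwrite(ev, sizeof(DeferredTriggerEvent), 1, evfile);
    }

The mark-done path is the expensive special case: it rewrites entries in place while new events keep being appended.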
"msg_date": "Mon, 20 Sep 1999 11:59:05 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: Referential Integrity In PostgreSQL"
}
] |
[
{
"msg_contents": "I know people were wondering about Jan, so I just talked to him via\ne-mail, and he has been busy on a big project.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 20 Sep 1999 09:35:51 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Status on Jan Wieck"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> I know people were wondering about Jan, so I just talked to him via\n> e-mail, and he has been busy on a big project.\n\nIs he going to implement RI for 6.6?\n\nVadim\n",
"msg_date": "Mon, 20 Sep 1999 21:40:45 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Status on Jan Wieck"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> > \n> > I know people were wondering about Jan, so I just talked to him via\n> > e-mail, and he has been busy on a big project.\n> \n> Is he going to implement RI for 6.6?\n\nOK, let's CC him with the question. Jan, can you do referential\nintegrity for 6.6?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 20 Sep 1999 10:12:04 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Status on Jan Wieck"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> I know people were wondering about Jan, so I just talked to him via\n> e-mail, and he has been busy on a big project.\n\nGood ... I was starting to fear he'd been run over by a truck or\nsomething :-(\n\nDoes he have any idea when/if he'll return to Postgres hacking?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 20 Sep 1999 10:58:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Status on Jan Wieck "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > I know people were wondering about Jan, so I just talked to him via\n> > e-mail, and he has been busy on a big project.\n> \n> Good ... I was starting to fear he'd been run over by a truck or\n> something :-(\n\nYep. He says \"don't worry\".\n\n> Does he have any idea when/if he'll return to Postgres hacking?\n\nI didn't have the heart to ask him. I just hit him with the referential\nintegrity question. Let's see what he says.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 20 Sep 1999 11:30:50 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Status on Jan Wieck"
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > I know people were wondering about Jan, so I just talked to him via\n> > e-mail, and he has been busy on a big project.\n> \n> Good ... I was starting to fear he'd been run over by a truck or\n> something :-(\n> \n> Does he have any idea when/if he'll return to Postgres hacking?\n\nThat brings up another issue.\n\nWhy do people think programmers are throwing themselves in front of\ntrucks? Any idea? I hear it often.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 20 Sep 1999 11:32:16 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Status on Jan Wieck"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> > \n> > I know people were wondering about Jan, so I just talked to him via\n> > e-mail, and he has been busy on a big project.\n> \n> Is he going to implement RI for 6.6?\n\nJan, you know we miss you when you have your own subject thread on the\nhackers list.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 20 Sep 1999 11:33:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Status on Jan Wieck"
},
{
"msg_contents": ">\n> Bruce Momjian wrote:\n> >\n> > I know people were wondering about Jan, so I just talked to him via\n> > e-mail, and he has been busy on a big project.\n>\n> Is he going to implement RI for 6.6?\n\n Depends on WHEN 6.6 is planned to go into feature-freeze.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 20 Sep 1999 18:57:09 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Status on Jan Wieck"
},
{
"msg_contents": "Jan Wieck wrote:\n> \n> >\n> > Bruce Momjian wrote:\n> > >\n> > > I know people were wondering about Jan, so I just talked to him via\n> > > e-mail, and he has been busy on a big project.\n> >\n> > Is he going to implement RI for 6.6?\n> \n> Depends on WHEN 6.6 is planned to go into feature-freeze.\n\nWell, I believe that we have at least 3 months before 1st beta.\nWe need in DIRTY READs for RI and I'll implement them.\nIf you'll not be able to do RI itself then we might\nchange refint.c to use DIRTY READs and so avoid LOCK TABLE \non application level (i.e. restore pre-6.5 refint.c using).\n\nVadim\n",
"msg_date": "Tue, 21 Sep 1999 01:23:56 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Status on Jan Wieck"
},
{
"msg_contents": "> >\n> > Bruce Momjian wrote:\n> > >\n> > > I know people were wondering about Jan, so I just talked to him via\n> > > e-mail, and he has been busy on a big project.\n> >\n> > Is he going to implement RI for 6.6?\n> \n> Depends on WHEN 6.6 is planned to go into feature-freeze.\n\nYou tell us... We clearly could wait if you have some idea on a\ntimeframe. There has been no talk of a 6.6 release schedule yet. Vadim\nis still working on logging, and Tom Lane is working too.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 20 Sep 1999 13:39:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Status on Jan Wieck"
},
{
"msg_contents": "> > Depends on WHEN 6.6 is planned to go into feature-freeze.\n> \n> Well, I believe that we have at least 3 months before 1st beta.\n> We need in DIRTY READs for RI and I'll implement them.\n> If you'll not be able to do RI itself then we might\n> change refint.c to use DIRTY READs and so avoid LOCK TABLE \n> on application level (i.e. restore pre-6.5 refint.c using).\n\nYikes. Three months. That puts us at release in mid-January.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 20 Sep 1999 14:14:51 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Status on Jan Wieck"
},
{
"msg_contents": "On Mon, 20 Sep 1999, Bruce Momjian wrote:\n\n> > >\n> > > Bruce Momjian wrote:\n> > > >\n> > > > I know people were wondering about Jan, so I just talked to him via\n> > > > e-mail, and he has been busy on a big project.\n> > >\n> > > Is he going to implement RI for 6.6?\n> > \n> > Depends on WHEN 6.6 is planned to go into feature-freeze.\n> \n> You tell us... We clearly could wait if you have some idea on a\n> timeframe. There has been no talk of a 6.6 release schedule yet. Vadim\n> is still working on logging, and Tom Lane is working too.\n\nI want to get as much of the JDBC api done for 6.6 as well, so I think\nthere is going to be a lot of new stuff in it.\n\nPeter\n\n--\n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n",
"msg_date": "Mon, 20 Sep 1999 19:50:43 +0100 (GMT)",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Status on Jan Wieck"
},
{
"msg_contents": "On Mon, 20 Sep 1999, Jan Wieck wrote:\n\n> >\n> > Bruce Momjian wrote:\n> > >\n> > > I know people were wondering about Jan, so I just talked to him via\n> > > e-mail, and he has been busy on a big project.\n> >\n> > Is he going to implement RI for 6.6?\n> \n> Depends on WHEN 6.6 is planned to go into feature-freeze.\n\nAfter you implement RI? :) Since we've gone the -STABLE branch\nfixes/releases route, 6.6 is less of a panic, so if you have some sort of\na timeline for this, its *very* easy to work around it :)\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 20 Sep 1999 18:29:09 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Status on Jan Wieck"
},
{
"msg_contents": "On Mon, 20 Sep 1999, Bruce Momjian wrote:\n\n> > Bruce Momjian <[email protected]> writes:\n> > > I know people were wondering about Jan, so I just talked to him via\n> > > e-mail, and he has been busy on a big project.\n> > \n> > Good ... I was starting to fear he'd been run over by a truck or\n> > something :-(\n> > \n> > Does he have any idea when/if he'll return to Postgres hacking?\n> \n> That brings up another issue.\n> \n> Why do people think programmers are throwing themselves in front of\n> trucks? Any idea? I hear it often.\n\nFew of us get the window office on a high enough floor to jump? :) Those\nare reserved for the accountants and stock brokers :)\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 20 Sep 1999 18:30:56 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Status on Jan Wieck"
},
{
"msg_contents": ">\n> Bruce Momjian <[email protected]> writes:\n> > I know people were wondering about Jan, so I just talked to him via\n> > e-mail, and he has been busy on a big project.\n>\n> Good ... I was starting to fear he'd been run over by a truck or\n> something :-(\n>\n> Does he have any idea when/if he'll return to Postgres hacking?\n\n As soon as I see the light at the end of the tunnel (and am\n sure that it's not the coming train). I think it will take\n another two or three weeks.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 21 Sep 1999 10:41:05 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Status on Jan Wieck"
},
{
"msg_contents": ">\n> > Bruce Momjian <[email protected]> writes:\n> > > I know people were wondering about Jan, so I just talked to him via\n> > > e-mail, and he has been busy on a big project.\n> >\n> > Good ... I was starting to fear he'd been run over by a truck or\n> > something :-(\n> >\n> > Does he have any idea when/if he'll return to Postgres hacking?\n>\n> That brings up another issue.\n>\n> Why do people think programmers are throwing themselves in front of\n> trucks? Any idea? I hear it often.\n\n Maybe because it's one of the most failsafe methods?\n Additionally the risk for the truck driver isn't as high as\n it would be for the driver of an A-Class (besides that an A-\n Class driver would treat the whole think like an elk-test -\n and we all know what the result of that is :-).\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 21 Sep 1999 10:56:22 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Status on Jan Wieck"
},
{
"msg_contents": ">\n> > > Depends on WHEN 6.6 is planned to go into feature-freeze.\n> >\n> > Well, I believe that we have at least 3 months before 1st beta.\n> > We need in DIRTY READs for RI and I'll implement them.\n> > If you'll not be able to do RI itself then we might\n> > change refint.c to use DIRTY READs and so avoid LOCK TABLE\n> > on application level (i.e. restore pre-6.5 refint.c using).\n>\n> Yikes. Three months. That puts us at release in mid-January.\n\n Three months - sounds fine. I just posted another few ideas\n on the issue. After rereading it, I'm sure now that doing RI\n with the rulesystem would open a horrible can of worms.\n Especially in the case a trigger procedure is using a query\n which in turn triggers a deferred rule.\n\n Each trigger invocation (maybe for thousands of rows) will\n execute it's own queries, resulting in a separate parsetree\n for the deferred actions. Where to hold them? Parsetrees can\n be huge!\n\n I'm sure now that remembering the CTID's of the tuples that\n must get reread to fire the trigger is the smaller problem.\n\n I need a little break in my current project - thus I'll take\n a look at it NOW!\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 21 Sep 1999 11:11:22 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Status on Jan Wieck"
},
{
"msg_contents": "On Tue, 21 Sep 1999, Jan Wieck wrote:\n\n> >\n> > Bruce Momjian <[email protected]> writes:\n> > > I know people were wondering about Jan, so I just talked to him via\n> > > e-mail, and he has been busy on a big project.\n> >\n> > Good ... I was starting to fear he'd been run over by a truck or\n> > something :-(\n> >\n> > Does he have any idea when/if he'll return to Postgres hacking?\n> \n> As soon as I see the light at the end of the tunnel (and am\n> sure that it's not the coming train). I think it will take\n> another two or three weeks.\n\nFigure a Jan/Feb release for 6.6...enough time? :)\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 21 Sep 1999 09:17:57 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Status on Jan Wieck"
},
{
"msg_contents": "Jan Wieck wrote:\n> >\n> > Bruce Momjian <[email protected]> writes:\n> > > I know people were wondering about Jan, so I just talked to him via\n> > > e-mail, and he has been busy on a big project.\n> >\n> > Good ... I was starting to fear he'd been run over by a truck or\n> > something :-(\n> >\n> > Does he have any idea when/if he'll return to Postgres hacking?\n> \n> As soon as I see the light at the end of the tunnel (and am\n> sure that it's not the coming train). I think it will take\n> another two or three weeks.\n\nDidn't you hear Jan? Due to budget constraints, the light at the end of the tunnel has been turned off :-)\n-- \n____ | Billy G. Allie | Domain....: [email protected]\n| /| | 7436 Hartwell | Compuserve: 76337,2061\n|-/-|----- | Dearborn, MI 48126| MSN.......: [email protected]\n|/ |LLIE | (313) 582-1540 |",
"msg_date": "Wed, 22 Sep 1999 03:29:48 -0400",
"msg_from": "\"Billy G. Allie\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Status on Jan Wieck "
}
] |
[
{
"msg_contents": "Here in the uk, it's usually a number 42 Bus...\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n\n-----Original Message-----\nFrom: Bruce Momjian [mailto:[email protected]]\nSent: 20 September 1999 16:32\nTo: Tom Lane\nCc: \"PostgreSQL-development\"@candle.pha.pa.us;\[email protected]; Jan Wieck\nSubject: Re: [HACKERS] Status on Jan Wieck\n\n\n> Bruce Momjian <[email protected]> writes:\n> > I know people were wondering about Jan, so I just talked to him via\n> > e-mail, and he has been busy on a big project.\n> \n> Good ... I was starting to fear he'd been run over by a truck or\n> something :-(\n> \n> Does he have any idea when/if he'll return to Postgres hacking?\n\nThat brings up another issue.\n\nWhy do people think programmers are throwing themselves in front of\ntrucks? Any idea? I hear it often.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania\n19026\n\n************\n",
"msg_date": "Mon, 20 Sep 1999 17:27:27 +0100",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Status on Jan Wieck"
}
] |
[
{
"msg_contents": "\nHi,\n\nUsing PostgreSQL 6.5.2 just compiled on a RedHat 6.0 box.\n\n------------------------\nBEGIN;\n\nSELECT username,count(*) AS nsessions INTO TEMP TABLE active_nsessions FROM\nactive GROUP BY username\n\nSELECT username,count(*) AS nlinks INTO TEMP TABLE active_nlinks FROM active\nGROUP BY username,port,server\n\nSELECT * FROM active,counters,users,active_nsessions,active_nlinks WHERE\nactive.username=users.username AND active.username=counters.username AND\nactive.username=active_nsessions.username AND\nactive.username=active_nlinks.username\n\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\n------------------------\n\nActive table is empty with this structure:\n\nCREATE TABLE active\n(\n username varchar(32),\n server varchar(35),\n pop varchar(20),\n remAddr varchar(30),\n port varchar(15),\n service varchar(10),\n addr inet,\n privilege int2,\n authenmethod int2,\n authentype int2,\n authenservice int2,\n starttime datetime,\n taskid int4,\n callerid varchar(21),\n callednumber varchar(21),\n rxrate int4,\n txrate int4,\n\n bytesin int8,\n bytesout int8,\n paksin int8,\n paksout int8,\n\n watchdog_timeout timespan,\n watchdog_lastreset datetime,\n\n logout_requested bool,\n logout_request_time datetime\n);\n\nBye!\n\n-- \n Daniele\n\n-------------------------------------------------------------------------------\n Daniele Orlandi - Utility Line Italia - http://www.orlandi.com\n Via Mezzera 29/A - 20030 - Seveso (MI) - Italy\n-------------------------------------------------------------------------------\n",
"msg_date": "Mon, 20 Sep 1999 18:54:56 +0200",
"msg_from": "Daniele Orlandi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Crash (maybe related to temp tables)"
}
] |
[
{
"msg_contents": "\n>> Thanks -- I did a similar change, which had similar results.\n>> \n>> I'm afraid I can't be of any help on the question of how \n>> BSDI handles shared libraries. I don't currently have a support\n>> contract with them, so I don't have a graceful \"in\" to ask them.\n>> \n>> I would say you have now duplicated the problem I was asking about, \n>> and if someone can solve it, I'll be a very happy fella.\n>\n>Does the new patch make it work, or just fail in a different way?\n\nHmmm... I might have missed a message, but if by \"the new patch\" you\nmean the patch that declares yyline in pl_comp.c, then the way to \nput it is that I get the same failure you do: plpgsql regression\ntest fails with the same (or nearly the same) error message you \ngot. The only difference between our patches is that you got rid\nof the \"extern\" in pl_comp.c and I did it in scan.l.\n\nOr, to put it another way:\n\nVirgin 6.5.2 on BSDI 4.0:\n\nRegresion test yields this error in the postmaster output on every \nattempt to use plpgsql:\n\n/usr/local/pgsql.new/bin/postmaster: can't resolve symbol 'plpgsql_yylineno'\nERROR: Load of file /usr/local/pgsql.new/lib/plpgsql.so failed: Unable to resolve symbol\n\nI then came up with the patch of removing the \"extern\" from the declaration\nof plpgsql_yylineno in scan.l (actually, it's called \"yylineno\" there and\nchanged later. Call that \"Nat Patch #1\"\n\nYou came up with a similar patch with identical consequences: you got \nrid of the \"extern\" from the delcaration of plpgsql_yylineno in \npl_comp.c. Call this \"Bruce Patch #1\".\n\n\n6.5.2 with Nat Patch #1 OR Bruce patch #1 fixes *most* of the \nplpgsql errors, but the plpgsql regression still fails, in a smaller way:\nyou get these errors on postmaster output:\n\n...\nERROR: Relation 'tmp' does not have attribute 'k'\nNOTICE: plpgsql: ERROR during compile of wslot_slotlink_view near line 1\nERROR: syntax error at or near \"q0xf2&^H^F\"\nDEBUG: Last error occured while executing PL/pgSQL function pslot_backlink_view\nDEBUG: line 1 at return\nNOTICE: plpgsql: ERROR during compile of wslot_slotlink_view near line 1\nERROR: syntax error at or near \"q0xf2&^H^F\"\nDEBUG: Last error occured while executing PL/pgSQL function pslot_backlink_view\nDEBUG: line 1 at return\n\n\nand the regression.diffs file contains this:\n\n\n*** expected/plpgsql.out Wed Sep 30 23:38:35 1998\n--- results/plpgsql.out Sun Sep 19 01:48:14 1999\n***************\n*** 1275,1319 ****\n QUERY: insert into IFace values ('IF', 'orion', 'eth0', 'WS.002.1b');\n QUERY: update PSlot set slotlink = 'HS.base.hub1.1' where slotname = 'PS.base.b2';\n QUERY: select * from PField_v1 where pfname = 'PF0_1' order by slotname;\n! pfname|slotname |backside |patch\n! ------+--------------------+--------------------------------------------------------+---------------------------------------------\n! PF0_1 |PS.base.a1 |WS.001.1a in room 001 -> Phone PH.hc001 (Hicom standard)|PS.base.ta1 -> Phone line -0 (Central call)\n! PF0_1 |PS.base.a2 |WS.001.1b in room 001 -> - |-\n! PF0_1 |PS.base.a3 |WS.001.2a in room 001 -> Phone PH.fax001 (Canon fax) |PS.base.ta2 -> Phone line -501 (Fax entrance)\n! PF0_1 |PS.base.a4 |WS.001.2b in room 001 -> - |-\n! PF0_1 |PS.base.a5 |WS.001.3a in room 001 -> - |-\n! PF0_1 |PS.base.a6 |WS.001.3b in room 001 -> - |-\n! PF0_1 |PS.base.b1 |WS.002.1a in room 002 -> Phone PH.hc002 (Hicom standard)|PS.base.ta5 -> Phone line -103\n! 
PF0_1 |PS.base.b2 |WS.002.1b in room 002 -> orion IF eth0 (PC) |Patchfield PF0_1 hub slot 1\n! PF0_1 |PS.base.b3 |WS.002.2a in room 002 -> Phone PH.hc003 (Hicom standard)|PS.base.tb2 -> Phone line -106\n! PF0_1 |PS.base.b4 |WS.002.2b in room 002 -> - |-\n! PF0_1 |PS.base.b5 |WS.002.3a in room 002 -> - |-\n! PF0_1 |PS.base.b6 |WS.002.3b in room 002 -> - |-\n! PF0_1 |PS.base.c1 |WS.003.1a in room 003 -> - |-\n! PF0_1 |PS.base.c2 |WS.003.1b in room 003 -> - |-\n! PF0_1 |PS.base.c3 |WS.003.2a in room 003 -> - |-\n! PF0_1 |PS.base.c4 |WS.003.2b in room 003 -> - |-\n! PF0_1 |PS.base.c5 |WS.003.3a in room 003 -> - |-\n! PF0_1 |PS.base.c6 |WS.003.3b in room 003 -> - |-\n! (18 rows)\n!\n QUERY: select * from PField_v1 where pfname = 'PF0_2' order by slotname;\n! pfname|slotname |backside |patch\n! ------+--------------------+------------------------------+----------------------------------------------------------------------\n! PF0_2 |PS.base.ta1 |Phone line -0 (Central call) |PS.base.a1 -> WS.001.1a in room 001 -> Phone PH.hc001 (Hicom standard)\n! PF0_2 |PS.base.ta2 |Phone line -501 (Fax entrance)|PS.base.a3 -> WS.001.2a in room 001 -> Phone PH.fax001 (Canon fax)\n! PF0_2 |PS.base.ta3 |Phone line -102 |-\n! PF0_2 |PS.base.ta4 |- |-\n! PF0_2 |PS.base.ta5 |Phone line -103 |PS.base.b1 -> WS.002.1a in room 002 -> Phone PH.hc002 (Hicom standard)\n! PF0_2 |PS.base.ta6 |Phone line -104 |-\n! PF0_2 |PS.base.tb1 |- |-\n! PF0_2 |PS.base.tb2 |Phone line -106 |PS.base.b3 -> WS.002.2a in room 002 -> Phone PH.hc003 (Hicom standard)\n! PF0_2 |PS.base.tb3 |Phone line -108 |-\n! PF0_2 |PS.base.tb4 |Phone line -109 |-\n! PF0_2 |PS.base.tb5 |Phone line -121 |-\n! PF0_2 |PS.base.tb6 |Phone line -122 |-\n! (12 rows)\n!\n QUERY: insert into PField values ('PF1_1', 'should fail due to unique index');\n ERROR: Cannot insert a duplicate key into a unique index\n QUERY: update PSlot set backlink = 'WS.not.there' where slotname = 'PS.base.a1';\n--- 1275,1285 ----\n QUERY: insert into IFace values ('IF', 'orion', 'eth0', 'WS.002.1b');\n QUERY: update PSlot set slotlink = 'HS.base.hub1.1' where slotname = 'PS.base.b2';\n QUERY: select * from PField_v1 where pfname = 'PF0_1' order by slotname;\n! NOTICE: plpgsql: ERROR during compile of wslot_slotlink_view near line 1\n! ERROR: parse error at or near \"q0xe2&^H^F\"\n QUERY: select * from PField_v1 where pfname = 'PF0_2' order by slotname;\n! NOTICE: plpgsql: ERROR during compile of wslot_slotlink_view near line 1\n! ERROR: parse error at or near \"q0xe2&^H^F\"\n QUERY: insert into PField values ('PF1_1', 'should fail due to unique index');\n ERROR: Cannot insert a duplicate key into a unique index\n QUERY: update PSlot set backlink = 'WS.not.there' where slotname = 'PS.base.a1';\n\n----------------------\n\n\nSo, the remaining problem is to fix those errors. Now, it's possible\nyou sent \"Bruce Patch #2\", and I missed it. Did you?\n\nOr do you still have the error from the regression test shown above?\n\nIf you do, then there's still work to do, but if you don't, I missed\nsomething -- please send it again!\n\nThanks a lot for the help! \n\n\n\n",
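The underlying linkage issue is easy to reproduce outside plpgsql. With an extern declaration and no definition anywhere in the shared object, the BSD/OS loader has nothing to resolve at load time; dropping the extern turns the declaration into a tentative definition that allocates the storage inside the .so. A minimal illustration (the function name is mine, not the real plpgsql sources):

    /* before the patches: a declaration only -- no storage is emitted,
     * so the .so carries an undefined symbol and BSD/OS reports
     * "can't resolve symbol 'plpgsql_yylineno'" at load time */
    extern int plpgsql_yylineno;

    int
    current_scan_line(void)
    {
        return plpgsql_yylineno;
    }

    /* after the patches: the extern is gone, making this a (tentative)
     * definition that lives inside the shared object itself */
    int plpgsql_yylineno;

(Shown together for contrast; the fix is replacing the first form with the second.) That the load-time error is gone but the regression still trips suggests the remaining failure is a different bug, not linkage.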
"msg_date": "Mon, 20 Sep 1999 14:22:39 -0400",
"msg_from": "Nat Howard <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PORTS] plpgsql & bsdi 4.0 "
},
{
"msg_contents": "> \n> >> Thanks -- I did a similar change, which had similar results.\n> >> \n> >> I'm afraid I can't be of any help on the question of how \n> >> BSDI handles shared libraries. I don't currently have a support\n> >> contract with them, so I don't have a graceful \"in\" to ask them.\n> >> \n> >> I would say you have now duplicated the problem I was asking about, \n> >> and if someone can solve it, I'll be a very happy fella.\n> >\n> >Does the new patch make it work, or just fail in a different way?\n> \n> Hmmm... I might have missed a message, but if by \"the new patch\" you\n> mean the patch that declares yyline in pl_comp.c, then the way to \n> put it is that I get the same failure you do: plpgsql regression\n> test fails with the same (or nearly the same) error message you \n> got. The only difference between our patches is that you got rid\n> of the \"extern\" in pl_comp.c and I did it in scan.l.\n\n> \n> So, the remaining problem is to fix those errors. Now, it's possible\n> you sent \"Bruce Patch #2\", and I missed it. Did you?\n> \n> Or do you still have the error from the regression test shown above?\n> \n> If you do, then there's still work to do, but if you don't, I missed\n> something -- please send it again!\n\nOK, we tried the same thing, and got the same errors. Not sure about a\ncause on this one. I am Cc'ing the author. Very strange. Does anyone\nelse see plpgsql regression failures?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 20 Sep 1999 14:58:00 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PORTS] plpgsql & bsdi 4.0"
}
] |
[
{
"msg_contents": "Hi all,\n\nThis is the output of initdb after a clean install of all the 6.5.1-2 RPMs\nI downloaded from ftp.postgreSQL.org. To install them I did an \"rpm -ivh post*\".\nI'm using RedHat 6 on x86 with 64 Meg Ram and 300 Meg of HD space left.\n \nI used \"initdb -l /usr/lib/pgsql -r /var/lib/pgsql/data -u postgres -d\" to \ninitialyse the database (This is what they say in the manual).\n\nRunning with debug mode on.\ninitdb: using /usr/lib/pgsql/local1_template1.bki.source as input to create the template database.\ninitdb: using /usr/lib/pgsql/global1.bki.source as input to create the global classes.\ninitdb: using /usr/lib/pgsql/pg_hba.conf.sample as the host-based authentication control file.\nWe are initializing the database system with username postgres (uid=104).\nThis user will own all the files and must also own the server process.\nCreating template database in /var/lib/pgsql/data/base/template1\nRunning: postgres -boot -C -F -D/var/lib/pgsql/data -d template1\ninitdb: could not create template database\ninitdb: cleaning up by wiping out /var/lib/pgsql/data/base/template1\n\nThis is a \"ls -al /var/lib/pgsql\"\ntotal 4\ndrwx------ 3 postgres postgres 1024 Sep 20 14:09 .\ndrwxr-xr-x 15 root root 1024 Sep 17 11:07 ..\n-rw-r--r-- 1 root root 35 Sep 1 00:55 odbcinst.ini\n\nIt's obvious i'm doing something wrong! Can someone help me?\n\nDavid Godin\ngodind@REMOVE_THIS.voxco.com\n\n\n\n\n\n\n\n\nHi all,\n \nThis is the output of initdb after a clean install \nof all the 6.5.1-2 RPMs\nI downloaded from ftp.postgreSQL.org. To install \nthem I did an \"rpm -ivh post*\".\nI'm using RedHat 6 \non x86 with 64 Meg Ram and 300 Meg of HD space left.\n \nI used \"initdb -l /usr/lib/pgsql -r \n/var/lib/pgsql/data -u postgres -d\" to \ninitialyse the database (This is what they say in \nthe manual).\n \nRunning with debug mode on.\ninitdb: using \n/usr/lib/pgsql/local1_template1.bki.source as input to create the template \ndatabase.\ninitdb: using /usr/lib/pgsql/global1.bki.source as \ninput to create the global classes.\ninitdb: using /usr/lib/pgsql/pg_hba.conf.sample as \nthe host-based authentication control file.\nWe are initializing the database system with \nusername postgres (uid=104).\nThis user will own all the files and must also own \nthe server process.\nCreating template database in \n/var/lib/pgsql/data/base/template1\nRunning: postgres -boot -C -F -D/var/lib/pgsql/data \n-d template1\ninitdb: could not create template \ndatabase\ninitdb: cleaning up by wiping out \n/var/lib/pgsql/data/base/template1\n \nThis is a \"ls -al /var/lib/pgsql\"\ntotal 4\ndrwx------ 3 postgres postgres 1024 Sep 20 14:09 \n.\ndrwxr-xr-x 15 root root 1024 Sep 17 11:07 \n..\n-rw-r--r-- 1 root root 35 Sep 1 00:55 \nodbcinst.ini\n \nIt's obvious i'm doing something wrong! Can someone \nhelp me?\n \nDavid Godin\ngodind@REMOVE_THIS.voxco.com",
"msg_date": "Mon, 20 Sep 1999 14:59:55 -0400",
"msg_from": "\"David Godin\" <godind@REMOVE_THIS.voxco.com>",
"msg_from_op": true,
"msg_subject": "Installing PostgreSQL"
},
{
"msg_contents": "David Godin <godind@remove_this.voxco.com> wrote:\n\n> Hi all,\n\n> This is the output of initdb after a clean install of all the 6.5.1-2 RPMs\n> I downloaded from ftp.postgreSQL.org. To install them I did an \"rpm -ivh post*\".\n.\n.\n.\n> It's obvious i'm doing something wrong! Can someone help me?\n\n> David Godin\n> godind@REMOVE_THIS.voxco.com\n\nI have installed postgres on a few Debian systems - and have been\nable to test the postgresql database just fine right out of the\nbox. Perhaps you could take the debian package, run it through\nalien and install it instead of your rpm package. Some extra work,\nbut...\n\n-- \nBill Geddes \n [email protected]\n",
"msg_date": "21 Sep 1999 23:11:59 GMT",
"msg_from": "Bill Geddes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Installing PostgreSQL"
}
] |
[
{
"msg_contents": "I have added a doc/TODO.detail directory that contains additional\ninformation about TODO items.\n\nThe TODO list now has references to files in that directory.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 20 Sep 1999 16:06:45 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "New TODO detail"
}
] |
[
{
"msg_contents": "Hi,\n\nhow I could create table with datetime field default to 'now'::text in \na way Jan did in his shoes rule example ?\n\nIf I do:\ntest=> create table test ( a datetime default 'now', b int4);\nCREATE\ntest=> insert into test (b) values (1);\nINSERT 1677899 1\ntest=> insert into test (b) values (2);\nINSERT 1677900 1\ntest=> select * from test;\na |b\n----------------------------+-\nTue 21 Sep 01:48:27 1999 MSD|1\nTue 21 Sep 01:48:27 1999 MSD|2\n(2 rows)\n\nI always get datetime of the moment I created the table, but I'd like\nto have datetime of moment I insert. \n\n\tRegards,\n\n\t\tOleg\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Tue, 21 Sep 1999 01:50:19 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "create table and default 'now' problem ?"
}
] |
[
{
"msg_contents": "RPM's for RedHat are now available at http://www.ramifordistat.net/postgres\n\nSource RPMS are in SRPMS.beta, and i386 binaries are in RPMS.beta. I do still\nconsider these RPM's beta quality -- while the code seems solid and correct for\nthe upgrading, it hasn't been tested on enough machines to say it's past beta\nquailty.\n\nThese RPM's, like their 6.5.1-0.7lo ancestors, respond correctly and sanely to\n'rpm -U'. While it may seem like overkill to release 6.5.1-0.7lo Saturday, and\n6.5.2-0.2lo today, it was necessary. It certainly seems like overkill to\nmaintain both the 6.5.1 and 6.5.2 RPM's concurrently, but it is also necessary\ngiven the situation. Look for 6.5.1-0.8lo, that incorporates the non-6.5.2\nchanges of the 6.5.2-0.2lo rpms) in a few days, unless I get word from RedHat to\ndo otherwise.\n\nBinary RPM's are available now for RedHat 6.0/Intel (kernel 2.2.5, glibc 2.1.1,\negcs 1.1.2). Binary RPM's will be available in a couple of hours for RedHat\n5.2/Intel (kernel 2.0.36, glibc 2.0.7, gcc 2.7.2).\n\n6.5.2-0.2lo contains Ryan Kirkpatrick's packaging of the Alpha patches\noriginally put together by Uncle George. \n\nThere have been a few other changes other than the uprev to 6.5.2. Bang on\nthis one, guys. I'll be interested in seeing Alpha results for these.\n\nAs always, send me mail about any difficulties you find. Please cc: Thomas\nLockhart <[email protected]> on any e-mail to me about these RPM's.\n\nChangelog:\n%changelog\n* Mon Sep 20 1999 Lamar Owen <[email protected]>\n- 6.5.2-0.2lo\n- Upgrade to 6.5.2\n- Add some versioning to the init script -- source is postgresql.init.VERSION\n- Added some intelligence to init script\n- Cleaned up the migration script packaging -- now in a tarball\n- Consolidated some patches\n- Added the JDK 1.2 JDBC jar to the existing JDK1.1 jar.\n- Corrected goof in postgresql package description -- it still referred to the -data\n-- subpackage.\n\n* Sat Sep 18 1999 Lamar Owen <[email protected]>\n- 0.7lo\n- First stab at integrating modified versions of the Debian migration scripts.\n-- Courtesy Oliver Elphick, Debian package maintainer, PostgreSQL Global\n-- Development Group.\n- /usr/lib/pgsql/backup/pg_dumpall_new -- a modifed pg_dumpall used in the\n-- migration -- modified to work with the older executables.\n- /usr/bin/postgresql_dump -- the migration script.\n- Upgrade strategy:\n--\t1.) %pre for main package saves old package's executables\n--\t2.) the postgresql init script in -server detects PGDATA existence\n--\t and version, notifies user if upgrade is necessary\n--\t3.) Rather than fully automating upgrade, the tools are provided:\n--\t a.) /usr/bin/postgresql_dump\n--\t b.) /usr/lib/pgsql/backup/pg_dumpall_new\n--\t c.) The executables backed up by %pre in /usr/lib/pgsql/backup\n--\t4.) Documentation on RPM differences and upgrades in README.rpm\n--\t5.) A fully automatic upgrade can be facilitated by some more code\n--\t in /etc/rc.d/init.d/postgresql, if desired.\n- added documentation for rpm setup, and upgrade (README.rpm)\n- added newer man pages from Thomas Lockhart\n- Put the PL's in the right place -- /usr/lib/pgsql, not /usr/lib. 
My error.\n- Added Requires: postgresql = %{version} for all sub packages.\n- Need to reorganize sources in next release, as the current number of source\n-- files is a little large.\n\n* Tue Sep 07 1999 Cristian Gafton <[email protected]>\n- upgraded pgaccess to the latest 0.98 stable version\n- fix braindead pgaccess installation and add pgaccess dosucmenattaion to\n the package containing pgaccess rather than main package\n- add missing templates tp the /usr/lib/pgsql directory\n- added back the PostgreSQL howto (I wish people will STOP removing\n documentation from this package!)\n- get rid of the perl handling overkill (what the hell was that needed for?)\n- \"chkconfig --del\" should be done in the server package, not the main\n package\n- make server packeg own only /etc/rc.d/init.d/postgresql, not the whole\n /etc/rc.d (doh!)\n- don't ship OS2 executable client as documenatation...\n- if we have a -tcl subpackage, make sure that other packages don't need tcl\n anymore by moving tcl-dependent binaries in the -tcl package... [pltcl.so]\n- if we are using /sbin/chkconfig we don't need the /etc/rc.d/rc?.d symlinks\n\n* Sat Sep 4 1999 Jeff Johnson <[email protected]>\n- use _arch not (unknown!) buildarch macro (#4913).\n\n* Fri Aug 20 1999 Jeff Johnson <[email protected]>\n- obsolete postgres-clients (not conflicts).\n\n* Thu Aug 19 1999 Jeff Johnson <[email protected]>\n- add to Red Hat 6.1.\n\n* Wed Aug 11 1999 Lamar Owen <[email protected]>\n- Release 3lo\n- Picked up pgaccess README.\n- Built patch set for rpm versus tarball idiosyncrasies:\n-- munged some paths in the regression tests (_OBJWD_), trigger functions\n-- munged USER for regression tests.\n-- Added perl and python examples -- required patching the shebang to drop\n-- local in /usr/local/bin \n- Changed rc.d level from S99 to S75, as there are a few server daemons that\n-- might actually need to load AFTER pgsql -- AOLserver is an example.\n- config.guess included in server package by default -- used by regress tests.\n- Preliminary test subpackage, containing entire src/test tree.\n- Prebuild of binaries in the test subpackage.\n- Added pgaccess-0.97 beta as /usr/bin/pgaccess97 for testing\n- Removed the DATABASE-HOWTO; it was SO old, and the newer release of it\n-- is a stock part of the RedHat HOWTOS package.\n- Put in the RIGHT postgresql.init ('/etc/rc.d/init.d/postgresql')\n- Noted that the perl client is operational.\n\n* Fri Aug 6 1999 Lamar Owen <[email protected]>\n- Release 2lo\n- Added alpha patches courtesy Ryan Kirkpatrick and Uncle George\n- Renamed lamar owen series of RPMS with release of #lo\n- Put Ramifordistat as vendor and URL for lamar owen RPM series, until non-beta\n-- release coordinated with PGDG.\n\n* Mon Jul 19 1999 Lamar Owen <[email protected]>\n- Correct some file misappropriations:\n-- /usr/lib/pgsql was in wrong package\n-- createlang, destroylang, and vacuumdb now in main package\n-- ipcclean now in server subpackage\n-- The static libraries are now in the devel subpackage\n-- /usr/lib/plpgsql.so and /usr/lib/pltcl.so now in server \n- Cleaned up some historical artifacts for readability -- left references\n- to these artifacts in the changelog\n\n* Sat Jun 19 1999 Thomas Lockhart <[email protected]>\n- deprecate clients rpm, and define a server rpm for the backend\n- version 6.5\n- updated pgaccess to version 0.96\n- build ODBC interface library\n- split tcl and ODBC packages into separate binary rpms\n\n* Sat Apr 17 1999 Jeff Johnson <[email protected]>\n- exclude alpha for Red Hat 
6.0.\n\n* Sun Mar 21 1999 Cristian Gafton <[email protected]> \n- auto rebuild in the new build environment (release 2)\n\n* Wed Feb 03 1999 Cristian Gafton <[email protected]>\n- version 6.4.2\n- get rid of the -data package (shipping it was a BAD idea)\n\n* Sat Oct 10 1998 Cristian Gafton <[email protected]>\n- strip all binaries\n- use defattr in all packages\n- updated pgaccess to version 0.90\n- /var/lib/pgsql/pg_pwd should not be 666\n\n* Sun Jun 21 1998 Jeff Johnson <[email protected]>\n- create /usr/lib/pgsql (like /usr/include/pgsql)\n- resurrect libpq++.so*\n- fix name problem in startup-script (problem #533)\n\n* Fri Jun 19 1998 Jeff Johnson <[email protected]>\n- configure had \"--prefix=$RPM_BUILD_ROOT/usr\"\n- move all include files below /usr/include/pgsql.\n- resurrect perl client file lists.\n\n* Tue May 05 1998 Prospector System <[email protected]>\n- translations modified for de, fr, tr\n\n* Tue May 05 1998 Cristian Gafton <[email protected]>\n- build on alpha\n\n* Sat May 02 1998 Cristian Gafton <[email protected]>\n- enhanced initscript\n\n* Tue Apr 21 1998 Cristian Gafton <[email protected]>\n- finally v6.3.2 is here !\n\n* Wed Apr 15 1998 Cristian Gafton <[email protected]>\n- added the include files in the devel package\n\n* Wed Apr 01 1998 Cristian Gafton <[email protected]>\n- finally managed to get a patch for 6.3.1 to make it install corectly. Boy,\n what a mess ! ;-(\n\n* Tue Mar 03 1998 Cristian Gafton <[email protected]>\n- upgraded tp 6.3 release\n\n* Sat Feb 28 1998 Cristian Gafton <[email protected]>\n- upgraded to the latest snapshot\n- splitted yet one more subpackage: clients\n\n* Tue Jan 20 1998 Cristian Gafton <[email protected]>\n- the installed devel-library is no longer stripped (duh!)\n- added the 7 patches found on the ftp.postgresql.org site\n- corrected the -rh patch to patch configure.in rather than configure; we\n now use autoconf\n- added a patch to fix the broken psort function\n- build TCL and C++ libraries as well\n- updated pgaccess to version 0.76\n\n* Thu Oct 23 1997 Cristian Gafton <[email protected]>\n- cleaned up the spec file for version 6.2.1\n- splited devel subpackage\n- added chkconfig support in %preun and %post\n- added optional data package\n\n* Mon Oct 13 1997 Elliot Lee <[email protected]> 6.2-3\n- Fixed lots of bung-ups in the spec file, made it FSSTND compliant, etc.\n- Removed jdbc package, jdk isn't stable yet as far as what goes where.\n- Updated to v 6.2.1\n\n* Thu Oct 9 1997 10:58:14 dan\n- on pre-installation script now the `data' dir is renamed to\n `data.rpmorig' (no more wild deletions!).\n- added `postgresql-jdbc' sub-package.\n- postgresql.sh script: defined function `add_to_path()' and\n changed the location of postgresql.jar in the CLASSPATH.\n\n* Sat Oct 4 1997 10:27:43 dan\n- updated to version 6.2.\n- added auto installation's scripts (pre, post, preun, postun)\n\n-----------------------------------------------------------------------------\nLamar Owen \nWGCR Internet Radio \n1 Peter 4:11\n",
"msg_date": "Mon, 20 Sep 1999 18:35:57 -0400",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 6.5.2 RPMS available."
}
] |
[
{
"msg_contents": "Hi,\n\nI got a few reports from users that postmaster disappears for unknown\nreason. Inspecting the postmaster log, I found that postmaster exited at:\n\nif (select(nSockets, &rmask, &wmask, (fd_set *) NULL,\n\t\t (struct timeval *) NULL) < 0)\n{\n\tif (errno == EINTR)\n\t\tcontinue;\n\tfprintf(stderr, \"%s: ServerLoop: select failed: %s\\n\",\n\t\t\tprogname, strerror(errno));\n\treturn STATUS_ERROR; <-- here\n}\n\nIn this case errno=ECHILD has been returned that makes postmaster\nexiting. This could happen if SIGCHLD raised between select() call and\nthe next if (errno=...) statement. One of the solution would be\nignoring ECHILD as well as EINTR. Included are patches for this. If\nthere's no objection, I will commit them to both stable and current\ntree.\n\n*** postgresql-6.5.1/src/backend/postmaster/postmaster.c~\tThu Jul 8 02:17:48 1999\n--- postgresql-6.5.1/src/backend/postmaster/postmaster.c\tThu Sep 9 10:14:30 1999\n***************\n*** 709,719 ****\n \t\tif (select(nSockets, &rmask, &wmask, (fd_set *) NULL,\n \t\t\t\t (struct timeval *) NULL) < 0)\n \t\t{\n! \t\t\tif (errno == EINTR)\n \t\t\t\tcontinue;\n! \t\t\tfprintf(stderr, \"%s: ServerLoop: select failed: %s\\n\",\n \t\t\t\t\tprogname, strerror(errno));\n! \t\t\treturn STATUS_ERROR;\n \t\t}\n \n \t\t/*\n--- 709,729 ----\n \t\tif (select(nSockets, &rmask, &wmask, (fd_set *) NULL,\n \t\t\t\t (struct timeval *) NULL) < 0)\n \t\t{\n! \t\t\tswitch(errno) {\n! \t\t\tcase EINTR:\n \t\t\t\tcontinue;\n! \t\t\t\tbreak;\n! \t\t\tcase ECHILD:\n! \t\t\t\tfprintf(stderr, \"%s: ServerLoop: ignoring ECHILD\\n\",\n! \t\t\t\t\tprogname);\n! \t\t\t\tcontinue;\n! \t\t\t\tbreak;\n! \t\t\tdefault:\n! \t\t\t\tfprintf(stderr, \"%s: ServerLoop: select failed: %s\\n\",\n \t\t\t\t\tprogname, strerror(errno));\n! \t\t\t\treturn STATUS_ERROR;\n! \t\t\t\tbreak;\n! \t\t\t}\n \t\t}\n \n \t\t/*\n",
"msg_date": "Tue, 21 Sep 1999 13:26:40 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "postmaster disappears"
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> In this case errno=ECHILD has been returned that makes postmaster\n> exiting. This could happen if SIGCHLD raised between select() call and\n> the next if (errno=...) statement. One of the solution would be\n> ignoring ECHILD as well as EINTR. Included are patches for this.\n\nHmm. What you are saying, I guess, is that SIGCHLD is raised,\nreaper() executes, and then when control continues in the main loop\nthe errno left over from reaper()'s last kernel call is what's seen,\ninstead of the one returned by signal().\n\nSeems to me that the correct fix is to have reaper() save and restore\nthe outer value of errno, not to hack the main line to ignore the\nmost probable state left over from reaper(). Otherwise you still choke\nif some other value gets returned from whatever call reaper() does last.\nMoreover, you're not actually checking what the select() did unless\nyou do it that way.\n\nCurious that this sort of problem is not seen more often --- I wonder\nif most Unixes arrange to save/restore errno around a signal handler\nfor you?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 21 Sep 1999 09:59:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] postmaster disappears "
},
{
"msg_contents": ">Tatsuo Ishii <[email protected]> writes:\n>> In this case errno=ECHILD has been returned that makes postmaster\n>> exiting. This could happen if SIGCHLD raised between select() call and\n>> the next if (errno=...) statement. One of the solution would be\n>> ignoring ECHILD as well as EINTR. Included are patches for this.\n>\n>Hmm. What you are saying, I guess, is that SIGCHLD is raised,\n>reaper() executes, and then when control continues in the main loop\n>the errno left over from reaper()'s last kernel call is what's seen,\n>instead of the one returned by signal().\n\nRight.\n\n>Seems to me that the correct fix is to have reaper() save and restore\n>the outer value of errno, not to hack the main line to ignore the\n>most probable state left over from reaper(). Otherwise you still choke\n>if some other value gets returned from whatever call reaper() does\n>last.\n\nNot sure. reaper() may be called while reaper() is executing if a new\nSIGCHLD is raised. How do you handle this case?\n\n>Moreover, you're not actually checking what the select() did unless\n>you do it that way.\n\nSorry, I don't understand this. Can you explain, please?\n\n>Curious that this sort of problem is not seen more often --- I wonder\n>if most Unixes arrange to save/restore errno around a signal handler\n>for you?\n\nMaybe because the situation I have pointed out is relatively rare.\n---\nTatsuo Ishii\n\n",
"msg_date": "Wed, 22 Sep 1999 00:27:55 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] postmaster disappears "
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n>> Seems to me that the correct fix is to have reaper() save and restore\n>> the outer value of errno, not to hack the main line to ignore the\n>> most probable state left over from reaper(). Otherwise you still choke\n>> if some other value gets returned from whatever call reaper() does\n>> last.\n\n> Not sure. reaper() may be called while reaper() is executing if a new\n> SIGCHLD is raised. How do you handle this case?\n\nNo, because the signal is disabled when the trap is taken, and then not\nre-enabled until reaper() does pqsignal() just before exiting. We don't\nreally care if a new signal recursively interrupts reaper() at that\npoint, but bad things would happen if there were a recursive interrupt\nearlier while we were diddling the list of children. (Cf. comments in\nthe existing code where SIGCHLD is disabled while we add a child...)\n\nIn any case, it's not a problem: if each level of reaper does\n\n\treaper()\n\t{\n\t\tint save_errno = errno;\n\n\t\t...\n\n\t\terrno = save_errno; /* restore errno of interrupted code */\n\t}\n\nthen a recursive interrupt might be saving/restoring the errno value\nthat existed in the next outer interrupt, rather than the value that\nis truly at the outer level, but that's what stacks are for ;-)\n\n>> Moreover, you're not actually checking what the select() did unless\n>> you do it that way.\n\n> Sorry, I don't understand this. Can you explain, please?\n\nIf you don't have the signal routine save/restore errno, then (when this\nproblem occurs) you are not seeing the errno returned by the select(),\nbut one left over from reaper()'s activity. If the select() failed, you\nwon't know it.\n\n>> Curious that this sort of problem is not seen more often --- I wonder\n>> if most Unixes arrange to save/restore errno around a signal handler\n>> for you?\n\n> Maybe because the situation I have pointed out is relatively rare.\n\nWell, the window for trouble is awfully tiny in this particular code of\nours, but it might be larger in other programs. Yet I don't think I've\never heard a programming recommendation to save/restore errno in signal\nhandlers...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 21 Sep 1999 21:07:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] postmaster disappears "
},
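For illustration of the fix Tom describes above, a minimal self-contained version of the save/restore pattern might look like this. It is a hedged sketch, not the actual postmaster code: the handler name mirrors the real reaper(), but the body is reduced to a plain waitpid() loop and CleanupProc() appears only in a comment.

#include <errno.h>
#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>

static void
reaper(int signo)
{
	int		save_errno = errno;	/* preserve errno of the interrupted code */
	int		status;

	(void) signo;

	/* collect all exited children without blocking */
	while (waitpid(-1, &status, WNOHANG) > 0)
	{
		/* the real postmaster would run its CleanupProc() logic here */
	}

	errno = save_errno;		/* restore before returning to the main line */
}

With this in place, the errno that ServerLoop inspects after a failed select() is guaranteed to be the one select() itself set, no matter which kernel calls the handler made in between.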
{
"msg_contents": ">> Not sure. reaper() may be called while reaper() is executing if a new\n>> SIGCHLD is raised. How do you handle this case?\n>\n>No, because the signal is disabled when the trap is taken, and then not\n>re-enabled until reaper() does pqsignal() just before exiting. We don't\n\nYou are correct. I had wrong impression about signal handling.\n\n>>> Moreover, you're not actually checking what the select() did unless\n>>> you do it that way.\n>\n>> Sorry, I don't understand this. Can you explain, please?\n>\n>If you don't have the signal routine save/restore errno, then (when this\n>problem occurs) you are not seeing the errno returned by the select(),\n>but one left over from reaper()'s activity. If the select() failed, you\n>won't know it.\n\nOh, I see your point.\n\n>>> Curious that this sort of problem is not seen more often --- I wonder\n>>> if most Unixes arrange to save/restore errno around a signal handler\n>>> for you?\n>\n>> Maybe because the situation I have pointed out is relatively rare.\n>\n>Well, the window for trouble is awfully tiny in this particular code of\n>ours, but it might be larger in other programs.\n\nThough it seems rare, we certainly have had this kind of reports from\nusers for a while. Since disappearing postmaster is a really bad\nthing, I love to see solutions for this.\n\n>Yet I don't think I've\n>ever heard a programming recommendation to save/restore errno in signal\n>handlers...\n\nAgreed. I don't like this way.\n\nI asked a Unix guru, and got a suggestion that we do not need to call\nwait() (and CleanupProc()) inside the signal handler. Instead we could\nhave a null signal hander (it just calls pqsignal()) for SIGCHLD. If\nselect() returns EINTR then we just call wait() and\nCleanupProc(). Moreover this would eliminate sigprocmask() or\nsigblock() calls currently done to avoid race conditions before going\ninto the critical region. Of course we have to call wait() and\nCleanupProc() before select() to make sure that we have no waiting\nchildren.\n\nAnother way would be blocking SIGCHILD before calling select(). In\nthis case appropriate time out setting for select() is necessary,\nthough.\n--\nTatsuo Ishii\n",
"msg_date": "Wed, 22 Sep 1999 13:49:09 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] postmaster disappears "
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n>> Yet I don't think I've ever heard a programming recommendation to\n>> save/restore errno in signal handlers...\n\n> Agreed. I don't like this way.\n\nHmm, I don't like your patch and you don't like mine. Time to redesign\nrather than patch ;-)\n\n> I asked a Unix guru, and got a suggestion that we do not need to call\n> wait() (and CleanupProc()) inside the signal handler. Instead we could\n> have a null signal hander (it just calls pqsignal()) for SIGCHLD. If\n> select() returns EINTR then we just call wait() and\n> CleanupProc(). Moreover this would eliminate sigprocmask() or\n> sigblock() calls currently done to avoid race conditions before going\n> into the critical region. Of course we have to call wait() and\n> CleanupProc() before select() to make sure that we have no waiting\n> children.\n\nThis looks like it could be a really clean solution. In fact, there'd\nbe no need to check for EINTR from select(); we could just fall through,\nknowing that the reaping will be done as soon as we loop around to the\ntop of the loop. The code becomes just\n\n\tfor (;;) {\n\t\treap;\n\t\tselect;\n\t\thandle any input found by select;\n\t}\n\nDo we even need a signal handler at all for ECHILD? I suppose the\nselect might not get interrupted (at least on some platforms) if there\nisn't one.\n\nActually I guess there still is a race condition: there is a window\nbetween the last wait() of the reap loop and the select() wherein an\nECHILD won't be serviced right away, because we hit the select() before\nnoticing it. We could maybe use a timeout on the select to fix that.\nDon't really like it though, since the timeout couldn't be very long,\nbut we don't want the postmaster wasting cycles when there's nothing\nto do. Is there another way around this?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 22 Sep 1999 10:38:20 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] postmaster disappears "
},
{
"msg_contents": "> Do we even need a signal handler at all for ECHILD? I suppose the\n> select might not get interrupted (at least on some platforms) if there\n> isn't one.\n> \n> Actually I guess there still is a race condition: there is a window\n> between the last wait() of the reap loop and the select() wherein an\n> ECHILD won't be serviced right away, because we hit the select() before\n> noticing it. We could maybe use a timeout on the select to fix that.\n> Don't really like it though, since the timeout couldn't be very long,\n> but we don't want the postmaster wasting cycles when there's nothing\n> to do. Is there another way around this?\n\n\nHere is code I use for reaping dead child processes. Under SysV, if you\nsay you want to ignore child processes, they just disappear, but on BSD,\nthe children stay as zombies. This fixes that.\n\nSeems you need to define a singnal handler, and just put select() in a\nloop:\n\n\twhile (1)\n\t\tif (select(...) != -1 || errno != EINTR)\n\t\t\tbreak;\n\nI see you are are loosing your error inside the singnal handler. Seems\nyou may have to save/restore errno.\n\n---------------------------------------------------------------------------\n\n/*\n *\tFrom: [email protected] (W. Richard Stevens) \n *\tNewsgroups: comp.unix.bsd.misc,comp.unix.bsd.bsdi.misc\n *\tSubject: Re: BSD 4.4: Preventing zombies SIGCHLD\n *\tDate: 19 Dec 1995 13:24:39 GMT\n */\n\nvoid\nreapchild(int signo)\n{\n pid_t pid;\n int stat;\n \n while ( (pid = waitpid(-1, &stat, WNOHANG)) > 0) {\n /* handle \"pid\" and \"stat\" */\n }\n if (pid < 0)\n \t;/* error */\n\n\t/* we are done playing the current sound */\n\tcur_sound_id = -1;\n \n /* return value is 0 if no more children */\n return;\n}\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 22 Sep 1999 12:39:14 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] postmaster disappears"
}
] |
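Putting Tatsuo's null-handler suggestion and Tom's loop sketch from this thread together, the restructured server loop might look roughly like the following. This is a hedged sketch under assumed names (child_exited, server_loop, listen_fd are illustrative), not the code that was eventually committed.

#include <errno.h>
#include <signal.h>
#include <stddef.h>
#include <sys/select.h>
#include <sys/types.h>
#include <sys/wait.h>

/*
 * Do-nothing handler: its only job is to make select() return EINTR.
 * On SysV-signal platforms it would also have to re-install itself,
 * which is what pqsignal() takes care of in the real backend.
 */
static void
child_exited(int signo)
{
	(void) signo;
}

static int
server_loop(int listen_fd)
{
	fd_set		rmask;

	signal(SIGCHLD, child_exited);

	for (;;)
	{
		pid_t		pid;
		int			status;

		/* reap synchronously, outside any signal handler */
		while ((pid = waitpid(-1, &status, WNOHANG)) > 0)
		{
			/* the CleanupProc() equivalent would run here */
		}

		FD_ZERO(&rmask);
		FD_SET(listen_fd, &rmask);
		if (select(listen_fd + 1, &rmask, NULL, NULL, NULL) < 0)
		{
			if (errno == EINTR)
				continue;	/* a child died; loop around and reap it */
			return -1;		/* a genuine select() failure */
		}
		/* ... accept and service the new connection here ... */
	}
}

As Tom points out, a child that dies between the waitpid() loop and the select() is not noticed until select() next returns, so a real implementation would either put a timeout on the select() or block SIGCHLD except while sleeping; it would also install the handler without SA_RESTART semantics so that select() reliably returns EINTR.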
[
{
"msg_contents": "One way around this bug is to create a SQL function\nwhich returns now() and use it as the default value:\n\n1. create function mynow() returns datetime\n as 'SELECT now()::datetime' LANGUAGE 'SQL';\n\n2. create table test (a datetime default mynow(), b \nint4);\n\nNow things should work:\n\ninsert into test (b) values (1);\ninsert into test (b) values (2);\n\nselect * from test;\na |b\n----------------------------+-\nTue Sep 21 01:05:02 1999 EDT|1\nTue Sep 21 01:05:08 1999 EDT|2\n(2 rows) \n\nHope this helps, \n\nMike Mascari\n([email protected])\n\n--- Oleg Bartunov <[email protected]> wrote:\n> Hi,\n> \n> how I could create table with datetime field default\n> to 'now'::text in \n> a way Jan did in his shoes rule example ?\n> \n> If I do:\n> test=> create table test ( a datetime default 'now',\n> b int4);\n> CREATE\n> test=> insert into test (b) values (1);\n> INSERT 1677899 1\n> test=> insert into test (b) values (2);\n> INSERT 1677900 1\n> test=> select * from test;\n> a |b\n> ----------------------------+-\n> Tue 21 Sep 01:48:27 1999 MSD|1\n> Tue 21 Sep 01:48:27 1999 MSD|2\n> (2 rows)\n> \n> I always get datetime of the moment I created the\n> table, but I'd like\n> to have datetime of moment I insert. \n> \n> \tRegards,\n> \n> \t\tOleg\n> \n__________________________________________________\nDo You Yahoo!?\nBid and sell for free at http://auctions.yahoo.com\n",
"msg_date": "Mon, 20 Sep 1999 22:14:33 -0700 (PDT)",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] create table and default 'now' problem ?"
},
{
"msg_contents": "On Mon, 20 Sep 1999, Mike Mascari wrote:\n\n> One way around this bug is to create a SQL function\n> which returns now() and use it as the default value:\n> \n> 1. create function mynow() returns datetime\n> as 'SELECT now()::datetime' LANGUAGE 'SQL';\n> \n> 2. create table test (a datetime default mynow(), b \n> int4);\n> \n> Now things should work:\n> \n> insert into test (b) values (1);\n> insert into test (b) values (2);\n> \n> select * from test;\n> a |b\n> ----------------------------+-\n> Tue Sep 21 01:05:02 1999 EDT|1\n> Tue Sep 21 01:05:08 1999 EDT|2\n> (2 rows) \n> \n> Hope this helps, \n\nWhy the 'create function'?\n\nhardware=> create table test_table ( a int4, ts datetime default 'now' );\nCREATE\nhardware=> insert into test_table values ( 1 ) ;\nINSERT 115445 1\nhardware=> select * from test_table;\na|ts \n-+----------------------------\n1|Tue Sep 21 02:00:50 1999 EDT\n(1 row)\n\n\n> \n> Mike Mascari\n> ([email protected])\n> \n> --- Oleg Bartunov <[email protected]> wrote:\n> > Hi,\n> > \n> > how I could create table with datetime field default\n> > to 'now'::text in \n> > a way Jan did in his shoes rule example ?\n> > \n> > If I do:\n> > test=> create table test ( a datetime default 'now',\n> > b int4);\n> > CREATE\n> > test=> insert into test (b) values (1);\n> > INSERT 1677899 1\n> > test=> insert into test (b) values (2);\n> > INSERT 1677900 1\n> > test=> select * from test;\n> > a |b\n> > ----------------------------+-\n> > Tue 21 Sep 01:48:27 1999 MSD|1\n> > Tue 21 Sep 01:48:27 1999 MSD|2\n> > (2 rows)\n> > \n> > I always get datetime of the moment I created the\n> > table, but I'd like\n> > to have datetime of moment I insert. \n> > \n> > \tRegards,\n> > \n> > \t\tOleg\n> > \n> __________________________________________________\n> Do You Yahoo!?\n> Bid and sell for free at http://auctions.yahoo.com\n> \n> ************\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 21 Sep 1999 03:01:58 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] create table and default 'now' problem ?"
},
{
"msg_contents": "\nIgnore last...I hadn't clued into the 'same time as table created' part of\nhis message...\n\nThomas...is that not a 'bug' with the datetime/timestamp handling of\nDEFAULT? *raised eyebrow*\n\n\n\nOn Mon, 20 Sep 1999, Mike Mascari wrote:\n\n> One way around this bug is to create a SQL function\n> which returns now() and use it as the default value:\n> \n> 1. create function mynow() returns datetime\n> as 'SELECT now()::datetime' LANGUAGE 'SQL';\n> \n> 2. create table test (a datetime default mynow(), b \n> int4);\n> \n> Now things should work:\n> \n> insert into test (b) values (1);\n> insert into test (b) values (2);\n> \n> select * from test;\n> a |b\n> ----------------------------+-\n> Tue Sep 21 01:05:02 1999 EDT|1\n> Tue Sep 21 01:05:08 1999 EDT|2\n> (2 rows) \n> \n> Hope this helps, \n> \n> Mike Mascari\n> ([email protected])\n> \n> --- Oleg Bartunov <[email protected]> wrote:\n> > Hi,\n> > \n> > how I could create table with datetime field default\n> > to 'now'::text in \n> > a way Jan did in his shoes rule example ?\n> > \n> > If I do:\n> > test=> create table test ( a datetime default 'now',\n> > b int4);\n> > CREATE\n> > test=> insert into test (b) values (1);\n> > INSERT 1677899 1\n> > test=> insert into test (b) values (2);\n> > INSERT 1677900 1\n> > test=> select * from test;\n> > a |b\n> > ----------------------------+-\n> > Tue 21 Sep 01:48:27 1999 MSD|1\n> > Tue 21 Sep 01:48:27 1999 MSD|2\n> > (2 rows)\n> > \n> > I always get datetime of the moment I created the\n> > table, but I'd like\n> > to have datetime of moment I insert. \n> > \n> > \tRegards,\n> > \n> > \t\tOleg\n> > \n> __________________________________________________\n> Do You Yahoo!?\n> Bid and sell for free at http://auctions.yahoo.com\n> \n> ************\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 21 Sep 1999 03:03:58 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] create table and default 'now' problem ?"
},
{
"msg_contents": "> > how I could create table with datetime field default\n> > to 'now'::text in a way Jan did in his shoes rule example ?\n> > If I do:\n> > test=> create table test ( a datetime default 'now',\n> > b int4);\n> > CREATE\n> > I always get datetime of the moment I created the\n> > table, but I'd like to have datetime of moment I insert.\n> One way around this bug is to create a SQL function\n> which returns now() and use it as the default value:\n\nNot necessary, though this does work well. A simpler way is to\nactually do what Oleg asks about:\n\n create table test ( a datetime default text 'now',...)\n\nor\n\n create table test ( a datetime default 'now'::text,...)\n\nwhich should force the string to *stay* as a string, rather than\ngetting converted to a date value when the table is created. Once it\nis forced to be a string, then it will be converted at insert time\ninstead.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 21 Sep 1999 06:14:12 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] create table and default 'now' problem ?"
},
{
"msg_contents": "> Why the 'create function'?\n> hardware=> insert into test_table values ( 1 ) ;\n> hardware=> select * from test_table;\n> 1|Tue Sep 21 02:00:50 1999 EDT\n\nRight. And if you run the insert again, you'll see the exact same time\ninserted. But if you force 'now' to be a true string type (rather than\nleaving it unspecified) then the evaluation will happen at insert\ntime.\n\nThe behavior is \"correct\" for most values of most types, but falls\ndown when a seemingly constant value, like a fixed string N-O-W,\nactually is not a constant but rather something which changes value\ndepending on when the query runs. In the long run, we need to have a\nnew attribute associated with data types which tells whether constants\nhave that nature (most won't). In the meantime, this is a feature, and\nhas been since Vadim (?) implemented DEFAULT ;)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 21 Sep 1999 06:28:03 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] create table and default 'now' problem ?"
},
{
"msg_contents": "On Tue, 21 Sep 1999, Thomas Lockhart wrote:\n\n> Date: Tue, 21 Sep 1999 06:14:12 +0000\n> From: Thomas Lockhart <[email protected]>\n> To: Mike Mascari <[email protected]>\n> Cc: Oleg Bartunov <[email protected]>, [email protected]\n> Subject: Re: [HACKERS] create table and default 'now' problem ?\n> \n> > > how I could create table with datetime field default\n> > > to 'now'::text in a way Jan did in his shoes rule example ?\n> > > If I do:\n> > > test=> create table test ( a datetime default 'now',\n> > > b int4);\n> > > CREATE\n> > > I always get datetime of the moment I created the\n> > > table, but I'd like to have datetime of moment I insert.\n> > One way around this bug is to create a SQL function\n> > which returns now() and use it as the default value:\n> \n> Not necessary, though this does work well. A simpler way is to\n> actually do what Oleg asks about:\n> \n> create table test ( a datetime default text 'now',...)\n> \n\nThis works ! Thanks \n\n> or\n> \n> create table test ( a datetime default 'now'::text,...)\n\nParser complains:\nERROR: parser: parse error at or near \"'\"\n\nDoes this considered as a bug or feature ?\n\n\n\tOleg\n\n> \n> which should force the string to *stay* as a string, rather than\n> getting converted to a date value when the table is created. Once it\n> is forced to be a string, then it will be converted at insert time\n> instead.\n> \n> - Thomas\n> \n> -- \n> Thomas Lockhart\t\t\t\[email protected]\n> South Pasadena, California\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Tue, 21 Sep 1999 16:24:31 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] create table and default 'now' problem ?"
},
{
"msg_contents": ">>>>>> how I could create table with datetime field default\n>>>>>> to 'now'::text in a way Jan did in his shoes rule example ?\n\nA couple of comments on this thread:\n\n1. Seems to me that the easy, reliable way is just to use the\nnow() function --- you don't have to make one, it's built in:\n\n\tcreate table test ( a datetime default now(), b int);\n\nThis avoids all the issues about when constants get coerced, and\nprobably ought to be what we recommend to newbies. However,\nthis is certainly a workaround for an existing bug.\n\n2. I believe that most of the problem with premature constant coercion\nin default values is coming from the bizarre way that default values get\nentered into the database. StoreAttrDefault essentially converts the\nparsed default-value tree back to text, constructs a SELECT statement\nusing the text, parses that, and examines the resulting parsetree.\nYech. If it were done carefully it might work, but it's not; the\nreverse parser does not do quoting carefully, does not do type coercion\ncarefully, and fails to handle large parts of the expression syntax at\nall. (I've ranted about this before ... check the pghackers archives.)\n\nI have a to-do list item to rip all that code out and do it over again\nright. Might or might not get to it for 6.6 --- does someone else want\nto tackle it?\n\n3. Yes, this is a bug too:\n\n>> create table test ( a datetime default 'now'::text,...)\n> Parser complains:\n> ERROR: parser: parse error at or near \"'\"\n> Does this considered as a bug or feature ?\n\nSee above --- reverse-parsing of this construct is wrong. I have\nno intention of fixing the reverse parser; I want to get rid of it\nentirely.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 21 Sep 1999 09:40:40 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] create table and default 'now' problem ? "
},
{
"msg_contents": "Thank you Tom for explanation. It's not very bothered me as far as I have\nmany workarounds suggested in mailing list. But I wondering because\n'now'::text works as expected when I create view\n\ncreate view www_auth as select a.account as user_name, a.password, b.nick as \n group_name\n from users a, resources b, privilege_user_map c\n where a.auth_id = c.auth_id and b.res_id = c.res_id and \n (a.account_valid_until is null or \n a.account_valid_until > datetime('now'::text)) and \n c.perm_id = 1;\n\n\tRegards,\n\t\tOleg\n\nOn Tue, 21 Sep 1999, Tom Lane wrote:\n\n> Date: Tue, 21 Sep 1999 09:40:40 -0400\n> From: Tom Lane <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: Thomas Lockhart <[email protected]>,\n> [email protected]\n> Subject: Re: [HACKERS] create table and default 'now' problem ? \n> \n> >>>>>> how I could create table with datetime field default\n> >>>>>> to 'now'::text in a way Jan did in his shoes rule example ?\n> \n> A couple of comments on this thread:\n> \n> 1. Seems to me that the easy, reliable way is just to use the\n> now() function --- you don't have to make one, it's built in:\n> \n> \tcreate table test ( a datetime default now(), b int);\n> \n> This avoids all the issues about when constants get coerced, and\n> probably ought to be what we recommend to newbies. However,\n> this is certainly a workaround for an existing bug.\n> \n> 2. I believe that most of the problem with premature constant coercion\n> in default values is coming from the bizarre way that default values get\n> entered into the database. StoreAttrDefault essentially converts the\n> parsed default-value tree back to text, constructs a SELECT statement\n> using the text, parses that, and examines the resulting parsetree.\n> Yech. If it were done carefully it might work, but it's not; the\n> reverse parser does not do quoting carefully, does not do type coercion\n> carefully, and fails to handle large parts of the expression syntax at\n> all. (I've ranted about this before ... check the pghackers archives.)\n> \n> I have a to-do list item to rip all that code out and do it over again\n> right. Might or might not get to it for 6.6 --- does someone else want\n> to tackle it?\n> \n> 3. Yes, this is a bug too:\n> \n> >> create table test ( a datetime default 'now'::text,...)\n> > Parser complains:\n> > ERROR: parser: parse error at or near \"'\"\n> > Does this considered as a bug or feature ?\n> \n> See above --- reverse-parsing of this construct is wrong. I have\n> no intention of fixing the reverse parser; I want to get rid of it\n> entirely.\n> \n> \t\t\tregards, tom lane\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Tue, 21 Sep 1999 18:09:34 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] create table and default 'now' problem ? "
},
{
"msg_contents": "Oleg Bartunov <[email protected]> writes:\n> Thank you Tom for explanation. It's not very bothered me as far as I have\n> many workarounds suggested in mailing list. But I wondering because\n> 'now'::text works as expected when I create view\n\nYes, it's just the context of a DEFAULT expression that has these\nproblems. (Actually, it looks like constraints --- CHECK() expressions\n--- are handled in the same bogus way, but we don't seem to get as many\ngripes about them...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 21 Sep 1999 10:25:16 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] create table and default 'now' problem ? "
}
] |
[
{
"msg_contents": ">\n> Hi , Jan\n>\n> my name is Max .\n\nHi Max,\n\n>\n> I have contributed to SPI interface ,\n> that with external Trigger try to make\n> a referential integrity.\n>\n> If I can Help , in something ,\n> I'm here .\n>\n\n You're welcome.\n\n I've CC'd the hackers list because we might get some ideas\n from there too (and to surface once in a while - Bruce\n already missed me).\n\n Currently I'm very busy for serious work so I don't find\n enough spare time to start on such a big change to\n PostgreSQL. But I'd like to give you an overview of what I\n have in mind so far so you can decide if you're able to help.\n\n Referential integrity (RI) is based on constraints defined in\n the schema of a database. There are some different types of\n constraints:\n\n 1. Uniqueness constraints.\n\n 2. Foreign key constraints that ensure that a key value used\n in an attribute exists in another relation. One\n constraint must ensure you're unable to INSERT/UPDATE to\n a value that doesn't exist, another one must prevent\n DELETE on a referenced key item or that it is changed\n during UPDATE.\n\n 3. Cascading deletes that let rows referring to a key follow\n on DELETE silently.\n\n Even if not defined in the standard (AFAIK) there could be\n others like letting references automatically follow on UPDATE\n to a key value.\n\n All constraints can be enabled and/or default to be deferred.\n That means, that the RI checks aren't performed when they are\n triggerd. Instead, they're checked at transaction end or if\n explicitly invoked by some special statement. This is really\n important because someone must be able to setup cyclic RI\n checks that could never be satisfied if the checks would be\n performed immediately. The major problem on this is the\n amount of data affected until the checks must be performed.\n The number of statements executed, that trigger such deferred\n constraints, shouldn't be limited. And one single\n INSERT/UPDATE/DELETE could affect thousands of rows.\n\n Due to these problems I thought, it might not be such a good\n idea to remember CTID's or the like to get back OLD/NEW rows\n at the time the constraints are checked. Instead I planned to\n misuse the rule system for it. Unfortunately, the rule system\n has damned tricky problems itself when it comes to having-,\n distinct and other clauses and extremely on aggregates and\n subselects. These problems would have to get fixed first. So\n it's a solution that cannot be implemented right now.\n\n Fallback to CTID remembering though. There are problems too\n :-(. Let's enhance the trigger mechanism with a deferred\n feature. First this requires two additional bool attributes\n in the pg_trigger relation that tell if this trigger is\n deferrable and if it is deferred by default. While at it we\n should add another bool that tells if the trigger is enabled\n (ALTER TRIGGER {ENABLE|DISABLE} trigger).\n\n Second we need an internal list of triggers, that are\n currently DEFINED AS DEFERRED. Either because they default to\n it, or the user explicitly asked to deferr it.\n\n Third we need an internal list of triggers that must be\n invoked later because at the time an event occured where they\n should have been triggered, they appeared in the other list\n and their execution is delayed until transaction end or\n explicit execution. 
This list must remember the OID of the\n trigger to invoke (to identify the procedure and the\n arguments), the relation that caused the trigger and the\n CTID's of the OLD and NEW row.\n\n That last list could grow extremely! Think of a trigger\n that's executing commands over SPI which in turn activate\n deferred triggers. Since the order of trigger execution is\n very important for RI, I can't see any chance to\n simplify/condense this information. Thus it is 16 bytes at\n least per deferred trigger call (2 OID's plus 2 CTID's). I\n think one or more temp files would fit best for this.\n\n A last tricky point is if one of a bunch of deferred triggers\n is explicitly called for execution. At this time, the entries\n for it in the temp file(s) must get processed and marked\n executed (maybe by overwriting the triggers OID with the\n invalid OID) while other trigger events still have to get\n recorded.\n\n Needless to say that reading thousands of those entries just\n to find a few isn't good on performance. But better have this\n special case slow that dealing with hundreds of temp files or\n other overhead slowing down the usual case where ALL deferred\n triggers get called at transaction end.\n\n Trigger invocation is simple now - fetch the OLD and NEW rows\n by CTID and execute the trigger as done by the trigger\n manager. Oh - well - vacuum shouldn't touch relations where\n deferred triggers are outstanding. Might require some\n special lock entry - Vadim?\n\n Did I miss something?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n",
"msg_date": "Tue, 21 Sep 1999 10:37:21 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: Referential Integrity In PostgreSQL"
},
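To make the bookkeeping Jan describes above concrete, the per-event record (two OIDs plus two CTIDs) and the replay at transaction end could be sketched as follows. Every name here is a hypothetical stand-in -- the real ItemPointerData lives in the backend headers, and heap_fetch() appears only in a comment -- so treat this as an illustration of the idea, not the eventual implementation.

typedef unsigned int Oid;		/* stand-in for the real postgres.h typedef */

typedef struct ItemPointerData	/* stand-in for the real 6-byte CTID */
{
	unsigned short bi_hi;
	unsigned short bi_lo;
	unsigned short offset;
} ItemPointerData;

typedef struct DeferredTriggerEvent
{
	Oid				trigger_oid;	/* identifies the proc and its arguments */
	Oid				relation_oid;	/* relation that caused the trigger */
	ItemPointerData	old_ctid;		/* OLD row; unset for INSERT */
	ItemPointerData	new_ctid;		/* NEW row; unset for DELETE */
} DeferredTriggerEvent;

/*
 * At transaction end, or at SET CONSTRAINTS ... IMMEDIATE for a subset
 * of the triggers, the queue is replayed strictly in recording order.
 */
void
deferred_trigger_replay(DeferredTriggerEvent *queue, int nevents)
{
	int			i;

	for (i = 0; i < nevents; i++)
	{
		if (queue[i].trigger_oid == 0)	/* invalid OID: already executed */
			continue;

		/*
		 * Here the real code would heap_fetch() the OLD and NEW rows by
		 * their CTIDs and call the trigger function just as the trigger
		 * manager does for immediate triggers.
		 */
		queue[i].trigger_oid = 0;		/* mark the entry executed */
	}
}

This matches the 16-bytes-at-least estimate (two 4-byte OIDs plus two 6-byte CTIDs) and the idea of marking already-fired entries by overwriting the trigger OID with the invalid OID, so a partial SET CONSTRAINTS run can skip them on the final pass.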
{
"msg_contents": "Jan Wieck wrote:\n> \n> Third we need an internal list of triggers that must be\n> invoked later because at the time an event occured where they\n> should have been triggered, they appeared in the other list\n> and their execution is delayed until transaction end or\n> explicit execution. This list must remember the OID of the\n> trigger to invoke (to identify the procedure and the\n> arguments), the relation that caused the trigger and the\n> CTID's of the OLD and NEW row.\n> \n> That last list could grow extremely! Think of a trigger\n> that's executing commands over SPI which in turn activate\n> deferred triggers. Since the order of trigger execution is\n> very important for RI, I can't see any chance to\n> simplify/condense this information. Thus it is 16 bytes at\n> least per deferred trigger call (2 OID's plus 2 CTID's). I\n> think one or more temp files would fit best for this.\n> \n> A last tricky point is if one of a bunch of deferred triggers\n> is explicitly called for execution. At this time, the entries\n> for it in the temp file(s) must get processed and marked\n> executed (maybe by overwriting the triggers OID with the\n> invalid OID) while other trigger events still have to get\n> recorded.\n\nI believe that things are much simpler.\nFor each deferable constraint (trigger) we have to remember\nthe LastCommandIdProccessedByConstraint. When the mode of\na constraint changes from defered to immediate (SET CONSTRAINT MODE), \nmodified tuple will be fetched from WAL from down to up until\ntuple modified by LastCommandIdProccessedByConstraint is fetched\nand this is show stopper. Now we remember CommandId of \nSET CONSTRAINT MODE as new LastCommandIdProccessedByConstraint.\nWhen LastCommandIdProccessedByConstraint is changed by\nSET CONSTRAINT MODE DEFERRED we remeber this in flag to\nupdate LastCommandIdProccessedByConstraint later with higher \nCommandId of first modification of triggered table (to reduce \namount of data to read from WAL).\n\n?\n\nVadim\n",
"msg_date": "Tue, 21 Sep 1999 23:15:02 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Referential Integrity In PostgreSQL"
},
{
"msg_contents": "Vadim wrote:\n\n> I believe that things are much simpler.\n> For each deferable constraint (trigger) we have to remember\n> the LastCommandIdProccessedByConstraint. When the mode of\n> a constraint changes from defered to immediate (SET CONSTRAINT MODE),\n> modified tuple will be fetched from WAL from down to up until\n> tuple modified by LastCommandIdProccessedByConstraint is fetched\n> and this is show stopper. Now we remember CommandId of\n> SET CONSTRAINT MODE as new LastCommandIdProccessedByConstraint.\n> When LastCommandIdProccessedByConstraint is changed by\n> SET CONSTRAINT MODE DEFERRED we remeber this in flag to\n> update LastCommandIdProccessedByConstraint later with higher\n> CommandId of first modification of triggered table (to reduce\n> amount of data to read from WAL).\n\nHmmm,\n\n I'm not sure what side effects it could have if the triggers\n at the time of\n\n SET CONSTRAINTS c1, c2 IMMEDIATE\n\n arent fired in the same order they have been recorded - must\n think about that for a while. In that case I must be able to\n scan WAL from one command ID until another regardless of the\n resultrelation. Is that possible?\n\n Another issue is this: isn't it possible to run a database\n (or maybe an entire installation) without WAL? Does it make\n the code better maintainable to have WAL and RI coupled that\n strong?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 21 Sep 1999 17:28:51 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: Referential Integrity In PostgreSQL"
},
{
"msg_contents": "Jan Wieck wrote:\n> \n> I'm not sure what side effects it could have if the triggers\n> at the time of\n> \n> SET CONSTRAINTS c1, c2 IMMEDIATE\n> \n> arent fired in the same order they have been recorded - must\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nDid you mean - in the same order as tables were modified?\n\n> think about that for a while. In that case I must be able to\n> scan WAL from one command ID until another regardless of the\n> resultrelation. Is that possible?\n\nWAL records are in the same order as tuples (regardless of\nresult relation) was modified. Example: UPDATE of T1\nfires (immediate) after row trigger inserting tuple\ninto T2. WAL records: \n--> up\n{old_T1_tuple version ID, new_T1_tuple version ID and values}\n{new_T2_tuple ID and values}\n...\nT1 update record\nT2 insert record\n...\n--> down\n\nBut records will be fetched from WAL in reverse order, from\ndown to up.\n\nDoes it matter?\nOrder of modifications made by UPDATE/DELETE is undefined.\nThough, order has some sence for INSERT ... SELECT ORDER BY -:)\nNevertheless, I don't see in standard anything about order\nof constraint checks.\n\nBTW, I found what standard means by \"immediate\":\n---\n The checking of a constraint depends on its constraint mode within\n the current SQL-transaction. If the constraint mode is immediate,\n| then the constraint is effectively checked at the end of each\n ^^^^^^^^^^^^^^^^^^\n| ___________________________________________________________________\n| ANSI Only-SQL3\n| ___________________________________________________________________\n| SQL-statement S, unless S is executed because it is a <triggered\n ^^^^^^^^^^^^^^^\n| SQL statement>, in which case, the constraint is effectively\n| checked at the end of the SQL-statement that is the root cause\n| of S.\n---\n\nAnd now about triggers (regardless of ROW or STATEMENT level!):\n---\n 4.22.2 Execution of triggered actions\n\n The execution of triggered actions depends on the cursor mode of \n the current SQL-transaction. If the cursor mode is set to cascade\n off, then the execution of the <triggered SQL statement>s is effec-\n tively deferred until enacted implicitly be execution of a <commit \n statement> or a <close statement>. Otherwise, the <triggered SQL\n statement>s are effectively executed either before or after the\n ^^^^^^^^^^^^^^^^^^^\n execution of each SQL-statement, as determined by the specified\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n <trigger action time>.\n---\n\nHm. \n\n> Another issue is this: isn't it possible to run a database\n> (or maybe an entire installation) without WAL? Does it make\n\nDo you worry about disk space? -:)\nWith archive mode off only log segments (currently, 64M each)\nrequired by active transactions (which made some changes)\nwill present on disk.\n\n> the code better maintainable to have WAL and RI coupled that\n> strong?\n\nThis doesn't add any complexity to WAL manager.\n\nVadim\n",
"msg_date": "Wed, 22 Sep 1999 03:10:29 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Referential Integrity In PostgreSQL"
},
{
"msg_contents": "Vadim wrote:\n\n> But records will be fetched from WAL in reverse order, from\n> down to up.\n>\n> Does it matter?\n\n Might require to teach the WAL-manager to do it top-down too.\n And even then it might be better on performance to scan my\n constraint-log for events to the same tuple. It has records\n of a fixed, very small size and fetching tuples by CTID from\n the heap (direct block access) is required anyway because for\n delayed trigger invocation I neen OLD values too - and that's\n not in WAL if I read it right.\n\n But as I said I'd like to leave that coupling for later.\n\n> BTW, I found what standard means by \"immediate\":\n> ---\n> The checking of a constraint depends on its constraint mode within\n> the current SQL-transaction. If the constraint mode is immediate,\n> | then the constraint is effectively checked at the end of each\n> ^^^^^^^^^^^^^^^^^^\n> | ___________________________________________________________________\n> | ANSI Only-SQL3\n> | ___________________________________________________________________\n> | SQL-statement S, unless S is executed because it is a <triggered\n> ^^^^^^^^^^^^^^^\n> | SQL statement>, in which case, the constraint is effectively\n> | checked at the end of the SQL-statement that is the root cause\n> | of S.\n> ---\n\n Ah - so ALL constraint-triggers must be AFTER <event> and\n deferred at least until the end of the USER-query.\n\n>\n> And now about triggers (regardless of ROW or STATEMENT level!):\n> ---\n> 4.22.2 Execution of triggered actions\n>\n> The execution of triggered actions depends on the cursor mode of\n> the current SQL-transaction. If the cursor mode is set to cascade\n> off, then the execution of the <triggered SQL statement>s is effec-\n> tively deferred until enacted implicitly be execution of a <commit\n> statement> or a <close statement>. Otherwise, the <triggered SQL\n> statement>s are effectively executed either before or after the\n> ^^^^^^^^^^^^^^^^^^^\n> execution of each SQL-statement, as determined by the specified\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> <trigger action time>.\n> ---\n\n We do not have FOR UPDATE cursors. So (even if to be kept in\n mind) there is no CURSOR mode to care for right now.\n\n Changing BEFORE triggers to behave exactly like that would\n require to do the execution of the plan twice, one time to\n fire triggers, another time to perform the action itself. I\n don't think that the perfomance cost is worth this little\n amount of accuracy. Such a little difference should be\n mentioned in the product notes and period.\n\n AFTER triggers could simply be treated half like IMMEDIATE\n constraints - deferred until the end of a single statement\n (not user-query). So there are four times where the deferred\n trigger queue is run (maybe partially). At the end of a\n statement, end of a user-query, at a syncpoint (not sure if\n we have them up to now) and end of transaction.\n\n Things are getting much clearer - Tnx.\n\n>\n> Do you worry about disk space? -:)\n> With archive mode off only log segments (currently, 64M each)\n> required by active transactions (which made some changes)\n> will present on disk.\n\n Never - my motto is \"don't force it - use a bigger hammer\".\n But the above seems to be exactly like the Oracle behaviour\n where online-redolog's aren't affected by archive mode.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. 
#\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 21 Sep 1999 23:06:44 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: Referential Integrity In PostgreSQL"
}
] |
[
{
"msg_contents": "> Oh - well - vacuum shouldn't touch relations where\n> deferred triggers are outstanding. Might require some\n> special lock entry - Vadim?\n\nAll modified data will be in this same still open transaction.\nTherefore no relevant data can be removed by vacuum anyway.\n\nIt is my understanding, that the RI check is performed on the newest \navailable (committed) data (+ modified data from my own tx). \nE.g. a primary key that has been removed by another transaction after\nmy begin work will lead to an RI violation if referenced as foreign key.\n\nAndreas\n",
"msg_date": "Tue, 21 Sep 1999 11:24:09 +0200",
"msg_from": "Andreas Zeugswetter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: Referential Integrity In PostgreSQL"
},
{
"msg_contents": ">\n> > Oh - well - vacuum shouldn't touch relations where\n> > deferred triggers are outstanding. Might require some\n> > special lock entry - Vadim?\n>\n> All modified data will be in this same still open transaction.\n> Therefore no relevant data can be removed by vacuum anyway.\n\n I expect this, but I really need to be sure that not even the\n location of the tuple in the heap will change. I need to find\n the tuples at the time the deferred triggers must be executed\n via heap_fetch() by their CTID!\n\n>\n> It is my understanding, that the RI check is performed on the newest\n> available (committed) data (+ modified data from my own tx).\n> E.g. a primary key that has been removed by another transaction after\n> my begin work will lead to an RI violation if referenced as foreign key.\n\n Absolutely right. The function that will fire the deferred\n triggers must switch to READ COMMITTED isolevel while doing\n so.\n\n What I'm not sure about is which snapshot to use to get the\n OLD tuples (outdated in this transaction by a previous\n command). Vadim?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 21 Sep 1999 13:39:27 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Referential Integrity In PostgreSQL"
},
{
"msg_contents": "Jan Wieck wrote:\n> \n> > It is my understanding, that the RI check is performed on the newest\n> > available (committed) data (+ modified data from my own tx).\n> > E.g. a primary key that has been removed by another transaction after\n> > my begin work will lead to an RI violation if referenced as foreign key.\n> \n> Absolutely right. The function that will fire the deferred\n> triggers must switch to READ COMMITTED isolevel while doing\n ^^^^^^^^^^^^^^\n> so.\n\nNO!\nWhat if one transaction deleted PK, another one inserted FK\nand now both performe RI check? Both transactions _must_\nuse DIRTY READs to notice that RI violated by another\nin-progress transaction and wait for concurrent transaction...\n\nBTW, using triggers to check _each_ modified tuple\n(i.e. run Executor for each modified tuple) is bad for\nperformance. We could implement direct support for\nstandard RI constraints.\n\nUsing rules (statement level triggers) for INSERT...SELECT,\nUPDATE and DELETE queries would be nice! Actually, RI constraint\nchecks need in very simple queries (i.e. without distinct etc)\nand the only we would have to do is\n\n> What I'm not sure about is which snapshot to use to get the\n> OLD tuples (outdated in this transaction by a previous\n> command). Vadim?\n\n1. Add CommandId to Snapshot.\n2. Use Snapshot->CommandId instead of global CurrentScanCommandId.\n3. Use Snapshots with different CommandId-s to get OLD/NEW\n versions.\n\nBut I agreed that the size of parsetrees may be big and for\nCOPY...FROM/INSERTs we should remember IDs of modified\ntuples. Well. Please remember that I implement WAL right\nnow, already have 1000 lines of code and hope to run first\ntests after writing additional ~200 lines -:)\nWe could read modified tuple IDs from WAL...\n\nVadim\n",
"msg_date": "Tue, 21 Sep 1999 22:33:20 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Referential Integrity In PostgreSQL"
},
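A minimal sketch of the race Vadim describes, with the interleaving of the two sessions shown as comments (the pk/fk tables are hypothetical):

    -- T1: BEGIN; DELETE FROM pk WHERE id = 1;   -- not yet committed
    -- T2: BEGIN; INSERT INTO fk VALUES (1);     -- not yet committed
    --
    -- If both RI checks run in READ COMMITTED mode, T1 sees no fk row and
    -- T2 still sees the pk row, so both checks pass, both transactions
    -- commit, and integrity is silently broken.  Only a dirty read, which
    -- sees the other side's in-progress change, lets either check notice
    -- the conflict and wait for the concurrent transaction.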
{
"msg_contents": ">\n> Jan Wieck wrote:\n> >\n> > > It is my understanding, that the RI check is performed on the newest\n> > > available (committed) data (+ modified data from my own tx).\n> > > E.g. a primary key that has been removed by another transaction after\n> > > my begin work will lead to an RI violation if referenced as foreign key.\n> >\n> > Absolutely right. The function that will fire the deferred\n> > triggers must switch to READ COMMITTED isolevel while doing\n> ^^^^^^^^^^^^^^\n> > so.\n>\n> NO!\n> What if one transaction deleted PK, another one inserted FK\n> and now both performe RI check? Both transactions _must_\n> use DIRTY READs to notice that RI violated by another\n> in-progress transaction and wait for concurrent transaction...\n\n Oh - I see - yes.\n\n>\n> BTW, using triggers to check _each_ modified tuple\n> (i.e. run Executor for each modified tuple) is bad for\n> performance. We could implement direct support for\n> standard RI constraints.\n\n As I want to implement it, there would be not much difference\n between a regular trigger invocation and a deferred one. If\n that causes a performance problem, I think we should speed up\n the trigger call mechanism in general instead of not using\n triggers.\n\n>\n> Using rules (statement level triggers) for INSERT...SELECT,\n> UPDATE and DELETE queries would be nice! Actually, RI constraint\n> checks need in very simple queries (i.e. without distinct etc)\n> and the only we would have to do is\n>\n> > What I'm not sure about is which snapshot to use to get the\n> > OLD tuples (outdated in this transaction by a previous\n> > command). Vadim?\n>\n> 1. Add CommandId to Snapshot.\n> 2. Use Snapshot->CommandId instead of global CurrentScanCommandId.\n> 3. Use Snapshots with different CommandId-s to get OLD/NEW\n> versions.\n>\n> But I agreed that the size of parsetrees may be big and for\n> COPY...FROM/INSERTs we should remember IDs of modified\n> tuples. Well. Please remember that I implement WAL right\n> now, already have 1000 lines of code and hope to run first\n> tests after writing additional ~200 lines -:)\n> We could read modified tuple IDs from WAL...\n\n Not only on COPY. One regular INSERT/UPDATE/DELETE statement\n can actually fire thousands of trigger calls right now. These\n triggers normally use SPI to execute their own queries. If\n such a trigger now uses a query that in turn causes a\n deferred constraint, we might have to save thousands of\n deferred querytrees - impossible mission.\n\n That's IMHO a clear drawback against using rules for\n deferrable RI.\n\n What I'm currently doing is clearly encapsulated in some\n functions in commands/trigger.c (except for some additional\n attributes in pg_trigger). If it later turns out that we can\n combine the information required into WAL, I think we have\n time enough to do so and shouldn't really care if v6.6\n doesn't have it already combined.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 21 Sep 1999 16:55:33 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Referential Integrity In PostgreSQL"
},
{
"msg_contents": "Jan Wieck wrote:\n> \n> > But I agreed that the size of parsetrees may be big and for\n> > COPY...FROM/INSERTs we should remember IDs of modified\n> > tuples. Well. Please remember that I implement WAL right\n> > now, already have 1000 lines of code and hope to run first\n> > tests after writing additional ~200 lines -:)\n> > We could read modified tuple IDs from WAL...\n> \n> Not only on COPY. One regular INSERT/UPDATE/DELETE statement\n> can actually fire thousands of trigger calls right now. These\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\nYes, because of we have not Statement Level Triggers (SLT).\nDeferred SLT would require us to remember _one_ parsertree for each\nstatement, just like deferred rules.\n\n> triggers normally use SPI to execute their own queries. If\n> such a trigger now uses a query that in turn causes a\n> deferred constraint, we might have to save thousands of\n ^^^^^^^^^^^^^^^^^^^^^\n> deferred querytrees - impossible mission.\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nWhy should we save _thousands_ of querytrees in the case\nof row level trigger (I assume you mean one querytree for \neach modified tuple)?\nAs I described in prev letter - we have to remember just\nLastCommandIdProccessedByConstraint to stop fetching\ntuples from WAL.\n\nBTW, this is what sql3-12aug93 says about triggers and RI:\n\n22)If the <trigger event> specifies UPDATE, then let Ci be the i-th \n <column name> in the <trigger column list>.\n /* i.e UPDATE OF C1,..Cj */\n T shall not be the referencing table in any <referential \n constraint definition> that specifies ON UPDATE CASCADE, \n ON UPDATE SET NULL, ON UPDATE SET DEFAULT, ON DELETE SET NULL, \n or ON DELETE SET DEFAULT and contains a <reference column list> \n that includes Ci.\n\nInteresting?\n\nVadim\n",
"msg_date": "Wed, 22 Sep 1999 00:13:13 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Referential Integrity In PostgreSQL"
}
] |
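The SQL3 rule Vadim quotes at the end can be made concrete with a sketch in the draft standard's own syntax (not something v6.5 parses; pk, fk and log are hypothetical):

    CREATE TABLE pk (id int PRIMARY KEY);
    CREATE TABLE fk (ref int REFERENCES pk (id) ON UPDATE CASCADE);

    -- Per item 22, this trigger definition would be illegal: fk is the
    -- referencing table of an ON UPDATE CASCADE constraint whose
    -- reference column list includes the trigger column "ref".
    -- (SQL3-style sketch; "log" is a hypothetical table.)
    CREATE TRIGGER t AFTER UPDATE OF ref ON fk
        FOR EACH ROW INSERT INTO log VALUES (1);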
[
{
"msg_contents": "Vadim wrote:\n> > Absolutely right. The function that will fire the deferred\n> > triggers must switch to READ COMMITTED isolevel while doing\n> ^^^^^^^^^^^^^^\n> > so.\n>\n> NO!\n> What if one transaction deleted PK, another one inserted FK\n> and now both performe RI check? Both transactions _must_\n> use DIRTY READs to notice that RI violated by another\n> in-progress transaction and wait for concurrent transaction...\n\nI think we need some kind of lock on the PK table row.\nThis is because a rollback must allways work. \n(If tx1 (insert PK) wants a rollback after tx2 did insert FK \nthis cannot throw a RI Violation) \n\nI don't think we can read dirty.\n\nWe have to wait for PK lock, and decide after tx1 commited/rolled back.\n\nOn timeout we decide as follows:\nEven if above tx1 (insert PK) is committed later, \nwe throw an error for tx2 (insert FK).\nAlso if a pk row is deleted/updated/inserted but not committed yet, \nwe ignore both old and new value for the FK check of tx2\nafter timeout and violate tx2.\n\nA lock mode wait 0 would be convenient here.\n\nEverything else imho leads to a violated integrity.\n\nAndreas\n",
"msg_date": "Tue, 21 Sep 1999 17:49:05 +0200",
"msg_from": "Andreas Zeugswetter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: Referential Integrity In PostgreSQL"
},
{
"msg_contents": "Andreas Zeugswetter wrote:\n> \n> > What if one transaction deleted PK, another one inserted FK\n> > and now both performe RI check? Both transactions _must_\n> > use DIRTY READs to notice that RI violated by another\n> > in-progress transaction and wait for concurrent transaction...\n> \n> I think we need some kind of lock on the PK table row.\n\nModified tuples are locked by the fact that t_xmin/t_xmax\nis/are in-progress. DIRTY READ allows to see that tuples\nis being modified. I supposed to use some function to wait\nconcurrent transaction commit/abort.\n\n> This is because a rollback must allways work.\n> (If tx1 (insert PK) wants a rollback after tx2 did insert FK\n> this cannot throw a RI Violation)\n\ntx2 can't commit till tx1 is in-progress, right?\nMore of that, if tx2 is in serializable mode and there was\nno PK committed before tx2 began (i.e. visible to tx2 queries)\nthen this should result in tx2 abort, imho.\n\n> \n> I don't think we can read dirty.\n> \n> We have to wait for PK lock, and decide after tx1 commited/rolled back.\n\nYes, but select _never_ waits and READ COMMITTED never sees\nuncommitted concurrent changes. That's why I propose to use\nDIRTY READ (to see) and function (to wait).\n\nIf two tx-s will wait one another then one of them will be\naborted (deadlock condition) and another will continue to\nperform constraint check.\n\n> On timeout we decide as follows:\n\nWhy timeout should be used?\n\n> Even if above tx1 (insert PK) is committed later,\n> we throw an error for tx2 (insert FK).\n\nI'm not sure that we should abort tx2 running in READ COMMITTED\nmode.\n\n> Also if a pk row is deleted/updated/inserted but not committed yet,\n> we ignore both old and new value for the FK check of tx2\n> after timeout and violate tx2.\n\nWhy should we abort tx2 if pk is deleted but uncommitted yet?\nIn the case of tx1 (delete pk) abort we could let tx2 inserts\nfk. Why not?\n\n> A lock mode wait 0 would be convenient here.\n> \n> Everything else imho leads to a violated integrity.\n\nVadim\n",
"msg_date": "Wed, 22 Sep 1999 01:18:55 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Referential Integrity In PostgreSQL"
}
] |
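The wait-instead-of-timeout scheme Vadim argues for can be sketched as follows; the waiting primitive is assumed to be an internal backend function, not an SQL-level call, and pk/fk are the same hypothetical tables as above:

    -- T1: BEGIN; DELETE FROM pk WHERE id = 1;  -- stamps t_xmax on the row
    -- T2's RI check, scanning dirty, sees the in-progress delete and
    -- waits on T1 rather than guessing after a timeout:
    --   T1 commits -> the pk row is gone and T2 raises the RI violation;
    --   T1 aborts  -> the row stands and T2's INSERT INTO fk goes through.
    -- If the two ever wait on each other, deadlock detection aborts one
    -- of them and the survivor finishes its check.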
[
{
"msg_contents": "Uh oh,\n\n I think deferred RI constraints must only fire the actions\n that remain after all commands during the entire transaction\n are condensed to the total minimum required to get that\n state, because deferred RI must only check what VISIBLY\n happened during the transaction.\n\n Thinking on the tuple level, a sequence of\n INSERT,UPDATE,UPDATE must fire only one INSERT trigger, but\n with the values of the last UPDATE. An UPDATE,DELETE sequence\n is in fact a DELETE of the original tuple and an\n INSERT,UPDATE,DELETE sequence is nothing.\n\n That means that the recording mechnism of the trigger events\n must be very smart on UPDATE and DELETE events, looking at\n the x_min of the old tuple if that resulted from the current\n transaction. If so, follow the events backward, disable\n previous ones and change the new event into what it really\n has to be.\n\n But some problems remain unsolvable by this:\n\n - PK has an ON DELETE CASCADE for FK\n - BEGIN\n - DELETE PK\n - INSERT same PK\n - COMMIT.\n\n This really shouldn't invoke the cascading delete, because at\n COMMIT the PK still is there. Same for a constraint that\n forbids deletion of a PK while referenced by FK. Therefore\n the deferred event recorder must check on INSERT any previous\n DELETES for the same relation if the key does match and drop\n both deferred triggers if so. Therefore it needs to know\n which attributes build the PK of that relation\n (<relname>_pkey guaranteed?).\n\n Well, I think that's finally the death of RI over rules. The\n code managing those rules during CREATE/ALTER TABLE would\n become totally unmaintainable. And (sorry Vadim) it's the\n death of SLT for this too because this event tracking must be\n done on the tuple level.\n\n It complicated the trigger approach too, but IMHO not too\n bad. Anyway, some co-developer(s) doing the parser- and\n utility-statement stuff (SET CONSTRAINTS ... etc.) would be\n great.\n\n Volunteers?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 21 Sep 1999 20:46:06 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "RI question"
},
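The DELETE-then-reINSERT case from the message above, written out (again in not-yet-supported syntax, with the hypothetical pk/fk tables as before and the constraint deferred):

    BEGIN;
    DELETE FROM pk WHERE id = 1;   -- queues a cascaded delete of fk rows
    INSERT INTO pk VALUES (1);     -- the same key is back by COMMIT time
    COMMIT;
    -- At COMMIT the key still exists, so the two queued events must cancel
    -- out and no cascade may fire - exactly the condensing the deferred
    -- event recorder has to perform.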
{
"msg_contents": "Jan Wieck wrote:\n> \n> Uh oh,\n\nWell, it's time for me to do some other work, so I'll\nreturn to discussion later.\n\nGood luck.\n\nVadim\n",
"msg_date": "Wed, 22 Sep 1999 03:14:38 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] RI question"
}
] |
[
{
"msg_contents": "Hi All, \n\nSummary of the problem:\nI have another patch for a problem that I have run across, I describe\nthe problem, the solution I propose, and I have included the patch as\nan attachment.\n\nThe problem is that given two queries of the form\n\nSELECT <target list> FROM <range table> \n WHERE <var> <op> <function> (<const>);\n\nand\n\nSELECT <target list> FROM <range table> \n WHERE <var> <op> <const>;\n\nand a usable index on the table attribute corresponds to <var>,\nPostgreSQL will process the queries in different ways. Where the\nsecond query will use the index, the <function> in the first index\nwill fool the optimizer into missing the index even though it is\napplicable.\n\n\nExample:\n------------------------------------------------------------------------\nWelcome to the POSTGRESQL interactive sql monitor:\n Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n[PostgreSQL 6.5.1 on i686-pc-linux-gnu, compiled by gcc 2.7.2.3]\n\n type \\? for help on slash commands\n type \\q to quit\n type \\g or terminate with semicolon to execute query\n You are currently connected to the database: bernie\n\n\nbernie=> \\i ../pgsql6_6/test.sql\nCREATE TABLE t1 (a1 int4);\nCREATE\n\nCREATE INDEX t1_idx ON t1 USING btree (a1);\nCREATE\n\nCREATE FUNCTION sqr (int4) RETURNS int4\n AS 'SELECT ($1)*($1)'\n LANGUAGE 'sql';\nCREATE\n\nINSERT INTO t1 VALUES (1);\nINSERT 188236 1\n\n \" \" \" \" \" \" \" \"\n\nINSERT INTO t1 VALUES (10);\nINSERT 188245 1\n\n-- Select with predicate of the form <var> <op> <const>\n\nSELECT * FROM t1 WHERE t1.a1=5;\na1\n--\n 5\n(1 row)\n\n\nEXPLAIN SELECT * FROM t1 WHERE t1.a1=5;\nNOTICE: QUERY PLAN:\n\nIndex Scan using t1_idx on t1 (cost=2.05 rows=1 width=4)\n\nEXPLAIN\n\n-- Select with predicate of the form <var> <op> <function> (<const>)\n\nSELECT * FROM t1 WHERE t1.a1 = sqr(2);\na1\n--\n 4\n(1 row)\n\n\nEXPLAIN SELECT * FROM t1 WHERE t1.a1 = sqr(2);\nNOTICE: QUERY PLAN:\n\nSeq Scan on t1 (cost=43.00 rows=100 width=4)\n\nEXPLAIN\nEOF\n\n-------------------------------------------------------------------------\nCause: \n\nThe cause of the problem is in the match_clause_to_indexkey() In\noptimizer/path/indxpath.c. The comment before the function says\n\n * To match, the clause:\n *\n * (1a) for a restriction clause: must be in the form (indexkey\nop const)\n * or (const op indexkey), or\n\nSo the routine that matches a restriction clause to an index is not\nsmart enough to realize that a function of constant arguments is\nreally a constant.\n\n\nSolution: \n\nThe solution that I propose is to include code in the optimizer that\npicks functions with constant arguments out of a qualification\nclause, and evaluates them. If sub-expression tree in a qual node\nonly consists of functions, operators, boolean nodes, and constants,\nthen it should evaluate to a constant node. It would be possible to\nscan for such subtrees in the parser without evaluating them, but it\nseems to me that doing the evaluation early is an optimization, and a\nsimplification of the planner and executor trees.\n\nI have implemented the solution by adding a tree mutator called\neval_const_expr_mutator() to optimizer/util/clauses.c. This mutator\ncalls ExecEvalExpr() from executor/execQual.c to do the actual\nevaluation. The ExpressionContext argument to ExecEvalExpr() is\nassigned a null pointer. 
This hack works, because the tree that is\nbeing evaluated contains only constant nodes, The only code called by\nExecEvalExpr() that needs the econtext is code that resolves parameter\nand variable nodes. The eval_const_expr_mutator() uses the fields in\nthe fcache structure that ExecEvalExpr() creates to construct the \nConst node that it returns.\n\nI don't know how you all feel about mixing code from the executor and\nthe planner in this way, but it if you accept early evaluation of\nconstant functions in the planner, then there has to be some common\nfunctionality between the two sections. I would be happy to hear\nsuggestions for a better way to abstract the interface to the\nevaluation code so that the executor and planner see a common neutral\ninterface.\n\nFinally, there is the question of where in the planner should the\nearly evaluation occur. It is not obvious to me where the best point\nis, I chose to put it in\nplan/initsplan.c:add_restrict_and_join_to_rels(). The function\nadd_restrict_and_join_to_rels() loops through the list of qual\nclauses, and adds the join and restriction information to the\nRelOptInfo nodes for the realtions that participate in the qual clauses.\n\nThe function becomes:\n\nvoid\nadd_restrict_and_join_to_rels(Query *root, List *clauses)\n{\n List *clause;\n\n foreach(clause, clauses)\n {\n clause = eval_const_expr_in_quals(clause)\n add_restrict_and_join_to_rel(root, (Node*)\nlfirst(clause));\n }\n}\n\nThis choice means that evaluation is performed right before the call\nto make_one_rel() in planmain.c:subplanner(). \n\nResults:\n\nWith the patch the second SELECT statement in the example becomes\n\n------------------------------------------------------------------------\nbernie=> SELECT * FROM t1 WHERE t1.a1 = sqr(2);\na1\n--\n 4\n(1 row)\n\nbernie=> \nbernie=> EXPLAIN SELECT * FROM t1 WHERE t1.a1 = sqr(2);\nNOTICE: QUERY PLAN:\n\nIndex Scan using t1_idx on t1 (cost=2.50 rows=10 width=4)\n\nEXPLAIN\n------------------------------------------------------------------------\n\nThat's a long explanation for a small patch, but hacking this stuff is\na little like walking on hot stones --- you want to be sure that you\nare doing it right before you get burnt.\n\nBernie Frankpitt\n\n\n-------------------------------------------------------------------------\n\nPatch attached:\n\nNote: I pgindent'ed both the .orig and the new files before making the \npatch",
"msg_date": "Tue, 21 Sep 1999 20:18:00 +0000",
"msg_from": "Bernard Frankpitt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Early evaluation of constant expresions (with PATCH)"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Bernard Frankpitt <[email protected]> writes:\n> > The solution that I propose is to include code in the optimizer that\n> > picks functions with constant arguments out of a qualification\n> > clause, and evaluates them.\n> \n> This is something I had on my own to-do list, and I'm glad to see\n> someone beat me to it. But you've only done half the job: you\n> should also be folding operators with constant arguments.\n> \n> Also, you need to be wary of functions like now() and random().\n> There probably isn't any other way to handle these than to add a\n> column to pg_proc flagging functions that can't be constant-folded.\n>\n\nI actually do the operators as well, and also boolean operators (which\nare handled by special Expr nodes).\n\nI puzzled over case of now() for a while but I don't think that it\nraises a problem.\n\nFor queries like\n\nSELECT * FROM t WHERE t.a < now();\n\nEarly evaluation seems quite reasonable. Now means a fixed time close to \nthe time the backend received the query. It seems to me that all the\nnow() calls in a query should return values pretty close to each other.\nA query like\n\nSELECT * FROM t1 t2 WHERE t1.a < now() AND t2.a < now();\n\nwill have two values of now that are very close, since the evaluations\nboth happen in the planner. People who expect the two now() calls to\ngive exactly the same value in this case are expecting too much, queries\nlike this should be rewritten\n\nSELECT * FROM t1 t2 WHERE t1.a < now() AND t1.a = t2.a;\n\nIn fact istm that the correct way to handle now() would be to have a\nvalue that is constant over a transation, and comensurate with the\nnumbering of tids.\n\nI don't think that random() is a problem at all. It gets called once\neach time it is written in the query string. That is certainly a\nreasonable interpretation of its meaning. \n\n> > is, I chose to put it in\n> > plan/initsplan.c:add_restrict_and_join_to_rels().\n> \n> I believe it would be best to do it considerably earlier, specifically,\n> before cnfify(). It might even be worth running the code twice,\n> once before and once after cnfify.\n> \n> Also, probably we should apply it to the targetlist as well as the qual.\n> \n\nYes, close to cnfify might be a better place. I only did it for the\nquals because I don't think I understand the other parts of the plan\ntrees well enough. The function is quite easy to use though, it acts as\na filter on connected subtrees that consist of List nodes and all Expr\nnodes other than\nSUBPLAN_EXPR nodes. Because of the recursive way that qual plans are\nbuilt, subplans are still optimized.\n\nAnother factor about positioning of the filter that I was uncertain\nabout was time expense. Is the time taken by multiple tree walks in the\nplanner\nvery significant in the overall scheme of things?\n\nBernie\n",
"msg_date": "Tue, 21 Sep 1999 19:06:53 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Early evaluation of constant expresions (with PATCH)"
},
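Bernie's reading of now() matches how it already behaves (Thomas confirms this below): the value is pinned to the transaction, so two calls cannot drift apart no matter when they are folded:

    BEGIN;
    SELECT now();   -- some time t0
    SELECT now();   -- still t0: constant for the whole transaction
    COMMIT;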
{
"msg_contents": "Bernard Frankpitt <[email protected]> writes:\n> The solution that I propose is to include code in the optimizer that\n> picks functions with constant arguments out of a qualification\n> clause, and evaluates them.\n\nThis is something I had on my own to-do list, and I'm glad to see\nsomeone beat me to it. But you've only done half the job: you\nshould also be folding operators with constant arguments.\n\nAlso, you need to be wary of functions like now() and random().\nThere probably isn't any other way to handle these than to add a\ncolumn to pg_proc flagging functions that can't be constant-folded.\n\n> Finally, there is the question of where in the planner should the\n> early evaluation occur. It is not obvious to me where the best point\n> is, I chose to put it in\n> plan/initsplan.c:add_restrict_and_join_to_rels().\n\nI believe it would be best to do it considerably earlier, specifically,\nbefore cnfify(). It might even be worth running the code twice,\nonce before and once after cnfify.\n\nAlso, probably we should apply it to the targetlist as well as the qual.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 21 Sep 1999 21:34:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Early evaluation of constant expresions (with PATCH) "
},
{
"msg_contents": "> Bernard Frankpitt <[email protected]> writes:\n> > The solution that I propose is to include code in the optimizer that\n> > picks functions with constant arguments out of a qualification\n> > clause, and evaluates them.\n> \n> This is something I had on my own to-do list, and I'm glad to see\n> someone beat me to it. But you've only done half the job: you\n> should also be folding operators with constant arguments.\n> \n> Also, you need to be wary of functions like now() and random().\n> There probably isn't any other way to handle these than to add a\n> column to pg_proc flagging functions that can't be constant-folded.\n\nAlready there, pg_proc.proiscachable.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 21 Sep 1999 22:17:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Early evaluation of constant expresions (with PATCH)"
},
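For reference, the flag is settable from SQL; this WITH clause is what Tom identifies further down as the only code that touches the field. Reusing Bernard's example function:

    CREATE FUNCTION sqr (int4) RETURNS int4
        AS 'SELECT ($1)*($1)'
        LANGUAGE 'sql'
        WITH (iscachable);
    -- With the flag set, a constant-folder may pre-evaluate sqr(2) to 4
    -- at plan time; unflagged functions would be left alone.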
{
"msg_contents": "[email protected] writes:\n> Tom Lane wrote:\n>> This is something I had on my own to-do list, and I'm glad to see\n>> someone beat me to it. But you've only done half the job: you\n>> should also be folding operators with constant arguments.\n\n> I actually do the operators as well, and also boolean operators (which\n> are handled by special Expr nodes).\n\n(Hangs head...) Yup. That's what I get for opining after a fast\nlate-night scan of a patch. Relying on ExecEvalExpr is a good hack ---\nthe patch is much smaller than I would've guessed. (Actually, now\nthat I look at it, it looks like the functions rather than the\noperators are missing the necessary preinitialization. Perhaps at\nthe place where you chose to put this in, setFcache has already\nbeen done?)\n\nThere are additional smarts that could/should be put in, though.\nIn particular, I think we should be smarter about AND and OR clauses.\nIf *any* of the inputs to an AND are a constant FALSE, you can collapse\nthe node and not bother computing the other subexpressions; likewise\na constant TRUE input to an OR allows short-circuiting. (I have seen\nqueries, primarily machine-generated ones, where this would be an\nenormous win.) Contrariwise, constant TRUE/FALSE inputs can simply be\ndropped, and the AND or OR operator eliminated if only one nonconstant\ninput remains. This is the reason why I think there is an interaction\nwith cnfify(): it rearranges the AND/OR structure of the tree and might\nexpose --- or hide --- opportunities of this kind. (BTW, it might be a\ngood idea to do the first pass of cnfify, namely AND/OR flattening,\nbefore trying to apply this simplification.)\n\nAlso, most operators and functions can be collapsed to NULL if any of\ntheir inputs are NULL, although I don't think we can risk making that\noptimization without adding a flag to pg_proc that tells us if it is OK.\n\n>> Also, you need to be wary of functions like now() and random().\n>> There probably isn't any other way to handle these than to add a\n>> column to pg_proc flagging functions that can't be constant-folded.\n\n> I puzzled over case of now() for a while but I don't think that it\n> raises a problem.\n\nNo, you can't just define the problem away by saying that whatever\nbehavior is convenient to implement is acceptable. It's true that\nnow() is not really a problem, because it's defined to yield the\nstart time of the current transaction, and therefore is effectively\na constant *within any one transaction*. But random() *is* a problem,\nand user-defined functions could be a problem. SQL functions probably\nshouldn't be folded either (not quite sure of that).\n\nBruce points out in another reply that the proiscachable field of\npg_proc is intended for exactly this purpose. It hasn't been kept\nup carefully because no extant code uses it, and in fact hardly any\nof the standard entries in pg_proc are marked iscachable, which is\nobviously silly. But we could certainly go through pg_proc and set\nthe flag on everything except the danger items.\n\n> Another factor about positioning of the filter that I was uncertain\n> about was time expense. Is the time taken by multiple tree walks in\n> the planner very significant in the overall scheme of things?\n\nI don't think you need to worry about anything that has cost linear in\nthe size of the expression tree. Up till a couple weeks ago we had some\ncode in cnfify() that used space and time exponential in the size of the\ntree :-( ... 
now it's down to O(N^2) which is still a bottleneck for\ncomplex query expressions, but O(N) is not to be worried about. Like\nI said, I wouldn't object to running this code twice on a qual.\n\nThere are some upstream places where it would be nice too --- for\nexample, coercion of DEFAULT expressions would be best handled by\nsticking a type-conversion function atop the given parsetree and then\nseeing if this code would simplify it. We'll definitely need to make\nuse of proiscachable to make that safe, however.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 22 Sep 1999 10:26:13 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Early evaluation of constant expresions (with PATCH) "
},
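The AND/OR collapsing Tom asks for, shown on Bernard's earlier table and function:

    SELECT * FROM t1 WHERE t1.a1 = sqr(2) AND 1 = 2;
    -- a constant FALSE input lets the planner collapse the whole AND to
    -- FALSE without evaluating the sqr() arm at all;

    SELECT * FROM t1 WHERE t1.a1 = sqr(2) OR 1 = 1;
    -- a constant TRUE input likewise collapses the OR to TRUE; constant
    -- inputs that don't short-circuit are simply dropped from the clause.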
{
"msg_contents": "You da man Bernard! I've been wanting to do this for a while also, but\nhadn't taken the time to puzzle through it.\n\n> In fact istm that the correct way to handle now() would be to have a\n> value that is constant over a transation, and comensurate with the\n> numbering of tids.\n\nThat is the current behavior of now(); Postgres has a time function\nwhich returns the time of the current transaction and now() (and every\nother form of date/time 'now') uses that.\n\nWhen doing this pre-evaluation in the parse tree, is it possible that\nthe transaction time is not yet set? So perhaps 'now' and now() would\nhave problems here. Remember that \"datetime 'now'\" also resembles a\nconstant but has the same behavior as \"now()\", so can't really be\nconsidered a true constant either.\n\n> I don't think that random() is a problem at all. It gets called once\n> each time it is written in the query string. That is certainly a\n> reasonable interpretation of its meaning.\n\nIf we use the \"is cachable\" flag for procedures (*and* for constants\non types!) then it would be possible to have random() behave as\nexpected- returning unique values for every invocation and unique\nvalues into each field of a single-line query/insert.\n\nI hope we haven't put you off here, and I'd suggest that we\nincorporate your patches as they stand now, then work on the next step\nlater (assuming that it isn't something you have time or interest\nfor). But if you can see doing this next step already, we'd love to\nhave it included in the first patch. What do you think?\n\nThanks for the work. This is a neat area to get improvements for...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 22 Sep 1999 15:19:22 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Early evaluation of constant expresions (with PATCH)"
},
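The random() behavior Thomas describes, spelled out:

    SELECT random(), random();   -- two invocations, two distinct values
    -- Folding random() at plan time would freeze a single value into the
    -- plan, so it must never be marked cachable.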
{
"msg_contents": "Tom Lane wrote:\n> \n> .... (Actually, now\n> that I look at it, it looks like the functions rather than the\n> operators are missing the necessary preinitialization. Perhaps at\n> the place where you chose to put this in, setFcache has already\n> been done?)\n>\n\nThe functions work because the funcid field in the Func node is already\nfilled in, and the EvalQual code uses this field to generate the Fcache.\nIn the case of Oper node there are two fields, one for the pg_operator\nOid,\nand one for the pg_proc Oid. The pg_operator oid is already filled in,\nbut the pg_proc oid isn't. The EvalQual code wants the pg_proc oid, so\nI provide it in the patch before I do the evaluation.\n\n> \n> There are additional smarts that could/should be put in, though. ....\n> \n> { Many good suggestions here } \n>\n> .... without adding a flag to pg_proc that tells us if it is OK.\n> \n\nAll points well taken. I don't have time to do this thoroughly right\nnow,\nbut I will get back to it. My original ( needed-for-project-at-hand )\nmotivation for this was to get index scans to work with expressions that\nevaluate to constants. I can see that I am about to learn quite a bit\nmore about parsing and planning. \n\n> > I puzzled over case of now() for a while but I don't think that it\n> > raises a problem.\n> \n> No, you can't just define the problem away by saying that whatever\n> behavior is convenient to implement is acceptable.\n\nOh darn! -- I've spent too many years studying mathematics\n\n> and user-defined functions could be a problem. SQL functions probably\n> shouldn't be folded either (not quite sure of that).\n> \n> Bruce points out in another reply that the proiscachable field of\n> pg_proc is intended for exactly this purpose. \n\nPerhaps adding another option to create function is in order here. I\nknow how to do that already. Seriously, there are some interesting\nsemantic issues here, especially if the database were being used as the\nbasis for a large dynamic stochastic model.\n\n\nBernie\n",
"msg_date": "Wed, 22 Sep 1999 15:34:32 +0000",
"msg_from": "Bernard Frankpitt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Early evaluation of constant expresions (with PATCH)"
},
{
"msg_contents": "Bernard Frankpitt <[email protected]> writes:\n>> .... (Actually, now\n>> that I look at it, it looks like the functions rather than the\n>> operators are missing the necessary preinitialization. Perhaps at\n>> the place where you chose to put this in, setFcache has already\n>> been done?)\n\n> The functions work because the funcid field in the Func node is already\n> filled in, and the EvalQual code uses this field to generate the\n> Fcache.\n\nOh, OK. Cool. For some reason I was thinking that the planner\nwas supposed to generate the fcache entry somewhere along the line.\n\n> All points well taken. I don't have time to do this thoroughly right\n> now, but I will get back to it.\n\nOK, or I will work on it if I get to it before you do. As Thomas\nremarked, you've provided a great starting point --- thanks!\n\nI know I already have one patch from you that I promised to integrate,\nbut I'm up to my ass in buffer refcount bugs :-(. As soon as I can come\nup for air, I will stick in this code, though I think I will call it\nfrom somewhere near cnfify per prior discussion.\n\n>> Bruce points out in another reply that the proiscachable field of\n>> pg_proc is intended for exactly this purpose. \n\n> Perhaps adding another option to create function is in order here.\n\nActually, create function *has* an iscachable option which sets that\nfield. According to glimpse, it's the only code in the system that\nknows the field exists :-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 22 Sep 1999 11:35:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Early evaluation of constant expresions (with PATCH) "
}
] |
[
{
"msg_contents": "\nI tray create new function bye read from file.\n\nWhen I read this file I have errors\n\npgReadData()- backend closed the channel unexpectedly\n\nin may log file :\nFatal 1 : btree: cannot split if start (2) >= maxoff (2)\n\nor somethings like this:\nfatal 1: my bits moved right off the end of the world!\n\nPostgreSQL 6.5.1 on i686-pc-linux-gnu, compiled by gcc 2.7.2.31\non Debian Slink\n\nNeed help\n\n\nBest regards\n\n\n",
"msg_date": "Wed, 22 Sep 1999 09:36:45 GMT",
"msg_from": "Grzegorz =?iso-8859-2?Q?Prze=BCdziecki?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Problem with new function"
}
] |
[
{
"msg_contents": "\nI've got a big problem: I had an operator defined as follows:\n\nCREATE OPERATOR ^ (\n leftarg = bit1,\n rightarg = bit1,\n procedure = bit1xor\n);\n\nand this was fine until 6.5.1, but in 6.5.2 I get\n\nERROR: parser: parse error at or near \"^\"\n\n\nI've got the same problem with \n\nCREATE OPERATOR | (\n leftarg = bit1,\n rightarg = bit1,\n procedure = bit1or,\n commutator = |\n); \n\nbut at least that didn't work under 6.5.1 either. Can anybody give me a\nhint how to fix this? I know nothing about the parser, or lex or yacc so\nI don't even know where to start. I need to fix this rather urgently ,\nas I have tons of plpgsql functions that make use of the ^ operator. At\nthe moment I get a power for all of these which leads to pretty\ndisastrous consequences :-(.\n\nIf anybody can give me any hint at all, I'll have a go at fixing it.\nMuch appreciated!\n\nAdriaan\n",
"msg_date": "Wed, 22 Sep 1999 16:38:31 +0300",
"msg_from": "Adriaan Joubert <[email protected]>",
"msg_from_op": true,
"msg_subject": "Operator definitions"
},
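Until the grammar is fixed, one possible stopgap is to call the operator's underlying procedure directly, which never touches the broken production (a and b are bit1 columns of a hypothetical table t):

    SELECT bit1xor(a, b) FROM t;   -- instead of a ^ b
    SELECT bit1or(a, b)  FROM t;   -- instead of a | b
    -- plpgsql function bodies can be rewritten the same way.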
{
"msg_contents": "> \n> I've got a big problem: I had an operator defined as follows:\n> \n> CREATE OPERATOR ^ (\n> leftarg = bit1,\n> rightarg = bit1,\n> procedure = bit1xor\n> );\n> \n> and this was fine until 6.5.1, but in 6.5.2 I get\n> \n> ERROR: parser: parse error at or near \"^\"\n> \n> \n> I've got the same problem with \n> \n> CREATE OPERATOR | (\n> leftarg = bit1,\n> rightarg = bit1,\n> procedure = bit1or,\n> commutator = |\n> ); \n> \n> but at least that didn't work under 6.5.1 either. Can anybody give me a\n> hint how to fix this? I know nothing about the parser, or lex or yacc so\n> I don't even know where to start. I need to fix this rather urgently ,\n> as I have tons of plpgsql functions that make use of the ^ operator. At\n> the moment I get a power for all of these which leads to pretty\n> disastrous consequences :-(.\n> \n> If anybody can give me any hint at all, I'll have a go at fixing it.\n> Much appreciated!\n\nWe do special things for ^ and | so it has proper precedence for\n\n\tselect 2 ^ 1*2 \n\tselect 'asdf' | 'asdf' | 'asdf'\n\nHowever, there were no changes I know of in 6.5.2 that would cause it to\nbreak.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 22 Sep 1999 12:20:56 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Operator definitions"
},
{
"msg_contents": "Adriaan Joubert <[email protected]> writes:\n> I've got a big problem: I had an operator defined as follows:\n> CREATE OPERATOR ^ (\n> leftarg = bit1,\n> rightarg = bit1,\n> procedure = bit1xor\n> );\n> and this was fine until 6.5.1, but in 6.5.2 I get\n> ERROR: parser: parse error at or near \"^\"\n\nIt looks to me like '^' and '|' have been left out of the alternatives\nfor MathOp in src/backend/parser/gram.y. It could be they were\ndeliberately omitted, but I bet it's just an oversight.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 22 Sep 1999 16:56:20 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Operator definitions "
},
{
"msg_contents": "> It looks to me like '^' and '|' have been left out of the alternatives\n> for MathOp in src/backend/parser/gram.y. It could be they were\n> deliberately omitted, but I bet it's just an oversight.\n\nOK, here is a patch to allow both ^ and | as operators, both in operator\ndefinitions and expressions. It seems to work for me. Unfortunately the\nregression tests do not tell me an awful lot, as several of them fail on\nthe Alpha anyway. As I don;t really know what I'm doing, I'd appreciate\nit if somebody else could check the patch out and let me know whether it\nis ok.\n\nCheers,\n\nAdriaan",
"msg_date": "Thu, 23 Sep 1999 10:45:09 +0300",
"msg_from": "Adriaan Joubert <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Operator definitions"
},
{
"msg_contents": "> > It looks to me like '^' and '|' have been left out of the alternatives\n> > for MathOp in src/backend/parser/gram.y. It could be they were\n> > deliberately omitted, but I bet it's just an oversight.\n> OK, here is a patch to allow both ^ and | as operators, both in operator\n> definitions and expressions. It seems to work for me. Unfortunately the\n> regression tests do not tell me an awful lot, as several of them fail on\n> the Alpha anyway. As I don;t really know what I'm doing, I'd appreciate\n> it if somebody else could check the patch out and let me know whether it\n> is ok.\n\nIt's fine as far as it goes, but istm that there are other cases\nmissing from gram.y also :(\n\nBruce, how did this stuff get into the stable release? Looks like we\nare going to need a v6.5.3 Real Soon Now. And packagers, we should\nplan on having a patch for v6.5.2. I'll try coming up with one in the\nnext couple of days; I've tested on my own gram.y but it has too many\nother changes to be used as-is.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 23 Sep 1999 13:32:50 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Operator definitions"
},
{
"msg_contents": "Adriaan Joubert <[email protected]> writes:\n> OK, here is a patch to allow both ^ and | as operators, both in operator\n> definitions and expressions. It seems to work for me. Unfortunately the\n> regression tests do not tell me an awful lot, as several of them fail on\n> the Alpha anyway. As I don;t really know what I'm doing, I'd appreciate\n> it if somebody else could check the patch out and let me know whether it\n> is ok.\n\nIf you search for, eg, '%', you will find there are several production\nlists that call out all the operators; your patch only caught one of them.\n\nThis is a real pain in the neck to maintain, but AFAIK we couldn't\ncollapse the productions into a single one using MathOp without losing\noperator precedence info :-(\n\nIt might be helpful if gram.y had annotations like \"# Here be MathOps\"\nso that you could search for the darn things and make sure you had\nadjusted each and every production list whenever you added/deleted one.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 23 Sep 1999 10:40:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Operator definitions "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Adriaan Joubert <[email protected]> writes:\n> > OK, here is a patch to allow both ^ and | as operators, both in operator\n> > definitions and expressions. It seems to work for me. Unfortunately the\n> > regression tests do not tell me an awful lot, as several of them fail on\n> > the Alpha anyway. As I don;t really know what I'm doing, I'd appreciate\n> > it if somebody else could check the patch out and let me know whether it\n> > is ok.\n> \n> If you search for, eg, '%', you will find there are several production\n> lists that call out all the operators; your patch only caught one of them.\n> \n\nWell, as I said, I don't really understand what is going on in that file\nand I added the minimum to make my stuff work. Thomas said he was going\nto have a look at it, so I think I'll rely on him fixing it, before I\nbreak anything else ;-). \n\nI've already noticed that I will need | as an operator for the SQL bit\ntypes, so I may have to hack it a bit more. I just hate changing things\nblindly I don't really understand. Thanks for pointing it out though!\n\nAdriaan\n",
"msg_date": "Thu, 23 Sep 1999 17:56:56 +0300",
"msg_from": "Adriaan Joubert <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Operator definitions"
},
{
"msg_contents": "> > > It looks to me like '^' and '|' have been left out of the alternatives\n> > > for MathOp in src/backend/parser/gram.y. It could be they were\n> > > deliberately omitted, but I bet it's just an oversight.\n> > OK, here is a patch to allow both ^ and | as operators, both in operator\n> > definitions and expressions. It seems to work for me. Unfortunately the\n> > regression tests do not tell me an awful lot, as several of them fail on\n> > the Alpha anyway. As I don;t really know what I'm doing, I'd appreciate\n> > it if somebody else could check the patch out and let me know whether it\n> > is ok.\n> \n> It's fine as far as it goes, but istm that there are other cases\n> missing from gram.y also :(\n> \n> Bruce, how did this stuff get into the stable release? Looks like we\n> are going to need a v6.5.3 Real Soon Now. And packagers, we should\n> plan on having a patch for v6.5.2. I'll try coming up with one in the\n> next couple of days; I've tested on my own gram.y but it has too many\n> other changes to be used as-is.\n\nI have no idea how this got in. Looking at the cvs logs, I don't see\nanything since 6.5 that would cause this to break in 6.5.2. Even\nlooking at an actual diff against 6.5.1, I don't see anything.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 23 Sep 1999 11:39:18 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Operator definitions"
},
{
"msg_contents": "> Adriaan Joubert <[email protected]> writes:\n> > OK, here is a patch to allow both ^ and | as operators, both in operator\n> > definitions and expressions. It seems to work for me. Unfortunately the\n> > regression tests do not tell me an awful lot, as several of them fail on\n> > the Alpha anyway. As I don;t really know what I'm doing, I'd appreciate\n> > it if somebody else could check the patch out and let me know whether it\n> > is ok.\n> \n> If you search for, eg, '%', you will find there are several production\n> lists that call out all the operators; your patch only caught one of them.\n> \n> This is a real pain in the neck to maintain, but AFAIK we couldn't\n> collapse the productions into a single one using MathOp without losing\n> operator precedence info :-(\n> \n> It might be helpful if gram.y had annotations like \"# Here be MathOps\"\n> so that you could search for the darn things and make sure you had\n> adjusted each and every production list whenever you added/deleted one.\n\nOK, I have applied a patch to fix all the operator cases for ^ and |. \nThis will be in 6.6.\n\nThe issue is that we want to specify precedence for the common math\noperators, and I needed to be able to specify precedence for '|' so people\ncould do SELECT 'A' | 'B' | 'C'.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 28 Sep 1999 10:31:03 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Operator definitions"
},
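A quick sanity check for the restored productions: the first statement exercises the precedence case Bruce mentions, the second merely has to parse (executing it needs a matching user-defined "|"), and the third is Adriaan's original failing definition:

    SELECT 2 ^ 1 * 2;         -- parses with ^ grouping tighter than *
    SELECT 'A' | 'B' | 'C';   -- | accepted as a left-associative operator
    CREATE OPERATOR ^ (leftarg = bit1, rightarg = bit1,
                       procedure = bit1xor);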
{
"msg_contents": "> OK, I have applied a patch to fix all the operator cases for ^ and |.\n> This will be in 6.6.\n> The issue is that we want to specify precedence for the common math\n> operators, and I needed to be able to specify precedence for '|' so people\n> could do SELECT 'A' | 'B' | 'C'.\n\nI had already posted and applied a patch for the stable branch, since\nv6.5.2 was damaged wrt v6.5 functionality. The patch will also appear\nin RedHat's rpms for their RH6.1 release. I hadn't yet applied the\npatch to the main branch, but have it in my gram.y code where I'm\nworking on join syntax.\n\nCan you compare your patch of the main branch with the very recent\nchanges on the stable branch?\n\nDarn, back to cvs merge hell...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 28 Sep 1999 15:05:20 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Operator definitions"
},
{
"msg_contents": "> > OK, I have applied a patch to fix all the operator cases for ^ and |.\n> > This will be in 6.6.\n> > The issue is that we want to specify precedence for the common math\n> > operators, and I needed to be able to specify precedence for '|' so people\n> > could do SELECT 'A' | 'B' | 'C'.\n> \n> I had already posted and applied a patch for the stable branch, since\n> v6.5.2 was damaged wrt v6.5 functionality. The patch will also appear\n> in RedHat's rpms for their RH6.1 release. I hadn't yet applied the\n> patch to the main branch, but have it in my gram.y code where I'm\n> working on join syntax.\n> \n> Can you compare your patch of the main branch with the very recent\n> changes on the stable branch?\n> \n> Darn, back to cvs merge hell...\n> \n\nMan, there are tons of changes between the two.\n\nHere are the changes I made. I can easily back this out, and re-add\nafter you are done.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: gram.y\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/parser/gram.y,v\nretrieving revision 2.100\nretrieving revision 2.103\ndiff -c -r2.100 -r2.103\n*** gram.y\t1999/09/28 04:34:44\t2.100\n--- gram.y\t1999/09/28 14:49:36\t2.103\n***************\n*** 10,16 ****\n *\n *\n * IDENTIFICATION\n! *\t $Header: /usr/local/cvsroot/pgsql/src/backend/parser/gram.y,v 2.100 1999/09/28 04:34:44 momjian Exp $\n *\n * HISTORY\n *\t AUTHOR\t\t\tDATE\t\t\tMAJOR EVENT\n--- 10,16 ----\n *\n *\n * IDENTIFICATION\n! *\t $Header: /usr/local/cvsroot/pgsql/src/backend/parser/gram.y,v 2.103 1999/09/28 14:49:36 momjian Exp $\n *\n * HISTORY\n *\t AUTHOR\t\t\tDATE\t\t\tMAJOR EVENT\n***************\n*** 977,982 ****\n--- 977,984 ----\n \t\t\t\t{\t$$ = nconc( $1, lcons( makeString( \"*\"), $3)); }\n \t\t\t| default_expr '^' default_expr\n \t\t\t\t{\t$$ = nconc( $1, lcons( makeString( \"^\"), $3)); }\n+ \t\t\t| default_expr '|' default_expr\n+ \t\t\t\t{\t$$ = nconc( $1, lcons( makeString( \"|\"), $3)); }\n \t\t\t| default_expr '=' default_expr\n \t\t\t\t{\telog(ERROR,\"boolean expressions not supported in DEFAULT\"); }\n \t\t\t| default_expr '<' default_expr\n***************\n*** 1127,1132 ****\n--- 1129,1136 ----\n \t\t\t\t{\t$$ = nconc( $1, lcons( makeString( \"*\"), $3)); }\n \t\t\t| constraint_expr '^' constraint_expr\n \t\t\t\t{\t$$ = nconc( $1, lcons( makeString( \"^\"), $3)); }\n+ \t\t\t| constraint_expr '|' constraint_expr\n+ \t\t\t\t{\t$$ = nconc( $1, lcons( makeString( \"|\"), $3)); }\n \t\t\t| constraint_expr '=' constraint_expr\n \t\t\t\t{\t$$ = nconc( $1, lcons( makeString( \"=\"), $3)); }\n \t\t\t| constraint_expr '<' constraint_expr\n***************\n*** 2042,2047 ****\n--- 2046,2053 ----\n \t\t| '*'\t\t\t{ $$ = \"*\"; }\n \t\t| '/'\t\t\t{ $$ = \"/\"; }\n \t\t| '%'\t\t\t{ $$ = \"%\"; }\n+ \t\t| '^'\t\t\t{ $$ = \"^\"; }\n+ \t\t| '|'\t\t\t{ $$ = \"|\"; }\n \t\t| '<'\t\t\t{ $$ = \"<\"; }\n \t\t| '>'\t\t\t{ $$ = \">\"; }\n \t\t| '='\t\t\t{ $$ = \"=\"; }\n***************\n*** 3638,3643 ****\n--- 3644,3651 ----\n \t\t| '*'\t\t\t\t\t\t\t\t{ $$ = \"*\"; }\n \t\t| '/'\t\t\t\t\t\t\t\t{ $$ = \"/\"; }\n \t\t| '%'\t\t\t\t\t\t\t\t{ $$ = \"%\"; }\n+ \t\t| '^'\t\t\t\t\t\t\t\t{ $$ = \"^\"; }\n+ \t\t| '|'\t\t\t\t\t\t\t\t{ $$ = \"|\"; }\n \t\t;\n \n sub_type: ANY\t\t\t\t\t\t\t\t{ $$ = ANY_SUBLINK; }\n***************\n*** 3672,3693 ****\n 
\t\t\t\t{\t$$ = makeA_Expr(OP, \"%\", NULL, $2); }\n \t\t| '^' a_expr\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"^\", NULL, $2); }\n \t\t| a_expr '%'\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"%\", $1, NULL); }\n \t\t| a_expr '^'\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"^\", $1, NULL); }\n \t\t| a_expr '+' a_expr\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"+\", $1, $3); }\n \t\t| a_expr '-' a_expr\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"-\", $1, $3); }\n \t\t| a_expr '/' a_expr\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"/\", $1, $3); }\n \t\t| a_expr '%' a_expr\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"%\", $1, $3); }\n- \t\t| a_expr '*' a_expr\n- \t\t\t\t{\t$$ = makeA_Expr(OP, \"*\", $1, $3); }\n \t\t| a_expr '^' a_expr\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"^\", $1, $3); }\n \t\t| a_expr '<' a_expr\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"<\", $1, $3); }\n \t\t| a_expr '>' a_expr\n--- 3680,3711 ----\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"%\", NULL, $2); }\n \t\t| '^' a_expr\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"^\", NULL, $2); }\n+ \t\t| '|' a_expr\n+ \t\t\t\t{\t$$ = makeA_Expr(OP, \"|\", NULL, $2); }\n+ \t\t| ':' a_expr\n+ \t\t\t\t{\t$$ = makeA_Expr(OP, \":\", NULL, $2); }\n+ \t\t| ';' a_expr\n+ \t\t\t\t{\t$$ = makeA_Expr(OP, \";\", NULL, $2); }\n \t\t| a_expr '%'\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"%\", $1, NULL); }\n \t\t| a_expr '^'\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"^\", $1, NULL); }\n+ \t\t| a_expr '|'\n+ \t\t\t\t{\t$$ = makeA_Expr(OP, \"|\", $1, NULL); }\n \t\t| a_expr '+' a_expr\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"+\", $1, $3); }\n \t\t| a_expr '-' a_expr\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"-\", $1, $3); }\n+ \t\t| a_expr '*' a_expr\n+ \t\t\t\t{\t$$ = makeA_Expr(OP, \"*\", $1, $3); }\n \t\t| a_expr '/' a_expr\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"/\", $1, $3); }\n \t\t| a_expr '%' a_expr\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"%\", $1, $3); }\n \t\t| a_expr '^' a_expr\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"^\", $1, $3); }\n+ \t\t| a_expr '|' a_expr\n+ \t\t\t\t{\t$$ = makeA_Expr(OP, \"|\", $1, $3); }\n \t\t| a_expr '<' a_expr\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"<\", $1, $3); }\n \t\t| a_expr '>' a_expr\n***************\n*** 3701,3712 ****\n \n \t\t| a_expr '=' a_expr\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"=\", $1, $3); }\n- \t\t| ':' a_expr\n- \t\t\t\t{\t$$ = makeA_Expr(OP, \":\", NULL, $2); }\n- \t\t| ';' a_expr\n- \t\t\t\t{\t$$ = makeA_Expr(OP, \";\", NULL, $2); }\n- \t\t| '|' a_expr\n- \t\t\t\t{\t$$ = makeA_Expr(OP, \"|\", NULL, $2); }\n \t\t| a_expr TYPECAST Typename\n \t\t\t\t{\n \t\t\t\t\t$$ = (Node *)$1;\n--- 3719,3724 ----\n***************\n*** 4089,4094 ****\n--- 4101,4116 ----\n \t\t\t\t\tn->subselect = $4;\n \t\t\t\t\t$$ = (Node *)n;\n \t\t\t\t}\n+ \t\t| a_expr '*' '(' SubSelect ')'\n+ \t\t\t\t{\n+ \t\t\t\t\tSubLink *n = makeNode(SubLink);\n+ \t\t\t\t\tn->lefthand = lcons($1, NULL);\n+ \t\t\t\t\tn->oper = lcons(\"*\",NIL);\n+ \t\t\t\t\tn->useor = false;\n+ \t\t\t\t\tn->subLinkType = EXPR_SUBLINK;\n+ \t\t\t\t\tn->subselect = $4;\n+ \t\t\t\t\t$$ = (Node *)n;\n+ \t\t\t\t}\n \t\t| a_expr '/' '(' SubSelect ')'\n \t\t\t\t{\n \t\t\t\t\tSubLink *n = makeNode(SubLink);\n***************\n*** 4109,4124 ****\n \t\t\t\t\tn->subselect = $4;\n \t\t\t\t\t$$ = (Node *)n;\n \t\t\t\t}\n! \t\t| a_expr '*' '(' SubSelect ')'\n \t\t\t\t{\n \t\t\t\t\tSubLink *n = makeNode(SubLink);\n \t\t\t\t\tn->lefthand = lcons($1, NULL);\n! 
\t\t\t\t\tn->oper = lcons(\"*\",NIL);\n \t\t\t\t\tn->useor = false;\n \t\t\t\t\tn->subLinkType = EXPR_SUBLINK;\n \t\t\t\t\tn->subselect = $4;\n \t\t\t\t\t$$ = (Node *)n;\n \t\t\t\t}\n \t\t| a_expr '<' '(' SubSelect ')'\n \t\t\t\t{\n \t\t\t\t\tSubLink *n = makeNode(SubLink);\n--- 4131,4156 ----\n \t\t\t\t\tn->subselect = $4;\n \t\t\t\t\t$$ = (Node *)n;\n \t\t\t\t}\n! \t\t| a_expr '^' '(' SubSelect ')'\n \t\t\t\t{\n \t\t\t\t\tSubLink *n = makeNode(SubLink);\n \t\t\t\t\tn->lefthand = lcons($1, NULL);\n! \t\t\t\t\tn->oper = lcons(\"^\",NIL);\n \t\t\t\t\tn->useor = false;\n \t\t\t\t\tn->subLinkType = EXPR_SUBLINK;\n \t\t\t\t\tn->subselect = $4;\n \t\t\t\t\t$$ = (Node *)n;\n \t\t\t\t}\n+ \t\t| a_expr '|' '(' SubSelect ')'\n+ \t\t\t\t{\n+ \t\t\t\t\tSubLink *n = makeNode(SubLink);\n+ \t\t\t\t\tn->lefthand = lcons($1, NULL);\n+ \t\t\t\t\tn->oper = lcons(\"|\",NIL);\n+ \t\t\t\t\tn->useor = false;\n+ \t\t\t\t\tn->subLinkType = EXPR_SUBLINK;\n+ \t\t\t\t\tn->subselect = $4;\n+ \t\t\t\t\t$$ = (Node *)n;\n+ \t\t\t\t}\n \t\t| a_expr '<' '(' SubSelect ')'\n \t\t\t\t{\n \t\t\t\t\tSubLink *n = makeNode(SubLink);\n***************\n*** 4179,4184 ****\n--- 4211,4226 ----\n \t\t\t\t\tn->subselect = $5;\n \t\t\t\t\t$$ = (Node *)n;\n \t\t\t\t}\n+ \t\t| a_expr '*' ANY '(' SubSelect ')'\n+ \t\t\t\t{\n+ \t\t\t\t\tSubLink *n = makeNode(SubLink);\n+ \t\t\t\t\tn->lefthand = lcons($1,NIL);\n+ \t\t\t\t\tn->oper = lcons(\"*\",NIL);\n+ \t\t\t\t\tn->useor = false;\n+ \t\t\t\t\tn->subLinkType = ANY_SUBLINK;\n+ \t\t\t\t\tn->subselect = $5;\n+ \t\t\t\t\t$$ = (Node *)n;\n+ \t\t\t\t}\n \t\t| a_expr '/' ANY '(' SubSelect ')'\n \t\t\t\t{\n \t\t\t\t\tSubLink *n = makeNode(SubLink);\n***************\n*** 4199,4209 ****\n \t\t\t\t\tn->subselect = $5;\n \t\t\t\t\t$$ = (Node *)n;\n \t\t\t\t}\n! \t\t| a_expr '*' ANY '(' SubSelect ')'\n \t\t\t\t{\n \t\t\t\t\tSubLink *n = makeNode(SubLink);\n \t\t\t\t\tn->lefthand = lcons($1,NIL);\n! \t\t\t\t\tn->oper = lcons(\"*\",NIL);\n \t\t\t\t\tn->useor = false;\n \t\t\t\t\tn->subLinkType = ANY_SUBLINK;\n \t\t\t\t\tn->subselect = $5;\n--- 4241,4261 ----\n \t\t\t\t\tn->subselect = $5;\n \t\t\t\t\t$$ = (Node *)n;\n \t\t\t\t}\n! \t\t| a_expr '^' ANY '(' SubSelect ')'\n \t\t\t\t{\n \t\t\t\t\tSubLink *n = makeNode(SubLink);\n \t\t\t\t\tn->lefthand = lcons($1,NIL);\n! \t\t\t\t\tn->oper = lcons(\"^\",NIL);\n! \t\t\t\t\tn->useor = false;\n! \t\t\t\t\tn->subLinkType = ANY_SUBLINK;\n! \t\t\t\t\tn->subselect = $5;\n! \t\t\t\t\t$$ = (Node *)n;\n! \t\t\t\t}\n! \t\t| a_expr '|' ANY '(' SubSelect ')'\n! \t\t\t\t{\n! \t\t\t\t\tSubLink *n = makeNode(SubLink);\n! \t\t\t\t\tn->lefthand = lcons($1,NIL);\n! \t\t\t\t\tn->oper = lcons(\"|\",NIL);\n \t\t\t\t\tn->useor = false;\n \t\t\t\t\tn->subLinkType = ANY_SUBLINK;\n \t\t\t\t\tn->subselect = $5;\n***************\n*** 4269,4274 ****\n--- 4321,4336 ----\n \t\t\t\t\tn->subselect = $5;\n \t\t\t\t\t$$ = (Node *)n;\n \t\t\t\t}\n+ \t\t| a_expr '*' ALL '(' SubSelect ')'\n+ \t\t\t\t{\n+ \t\t\t\t\tSubLink *n = makeNode(SubLink);\n+ \t\t\t\t\tn->lefthand = lcons($1, NULL);\n+ \t\t\t\t\tn->oper = lcons(\"*\",NIL);\n+ \t\t\t\t\tn->useor = false;\n+ \t\t\t\t\tn->subLinkType = ALL_SUBLINK;\n+ \t\t\t\t\tn->subselect = $5;\n+ \t\t\t\t\t$$ = (Node *)n;\n+ \t\t\t\t}\n \t\t| a_expr '/' ALL '(' SubSelect ')'\n \t\t\t\t{\n \t\t\t\t\tSubLink *n = makeNode(SubLink);\n***************\n*** 4289,4299 ****\n \t\t\t\t\tn->subselect = $5;\n \t\t\t\t\t$$ = (Node *)n;\n \t\t\t\t}\n! 
\t\t| a_expr '*' ALL '(' SubSelect ')'\n \t\t\t\t{\n \t\t\t\t\tSubLink *n = makeNode(SubLink);\n \t\t\t\t\tn->lefthand = lcons($1, NULL);\n! \t\t\t\t\tn->oper = lcons(\"*\",NIL);\n \t\t\t\t\tn->useor = false;\n \t\t\t\t\tn->subLinkType = ALL_SUBLINK;\n \t\t\t\t\tn->subselect = $5;\n--- 4351,4371 ----\n \t\t\t\t\tn->subselect = $5;\n \t\t\t\t\t$$ = (Node *)n;\n \t\t\t\t}\n! \t\t| a_expr '^' ALL '(' SubSelect ')'\n \t\t\t\t{\n \t\t\t\t\tSubLink *n = makeNode(SubLink);\n \t\t\t\t\tn->lefthand = lcons($1, NULL);\n! \t\t\t\t\tn->oper = lcons(\"^\",NIL);\n! \t\t\t\t\tn->useor = false;\n! \t\t\t\t\tn->subLinkType = ALL_SUBLINK;\n! \t\t\t\t\tn->subselect = $5;\n! \t\t\t\t\t$$ = (Node *)n;\n! \t\t\t\t}\n! \t\t| a_expr '|' ALL '(' SubSelect ')'\n! \t\t\t\t{\n! \t\t\t\t\tSubLink *n = makeNode(SubLink);\n! \t\t\t\t\tn->lefthand = lcons($1, NULL);\n! \t\t\t\t\tn->oper = lcons(\"|\",NIL);\n \t\t\t\t\tn->useor = false;\n \t\t\t\t\tn->subLinkType = ALL_SUBLINK;\n \t\t\t\t\tn->subselect = $5;\n***************\n*** 4363,4390 ****\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"%\", NULL, $2); }\n \t\t| '^' b_expr\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"^\", NULL, $2); }\n \t\t| b_expr '%'\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"%\", $1, NULL); }\n \t\t| b_expr '^'\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"^\", $1, NULL); }\n \t\t| b_expr '+' b_expr\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"+\", $1, $3); }\n \t\t| b_expr '-' b_expr\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"-\", $1, $3); }\n \t\t| b_expr '/' b_expr\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"/\", $1, $3); }\n \t\t| b_expr '%' b_expr\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"%\", $1, $3); }\n- \t\t| b_expr '*' b_expr\n- \t\t\t\t{\t$$ = makeA_Expr(OP, \"*\", $1, $3); }\n \t\t| b_expr '^' b_expr\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"^\", $1, $3); }\n! \t\t| ':' b_expr\n! \t\t\t\t{\t$$ = makeA_Expr(OP, \":\", NULL, $2); }\n! \t\t| ';' b_expr\n! \t\t\t\t{\t$$ = makeA_Expr(OP, \";\", NULL, $2); }\n! \t\t| '|' b_expr\n! \t\t\t\t{\t$$ = makeA_Expr(OP, \"|\", NULL, $2); }\n \t\t| b_expr TYPECAST Typename\n \t\t\t\t{\n \t\t\t\t\t$$ = (Node *)$1;\n--- 4435,4466 ----\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"%\", NULL, $2); }\n \t\t| '^' b_expr\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"^\", NULL, $2); }\n+ \t\t| '|' b_expr\n+ \t\t\t\t{\t$$ = makeA_Expr(OP, \"|\", NULL, $2); }\n+ \t\t| ':' b_expr\n+ \t\t\t\t{\t$$ = makeA_Expr(OP, \":\", NULL, $2); }\n+ \t\t| ';' b_expr\n+ \t\t\t\t{\t$$ = makeA_Expr(OP, \";\", NULL, $2); }\n \t\t| b_expr '%'\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"%\", $1, NULL); }\n \t\t| b_expr '^'\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"^\", $1, NULL); }\n+ \t\t| b_expr '|'\n+ \t\t\t\t{\t$$ = makeA_Expr(OP, \"|\", $1, NULL); }\n \t\t| b_expr '+' b_expr\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"+\", $1, $3); }\n \t\t| b_expr '-' b_expr\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"-\", $1, $3); }\n+ \t\t| b_expr '*' b_expr\n+ \t\t\t\t{\t$$ = makeA_Expr(OP, \"*\", $1, $3); }\n \t\t| b_expr '/' b_expr\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"/\", $1, $3); }\n \t\t| b_expr '%' b_expr\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"%\", $1, $3); }\n \t\t| b_expr '^' b_expr\n \t\t\t\t{\t$$ = makeA_Expr(OP, \"^\", $1, $3); }\n! \t\t| b_expr '|' b_expr\n! \t\t\t\t{\t$$ = makeA_Expr(OP, \"|\", $1, $3); }\n \t\t| b_expr TYPECAST Typename\n \t\t\t\t{\n \t\t\t\t\t$$ = (Node *)$1;",
"msg_date": "Tue, 28 Sep 1999 11:12:08 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Operator definitions"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> Darn, back to cvs merge hell...\n\nYes, Bruce seems to be catching up on his patch queue --- and applying\na lot of old code that needs changes :-(.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 28 Sep 1999 11:15:12 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Operator definitions "
},
{
"msg_contents": "> Thomas Lockhart <[email protected]> writes:\n> > Darn, back to cvs merge hell...\n> \n> Yes, Bruce seems to be catching up on his patch queue --- and applying\n> a lot of old code that needs changes :-(.\n\nTrue.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 28 Sep 1999 11:19:06 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Operator definitions"
},
{
"msg_contents": "> Thomas Lockhart <[email protected]> writes:\n> > Darn, back to cvs merge hell...\n> \n> Yes, Bruce seems to be catching up on his patch queue --- and applying\n> a lot of old code that needs changes :-(.\n\nYes. Good news is that I am done.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 28 Sep 1999 11:19:50 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Operator definitions"
}
] |
[
{
"msg_contents": "Someone mentioned that it took them quite a while to compile the\nPostgreSQL code. My wallclock time is 3:52 for a compile with -O1 using\ngcc 2.7.2.1. This is on a dual-PII 350MHz running BSD/OS 4.01.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 22 Sep 1999 16:16:28 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Compile timing"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Someone mentioned that it took them quite a while to compile the\n> PostgreSQL code. My wallclock time is 3:52 for a compile with -O1 using\n> gcc 2.7.2.1. This is on a dual-PII 350MHz running BSD/OS 4.01.\n\nSomeone is me. Someone only has a Pentium 133 with 256K L2 cache and\n32MB RAM. Someone is also running KDE/X on this laptop. \n\nUsing egcs 1.1.2 under Linux 2.2.5.(RedHat 6.0).\n\nYou have a machine that is 5 times faster than mine. Also, this is more\nthan a compile -- this is a whole sequence of events -- cleaning out the\nold build directory, unpacking the tarball, applying patches,\nconfigure;make;make install, some other sequences of events, and then\ncpio'ing and compressing to build several rpm's. So, about two thirds\nof the time is actually spent compiling, which is still a little slow\ncompared to your result.\n\nIf my machine compiled PostgreSQL as fast as yours, I'd be one happy\ncamper!\n\nLamar Owen\nWGCR Internet Radio\n",
"msg_date": "Wed, 22 Sep 1999 17:08:47 -0400",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Compile timing"
},
{
"msg_contents": "Lamar Owen wrote:\n\n>\n> Bruce Momjian wrote:\n> >\n> > Someone mentioned that it took them quite a while to compile the\n> > PostgreSQL code. My wallclock time is 3:52 for a compile with -O1 using\n> > gcc 2.7.2.1. This is on a dual-PII 350MHz running BSD/OS 4.01.\n\nHmmm,\n\n Is there something wrong with your system, Bruce? My 64MB\n 333MHz singe-PII (same gcc version under Linux 2.2.10) does a\n -O1 clean-compile in 3:28.\n\n Maybe the SMP overhead is eating up the missing cycles. You\n would like a parallelized make to outperform me again - no?\n\n> You have a machine that is 5 times faster than mine. Also, this is more\n> than a compile -- this is a whole sequence of events -- cleaning out the\n> old build directory, unpacking the tarball, applying patches,\n> configure;make;make install, some other sequences of events, and then\n> cpio'ing and compressing to build several rpm's. So, about two thirds\n> of the time is actually spent compiling, which is still a little slow\n> compared to your result.\n\n Lamar, shouldn't you run at least the regression suite too\n before building the rpm's?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 23 Sep 1999 02:22:54 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Compile timing"
},
{
"msg_contents": "> Lamar Owen wrote:\n> \n> >\n> > Bruce Momjian wrote:\n> > >\n> > > Someone mentioned that it took them quite a while to compile the\n> > > PostgreSQL code. My wallclock time is 3:52 for a compile with -O1 using\n> > > gcc 2.7.2.1. This is on a dual-PII 350MHz running BSD/OS 4.01.\n> \n> Hmmm,\n> \n> Is there something wrong with your system, Bruce? My 64MB\n> 333MHz singe-PII (same gcc version under Linux 2.2.10) does a\n> -O1 clean-compile in 3:28.\n> \n> Maybe the SMP overhead is eating up the missing cycles. You\n> would like a parallelized make to outperform me again - no?\n\nI knew someone would find this an interesting topic.\n\nI should also mention I have 256MB of RAM, and Baracuda SCSI-Ultra\ndrives with tagged queuing enabled.\n\nOK, I turned off my custom flags, and got for -O1:\n\n\treal 3m8.080s\n\tuser 2m21.752s\n\tsys 0m35.291s\n\nI usually do:\n\n\tCUSTOM_COPT=-g -Wall -O1 -Wmissing-prototypes -Wmissing-declarations\n\nMy bet is the symbol output takes some time to produce. I noticed the\nlink of the postgres binary was faster without -g.\n\nWith parallelization, using gmake -j2, I got:\n\n\treal 3m8.980s\n\tuser 2m23.442s\n\tsys 0m36.142s\n\nNot sure why -j2 is not faster than normal -j, unless gmake knows to use\n-j2 on a 2-cpu system by default. Looking at the xps output, I don't\nsee multiple compiles being performed by gmake.\n\nGmake -j fails because compiles happen before supporting files are\ncreated.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 22 Sep 1999 23:01:10 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Compile timing"
},
{
"msg_contents": "> Not sure why -j2 is not faster than normal -j...\n\nI was just looking at this a little while ago at work. It is not\nfaster because gmake does not propagate the \"-j2\" flag to submakes, on\nthe (correct) theory that you might get a geometrically growing system\nload, rather than just keeping two makes running through all the\nsubdirectories.\n\nThis is the behavior of \"-j\", unless you specify it without a numeric\nparameter, in which case it *does* allow parallel submakes.\n\nThe first time I tried \"-j\", I did it without reading the man pages\nand without specifying a numeric parameter. It did a magnificent job\nof bringing down my system trying to build ACE/TAO, a *large* Corba\npackage. Chewed up all of real memory, then all of swap; not sure if I\nran out of process slots or memory first but it wasn't pretty. It was\n*very* fast though :)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 23 Sep 1999 05:38:55 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Compile timing"
},
{
"msg_contents": "> With parallelization, using gmake -j2, I got:\n>\n> real 3m8.980s\n> user 2m23.442s\n> sys 0m36.142s\n>\n> Not sure why -j2 is not faster than normal -j, unless gmake knows to use\n> -j2 on a 2-cpu system by default. Looking at the xps output, I don't\n> see multiple compiles being performed by gmake.\n\n Because it hasn't prallelized :-)\n\n The -j2 wasn't handed down to the submakes and I'm sure\n there's only one started from the top.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 23 Sep 1999 09:51:08 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Compile timing"
},
{
"msg_contents": "> > Not sure why -j2 is not faster than normal -j...\n> \n> I was just looking at this a little while ago at work. It is not\n> faster because gmake does not propagate the \"-j2\" flag to submakes, on\n> the (correct) theory that you might get a geometrically growing system\n> load, rather than just keeping two makes running through all the\n> subdirectories.\n> \n> This is the behavior of \"-j\", unless you specify it without a numeric\n> parameter, in which case it *does* allow parallel submakes.\n> \n> The first time I tried \"-j\", I did it without reading the man pages\n> and without specifying a numeric parameter. It did a magnificent job\n> of bringing down my system trying to build ACE/TAO, a *large* Corba\n> package. Chewed up all of real memory, then all of swap; not sure if I\n> ran out of process slots or memory first but it wasn't pretty. It was\n> *very* fast though :)\n> \n\nYes, make -j without a number does so many makes here the compile fails\ntoo, and the load average soars.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 23 Sep 1999 10:42:52 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Compile timing"
},
{
"msg_contents": "Jan Wieck wrote:\n> Lamar, shouldn't you run at least the regression suite too\n> before building the rpm's?\n\nWell, once the compile passes regression on a particular architecture,\nIMO it doesn't need to be done for every compile of that version of\nPostgreSQL.\n\nHowever, as part of the testing for the built RPM's, I do run the\nregression tests (which I have packaged into the RPM set --\npostgresql-test) before releasing. The regression tests don't take too\nlong (unless I run bigtest).\n\nRunning regression as part of the RPM build is a possibility, however.\n\nAs it stands, Intel fails two tests (float8 and geometry) and Alpha\nfails two tests (geometry and another one that I can't remember right\nnow) -- one of which is due to the documented problem with sort order on\nthe Alpha (Uncle George has thoroughly covered that topic, recently).\n\nThe RPM building development cycle is a little different than most. \nRPM's are built in a fully automatic fashion -- a single command\ninvocation (rpm -ba postgresql.spec) compiles, mungs, and packages all\nthe way to the binary RPM's, then it cleans up. Getting to that point,\nhowever, can be a challenge, as some patches are necessary to get a\nbuild in the FHS-compliant RedHat environment. It took me about 2 hours\nto get a good build of 6.5.2, due to the need for a couple of Makefile\npatches in the perl client (in particular, the src/interfaces Makefile\nissues a 'perl5 makefile.pl' command, when there is no executable on\nRedHat 6 named perl5), along with some other munging that had to be\ndone.\n\nI have to think in the mindset of a packager, not a developer, when\ndoing this -- but I have to also keep up with development in order to\npackage. And I LOVE it!\n\nSo, in short, every binary RPM set I build for RedHat 6 has been\ninstalled on my personal laptop running a close to virgin RedHat 6\ninstallation -- and has had regression run. The set built for RedHat\n5.2 has had the same thing done on my inhouse utility machine, which is\na puny little machine (486/100 with 16MB). It takes almost two hours to\nbuild the binary RPM set on that machine. But, then again, that is also\nmy amanda server and is quite loaded.\n\nLamar Owen\nWGCR Internet Radio\n",
"msg_date": "Thu, 23 Sep 1999 10:49:38 -0400",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Compile timing"
},
{
"msg_contents": "> > Not sure why -j2 is not faster than normal -j...\n> \n> I was just looking at this a little while ago at work. It is not\n> faster because gmake does not propagate the \"-j2\" flag to submakes, on\n> the (correct) theory that you might get a geometrically growing system\n> load, rather than just keeping two makes running through all the\n> subdirectories.\n> \n> This is the behavior of \"-j\", unless you specify it without a numeric\n> parameter, in which case it *does* allow parallel submakes.\n> \n> The first time I tried \"-j\", I did it without reading the man pages\n> and without specifying a numeric parameter. It did a magnificent job\n> of bringing down my system trying to build ACE/TAO, a *large* Corba\n> package. Chewed up all of real memory, then all of swap; not sure if I\n> ran out of process slots or memory first but it wasn't pretty. It was\n> *very* fast though :)\n\nI just tried:\n\n\tgmake MAKE=\"gmake -j 2\"\n\nand that fails because we can't parellize because we need certain\nincludes. I can't seem to get the proper includes to happen before it\nfails.\n\nI am getting:\n\ngmake[3]: Entering directory\n`/var/local/src/pgsql/CURRENT/pgsql/src/backend/access/common'\ngmake[3]: *** No rule to make target `hash/SUBSYS.o'. Stop.\ngmake[3]: Leaving directory `/var/local/src/pgsql/CURRENT/pgsql/src/backend/acce\n\nSeems it is trying to complete the linking before the compiles are done.\nIf I made -j2 happen only in directories with compiles, and not outside,\nthat might fix it. The propogation of -j2 to subdirectories and the\nexponential explosion is exactly what happens.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 23 Sep 1999 10:59:12 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Compile timing"
}
] |
[
{
"msg_contents": "I have been finding a lot of interesting stuff while looking into\nthe buffer reference count/leakage issue.\n\nIt turns out that there were two specific things that were camouflaging\nthe existence of bugs in this area:\n\n1. The BufferLeakCheck routine that's run at transaction commit was\nonly looking for nonzero PrivateRefCount to indicate a missing unpin.\nIt failed to notice nonzero LastRefCount --- which meant that an\nerror in refcount save/restore usage could leave a buffer pinned,\nand BufferLeakCheck wouldn't notice.\n\n2. The BufferIsValid macro, which you'd think just checks whether\nit's handed a valid buffer identifier or not, actually did more:\nit only returned true if the buffer ID was valid *and* the buffer\nhad positive PrivateRefCount. That meant that the common pattern\n\tif (BufferIsValid(buf))\n\t\tReleaseBuffer(buf);\nwouldn't complain if it were handed a valid but already unpinned buffer.\nAnd that behavior masks bugs that result in buffers being unpinned too\nearly. For example, consider a sequence like\n\n1. LockBuffer (buffer now has refcount 1). Store reference to\n a tuple on that buffer page in a tuple table slot.\n2. Copy buffer reference to a second tuple-table slot, but forget to\n increment buffer's refcount.\n3. Release second tuple table slot. Buffer refcount drops to 0,\n so it's unpinned.\n4. Release original tuple slot. Because of BufferIsValid behavior,\n no assert happens here; in fact nothing at all happens.\n\nThis is, of course, buggy code: during the interval from 3 to 4 you\nstill have an apparently valid tuple reference in the original slot,\nwhich someone might try to use; but the buffer it points to is unpinned\nand could be replaced at any time by another backend.\n\nIn short, we had errors that would mask both missing-pin bugs and\nmissing-unpin bugs. And naturally there were a few such bugs lurking\nbehind them...\n\n3. The buffer refcount save/restore stuff, which I had suspected\nwas useless, is not only useless but also buggy. The reason it's\nbuggy is that it only works if used in a nested fashion. You could\nsave state A, pin some buffers, save state B, pin some more\nbuffers, restore state B (thereby unpinning what you pinned since\nthe save), and finally restore state A (unpinning the earlier stuff).\nWhat you could not do is save state A, pin, save B, pin more, then\nrestore state A --- that might unpin some of A's buffers, or some\nof B's buffers, or some unforeseen combination thereof. If you\nrestore A and then restore B, you do not necessarily return to a zero-\npins state, either. And it turns out the actual usage pattern was a\nnearly random sequence of saves and restores, compounded by a failure to\ndo all of the restores reliably (which was masked by the oversight in\nBufferLeakCheck).\n\n\nWhat I have done so far is to rip out the buffer refcount save/restore\nsupport (including LastRefCount), change BufferIsValid to a simple\nvalidity check (so that you get an assert if you unpin something that\nwas pinned), change ExecStoreTuple so that it increments the refcount\nwhen it is handed a buffer reference (for symmetry with ExecClearTuple's\ndecrement of the refcount), and fix about a dozen bugs exposed by these\nchanges.\n\nI am still getting Buffer Leak notices in the \"misc\" regression test,\nspecifically in the queries that invoke more than one SQL function.\nWhat I find there is that SQL functions are not always run to\ncompletion. 
Apparently, when a function can return multiple tuples,\nit won't necessarily be asked to produce them all. And when it isn't,\npostquel_end() isn't invoked for the function's current query, so its\ntuple table isn't cleared, so we have dangling refcounts if any of the\ntuples involved are in disk buffers.\n\nIt may be that the save/restore code was a misguided attempt to fix\nthis problem. I can't tell. But I think what we really need to do is\nfind some way of ensuring that Postquel function execution contexts\nalways get shut down by the end of the query, so that they don't leak\nresources.\n\nI suppose a straightforward approach would be to keep a list of open\nfunction contexts somewhere (attached to the outer execution context,\nperhaps), and clean them up at outer-plan shutdown.\n\nWhat I am wondering, though, is whether this addition is actually\nnecessary, or is it a bug that the functions aren't run to completion\nin the first place? I don't really understand the semantics of this\n\"nested dot notation\". I suppose it is a Berkeleyism; I can't find\nanything about it in the SQL92 document. The test cases shown in the\nmisc regress test seem peculiar, not to say wrong. For example:\n\nregression=> SELECT p.hobbies.equipment.name, p.hobbies.name, p.name FROM person p;\nname |name |name\n-------------+-----------+-----\nadvil |posthacking|mike\npeet's coffee|basketball |joe\nhightops |basketball |sally\n(3 rows)\n\nwhich doesn't appear to agree with the contents of the underlying\nrelations:\n\nregression=> SELECT * FROM hobbies_r;\nname |person\n-----------+------\nposthacking|mike\nposthacking|jeff\nbasketball |joe\nbasketball |sally\nskywalking |\n(5 rows)\n\nregression=> SELECT * FROM equipment_r;\nname |hobby\n-------------+-----------\nadvil |posthacking\npeet's coffee|posthacking\nhightops |basketball\nguts |skywalking\n(4 rows)\n\nI'd have expected an output along the lines of\n\nadvil |posthacking|mike\npeet's coffee|posthacking|mike\nhightops |basketball |joe\nhightops |basketball |sally\n\nIs the regression test's expected output wrong, or am I misunderstanding\nwhat this query is supposed to do? Is there any documentation anywhere\nabout how SQL functions returning multiple tuples are supposed to\nbehave?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 22 Sep 1999 20:05:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Progress report: buffer refcount bugs and SQL functions"
},
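
A toy, self-contained C model of the pin-count discipline described above -- not the real bufmgr code; pin() and unpin() are hypothetical stand-ins for IncrBufferRefCount() and ReleaseBuffer(). It shows why the tightened rules catch both bug classes: every stored reference must own its own pin, and releasing an unpinned buffer should trip an assertion instead of passing silently.

#include <assert.h>
#include <stdio.h>

#define NBUFFERS 8

static int PrivateRefCount[NBUFFERS];   /* this backend's pin counts */

static void
pin(int buf)                            /* stand-in for IncrBufferRefCount */
{
    PrivateRefCount[buf]++;
}

static void
unpin(int buf)                          /* stand-in for ReleaseBuffer */
{
    /* after the fix: unpinning a buffer that isn't pinned asserts
     * instead of being silently accepted */
    assert(PrivateRefCount[buf] > 0);
    PrivateRefCount[buf]--;
}

int
main(void)
{
    int buf = 3;
    int i;

    pin(buf);           /* slot A stores a tuple living on this page  */
    pin(buf);           /* slot B copies the reference, so it re-pins */
    unpin(buf);         /* slot B cleared                             */
    unpin(buf);         /* slot A cleared, refcount back to zero      */

    /* the commit-time leak check: everything must be unpinned by now */
    for (i = 0; i < NBUFFERS; i++)
        assert(PrivateRefCount[i] == 0);

    printf("no buffer leaks\n");
    return 0;
}

Drop the second pin() and the last unpin() asserts (the step 2-4 bug above); drop an unpin() instead and the commit-time leak check fires.
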
{
"msg_contents": "Tom Lane wrote:\n\n> [...]\n>\n> What I am wondering, though, is whether this addition is actually\n> necessary, or is it a bug that the functions aren't run to completion\n> in the first place? I don't really understand the semantics of this\n> \"nested dot notation\". I suppose it is a Berkeleyism; I can't find\n> anything about it in the SQL92 document. The test cases shown in the\n> misc regress test seem peculiar, not to say wrong. For example:\n>\n> [...]\n>\n> Is the regression test's expected output wrong, or am I misunderstanding\n> what this query is supposed to do? Is there any documentation anywhere\n> about how SQL functions returning multiple tuples are supposed to\n> behave?\n\n I've said some time (maybe too long) ago, that SQL functions\n returning tuple sets are broken in general. This nested dot\n notation (which I think is an artefact from the postquel\n querylanguage) is implemented via set functions.\n\n Set functions have total different semantics from all other\n functions. First they don't really return a tuple set as\n someone might think - all that screwed up code instead\n simulates that they return something you could consider a\n scan of the last SQL statement in the function. Then, on\n each subsequent call inside of the same command, they return\n a \"tupletable slot\" containing the next found tuple (that's\n why their Func node is mangled up after the first call).\n\n Second they have a targetlist what I think was originally\n intended to extract attributes out of the tuples returned\n when the above scan is asked to get the next tuple. But as I\n read the code it invokes the function again and this might\n cause the resource leakage you see.\n\n Third, all this seems to never have been implemented\n (thought?) to the end. A targetlist doesn't make sense at\n this place because it could at max contain a single attribute\n - so a single attno would have the same power. And if set\n functions could appear in the rangetable (FROM clause), than\n they would be treated as that and regular Var nodes in the\n query would do it.\n\n I think you shouldn't really care for that regression test\n and maybe we should disable set functions until we really\n implement stored procedures returning sets in the rangetable.\n\n Set functions where planned by Stonebraker's team as\n something that today is called stored procedures. But AFAIK\n they never reached the useful state because even in Postgres\n 4.2 you haven't been able to get more than one attribute out\n of a set function. It was a feature of the postquel\n querylanguage that you could get one attribute from a set\n function via\n\n RETRIEVE (attributename(setfuncname()))\n\n While working on the constraint triggers I've came across\n another regression test (triggers :-) that's errorneous too.\n The funny_dup17 trigger proc executes an INSERT into the same\n relation where it get fired for by a previous INSERT. And it\n stops this recursion only if it reaches a nesting level of\n 17, which could only occur if it is fired DURING the\n execution of it's own SPI_exec(). After Vadim quouted some\n SQL92 definitions about when constraint checks and triggers\n are to be executed, I decided to fire regular triggers at the\n end of a query too. Thus, there is absolutely no nesting\n possible for AFTER triggers resulting in an endless loop.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. 
#\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 23 Sep 1999 03:19:39 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Progress report: buffer refcount bugs and SQL functions"
},
{
"msg_contents": "I wrote:\n> What I have done so far is to rip out the buffer refcount save/restore\n> support (including LastRefCount), change BufferIsValid to a simple\n> validity check (so that you get an assert if you unpin something that\n> was pinned), ...\n\ner, make that \"unpin something that *wasn't* pinned\".\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 22 Sep 1999 21:22:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Progress report: buffer refcount bugs and SQL functions "
},
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n> Tom Lane wrote:\n>> What I am wondering, though, is whether this addition is actually\n>> necessary, or is it a bug that the functions aren't run to completion\n>> in the first place?\n\n> I've said some time (maybe too long) ago, that SQL functions\n> returning tuple sets are broken in general.\n\nIndeed they are. Try this on for size (using the regression database):\n\n\tSELECT p.name, p.hobbies.equipment.name FROM person p;\n\tSELECT p.hobbies.equipment.name, p.name FROM person p;\n\nYou get different result sets!?\n\nThe problem in this example is that ExecTargetList returns the isDone\nflag from the last targetlist entry, regardless of whether there are\nincomplete iterations in previous entries. More generally, the buffer\nleak problem that I started with only occurs if some Iter nodes are not\nrun to completion --- but execQual.c has no mechanism to make sure that\nthey have all reached completion simultaneously.\n\nWhat we really need to make functions-returning-sets work properly is\nan implementation somewhat like aggregate functions. We need to make\na list of all the Iter nodes present in a targetlist and cycle through\nthe values returned by each in a methodical fashion (run the rightmost\nthrough its full cycle, then advance the next-to-rightmost one value,\nrun the rightmost through its cycle again, etc etc). Also there needs\nto be an understanding of the hierarchy when an Iter appears in the\narguments of another Iter's function. (You cycle the upper one for\n*each* set of arguments created by cycling its sub-Iters.)\n\nI am not particularly interested in working on this feature right now,\nsince AFAIK it's a Berkeleyism not found in SQL92. What I've done\nis to hack ExecTargetList so that it behaves semi-sanely when there's\nmore than one Iter at the top level of the target list --- it still\ndoesn't really give the right answer, but at least it will keep\ngenerating tuples until all the Iters are done at the same time.\nIt happens that that's enough to give correct answers for the examples\nshown in the misc regress test. Even when it fails to generate all\nthe possible combinations, there will be no buffer leaks.\n\nSo, I'm going to declare victory and go home ;-). We ought to add a\nTODO item along the lines of\n * Functions returning sets don't really work right\nin hopes that someone will feel like tackling this someday.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 23 Sep 1999 10:18:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Progress report: buffer refcount bugs and SQL functions"
},
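
A self-contained C sketch of the "odometer" cycling proposed above for multiple Iter nodes in one targetlist: the rightmost set advances fastest and carries leftward, so all iterators complete together and every combination is produced exactly once. The toy string arrays stand in for the tuple sets that functions returning sets would yield; this illustrates the suggested algorithm, not actual executor code.

#include <stdio.h>

int
main(void)
{
    /* stand-ins for the result sets of two Iter nodes */
    const char *hobbies[]   = {"posthacking", "basketball"};
    const char *equipment[] = {"advil", "hightops"};
    const int   nsets[2]    = {2, 2};
    int         pos[2]      = {0, 0};   /* current position of each Iter */

    for (;;)
    {
        printf("%s | %s\n", hobbies[pos[0]], equipment[pos[1]]);

        /* advance the rightmost Iter; carry leftward when one wraps */
        int i = 1;
        while (i >= 0 && ++pos[i] == nsets[i])
            pos[i--] = 0;
        if (i < 0)
            break;                      /* all Iters finished together */
    }
    return 0;
}

This prints all four combinations of the two toy sets; nesting (an Iter inside another Iter's arguments) would add the per-combination re-cycling described above.
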
{
"msg_contents": "Tom Lane wrote:\n\n>\n> [email protected] (Jan Wieck) writes:\n> > Tom Lane wrote:\n>\n> What we really need to make functions-returning-sets work properly is\n> an implementation somewhat like aggregate functions. We need to make\n> a list of all the Iter nodes present in a targetlist and cycle through\n> the values returned by each in a methodical fashion (run the rightmost\n> through its full cycle, then advance the next-to-rightmost one value,\n> run the rightmost through its cycle again, etc etc). Also there needs\n> to be an understanding of the hierarchy when an Iter appears in the\n> arguments of another Iter's function. (You cycle the upper one for\n> *each* set of arguments created by cycling its sub-Iters.)\n\n Shouldn't a function returning a SET of tuples cause a proper\n join?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 23 Sep 1999 17:28:51 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Progress report: buffer refcount bugs and SQL functions"
},
{
"msg_contents": "> So, I'm going to declare victory and go home ;-). We ought to add a\n> TODO item along the lines of\n> * Functions returning sets don't really work right\n> in hopes that someone will feel like tackling this someday.\n\nAdded to TODO, with your e-mail messages archived:\n\n* Functions returning sets don't really work right(see TODO.detail/functions)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 23 Sep 1999 11:31:04 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Progress report: buffer refcount bugs and SQL functions"
},
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n> Shouldn't a function returning a SET of tuples cause a proper\n> join?\n\nJoin on what? The semantics suggested by the existing regress tests\n(for lack of any actual documentation :-() certainly appear to be\nstraight Cartesian product.\n\nAnyway, I have no intention of spending more time on this feature now.\nThere's lots of higher-priority problems...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 23 Sep 1999 17:42:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Progress report: buffer refcount bugs and SQL functions"
}
] |
[
{
"msg_contents": "Me again,\n\n I'm just collecting info's for later. What I need to know is\n how PK/FK constraints are defined in the standard.\n\n Is it ALLWAYS the case, that a FK constraint refers to the PK\n of another table? Or could arbitraty attributes of another\n table be referenced by a FK too?\n\n Is it guaranteed that I find the PK definition of a table\n allways in the index <tablename>_pkey? If so it would be nice\n to ensure that an index with that name created manually is\n defined unique and/or cannot be created/dropped explicitly -\n this is important for RI.\n\n Another (my preferred) way would be to name the automatically\n created PK index something like \"pg_pkey_<tableoid>\". This\n would have the advantage that we never run out of 32 char\n limit on name and that the user cannot create/drop this index\n by hand.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 23 Sep 1999 04:17:37 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Another RI question"
}
] |
[
{
"msg_contents": "unsubscribe\n",
"msg_date": "Wed, 22 Sep 1999 22:45:55 -0700",
"msg_from": "Adam Haberlach <[email protected]>",
"msg_from_op": true,
"msg_subject": "None"
},
{
"msg_contents": ">>>unsubscribe\[email protected]: error: no such command 'unsubscribe'.\nTry 'unsubscribe please'\n \n> ************\n\n",
"msg_date": "Thu, 23 Sep 1999 13:42:32 +0500",
"msg_from": "Leon <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: "
}
] |
[
{
"msg_contents": "\nI tray create new function bye read from file.\n\nWhen I read this file I have errors\n\npgReadData()- backend closed the channel unexpectedly\n\nin may log file :\nFatal 1 : btree: cannot split if start (2) >= maxoff (2)\n\nor somethings like this:\nfatal 1: my bits moved right off the end of the world!\n\nPostgreSQL 6.5.1 on i686-pc-linux-gnu, compiled by gcc 2.7.2.31\non Debian Slink\n\nNeed help\n\n\nBest regards\n\n\n\n\n",
"msg_date": "Thu, 23 Sep 1999 10:05:15 +0200",
"msg_from": "Grzegorz =?iso-8859-2?Q?Prze=BCdziecki?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Problem with new function"
}
] |
[
{
"msg_contents": "> Is the regression test's expected output wrong, or am I \n> misunderstanding\n> what this query is supposed to do? Is there any \n> documentation anywhere\n> about how SQL functions returning multiple tuples are supposed to\n> behave?\n\nThey are supposed to behave somewhat like a view.\nNot all rows are necessarily fetched.\nIf used in a context that needs a single row answer,\nand the answer has multiple rows it is supposed to \nruntime elog. Like in:\n\nselect * from tbl where col=funcreturningmultipleresults();\n-- this must elog\n\nwhile this is ok:\nselect * from tbl where col in (select funcreturningmultipleresults());\n\nBut the caller could only fetch the first row if he wanted.\n\nThe nested notation is supposed to call the function passing it the tuple\nas the first argument. This is what can be used to \"fake\" a column\nonto a table (computed column). \nThat is what I use it for. I have never used it with a \nreturns setof function, but reading the comments in the regression test,\n-- mike needs advil and peet's coffee,\n-- joe and sally need hightops, and\n-- everyone else is fine.\nit looks like the results you expected are correct, and currently the \nwrong result is given.\n\nBut I think this query could also elog whithout removing substantial\nfunctionality. \n\nSELECT p.name, p.hobbies.name, p.hobbies.equipment.name FROM person p;\n\nActually for me it would be intuitive, that this query return one row per \nperson, but elog on those that have more than one hobbie or a hobbie that \nneeds more than one equipment. Those that don't have a hobbie should \nreturn name|NULL|NULL. A hobbie that does'nt need equipment name|hobbie|NULL.\n\nAndreas\n",
"msg_date": "Thu, 23 Sep 1999 10:07:24 +0200",
"msg_from": "Andreas Zeugswetter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Progress report: buffer refcount bugs and SQL functions"
},
{
"msg_contents": "Andreas Zeugswetter <[email protected]> writes:\n> That is what I use it for. I have never used it with a \n> returns setof function, but reading the comments in the regression test,\n> -- mike needs advil and peet's coffee,\n> -- joe and sally need hightops, and\n> -- everyone else is fine.\n> it looks like the results you expected are correct, and currently the \n> wrong result is given.\n\nYes, I have concluded the same (and partially fixed it, per my previous\nmessage).\n\n> Those that don't have a hobbie should return name|NULL|NULL. A hobbie\n> that does'nt need equipment name|hobbie|NULL.\n\nThat's a good point. Currently (both with and without my uncommitted\nfix) you get *no* rows out from ExecTargetList if there are any Iters\nthat return empty result sets. It might be more reasonable to treat an\nempty result set as if it were NULL, which would give the behavior you\nsuggest.\n\nThis would be an easy change to my current patch, and I'm prepared to\nmake it before committing what I have, if people agree that that's a\nmore reasonable definition. Comments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 23 Sep 1999 10:51:10 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Progress report: buffer refcount bugs and SQL functions"
}
] |
[
{
"msg_contents": "\n\nI tray create new function bye read from file.\n\nWhen I read this file I have errors\n\npgReadData()- backend closed the channel unexpectedly\n\nin may log file :\nFatal 1 : btree: cannot split if start (2) >= maxoff (2)\n\nor somethings like this:\nfatal 1: my bits moved right off the end of the world!\n\nPostgreSQL 6.5.1 on i686-pc-linux-gnu, compiled by gcc 2.7.2.31\non Debian Slink\n\nNeed help\n\n\nBest regards\n\n\n\n\n\n",
"msg_date": "Thu, 23 Sep 1999 10:11:55 +0200",
"msg_from": "Grzegorz =?iso-8859-2?Q?Prze=BCdziecki?=\n\t<[email protected]>",
"msg_from_op": true,
"msg_subject": "Problem with new function"
},
{
"msg_contents": "Grzegorz =?iso-8859-2?Q?Prze=BCdziecki?= <[email protected]> writes:\n> in may log file :\n> Fatal 1 : btree: cannot split if start (2) >= maxoff (2)\n> or somethings like this:\n> fatal 1: my bits moved right off the end of the world!\n\nI think we fixed some bugs in that area in 6.5.2 --- please update\nand see if the problem is still there.\n\nNote you will probably need to dump / drop / restore your database\nto get that index back into an uncorrupted state :-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 23 Sep 1999 10:53:10 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problem with new function "
},
{
"msg_contents": ">\n> Grzegorz =?iso-8859-2?Q?Prze=BCdziecki?= <[email protected]> writes:\n> > in may log file :\n> > Fatal 1 : btree: cannot split if start (2) >= maxoff (2)\n> > or somethings like this:\n> > fatal 1: my bits moved right off the end of the world!\n>\n> I think we fixed some bugs in that area in 6.5.2 --- please update\n> and see if the problem is still there.\n\n The pg_proc_prosrc_index (causing this failure) is still\n there. If I remember right, something on SQL functions\n requires this index to quickly find a particular function BY\n SOURCE. Seems a little screwed up.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 23 Sep 1999 17:13:19 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problem with new function"
}
] |
[
{
"msg_contents": "\nI write to you few minuts ego.\n\nI show you two files\n\n-----------------------------------1---------------------------------\n--/***************************************************************************/\n\n--! 23.09.1999 Warszawa\n--% Grzegorz Przezdziecki\n--@ [email protected];[email protected]\n--# CREATE FUNCTION fu_idklienci() RETURNS OPAQUE AS'\n--$ funkcja bedzie wywololywana przez triger w momencie dodania nowego\ntelefonu\n--$ do tablei telefonow klienta\n--/***************************************************************************/\n\n\n\n--/***************************************************************************/\n\n-- FUNKCJA PRZED INSERTEM DO TABELI telefon klienta\n--/***************************************************************************/\n\nCREATE FUNCTION fu_idklienci() RETURNS OPAQUE AS'\n DECLARE\n id int4;\n BEGIN\n\n--SPRAWDZAMY FIRME\n\n IF NEW.firma ISNULL THEN\n RAISE EXCEPTION ''Pole firma musi\nposiadac wartosc'';\n END IF;\n SELECT id_firmy INTO id FROM tb_firmy WHERE\nid_firmy = NEW.id_firma;\n IF NOT FOUND THEN\n RAISE EXCEPTION ''Brak firmy numer\n%'',NEW.id_firma;\n END If;\n\n\n NEW.ID_klienci:=nextval(''se_idklienci'');\n RETURN NEW;\n END;'\n LANGUAGE 'plpgsql';\n--/***************************************************************************/\n\n--/***************************************************************************/\n\nthis file makes errors\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nWe have lost the connection to the backend, so further processing is\nimpossible.\n Terminating.\nand in log file :\nSep 23 11:00:14 Databases logger: FATAL 1: btree: cannot split if start\n(2) >=\nmaxoff (2)\n\nsecond file is OK\n------------------------------------------2------------------------------------------\n\n--/***************************************************************************/\n\n--! 23.09.1999 Warszawa\n--% Grzegorz Przezdziecki\n--@ [email protected];[email protected]\n--# CREATE FUNCTION fu_idklienci() RETURNS OPAQUE AS'\n--$ funkcja bedzie wywololywana przez triger w momencie dodania nowego\ntelefonu\n--$ do tablei telefonow klienta\n--/***************************************************************************/\n\n\n\n--/***************************************************************************/\n\n-- FUNKCJA PRZED INSERTEM DO TABELI telefon klienta\n--/***************************************************************************/\n\nCREATE FUNCTION fu_idklienci() RETURNS OPAQUE AS'\n DECLARE\n id int4;\n BEGIN\n\n\n IF NEW.firma ISNULL THEN\n RAISE EXCEPTION ''Pole firma musi\nposiadac wartosc'';\n END IF;\n SELECT id_firmy INTO id FROM tb_firmy WHERE\nid_firmy = NEW.id_firma;\n IF NOT FOUND THEN\n RAISE EXCEPTION ''Brak firmy numer\n%'',NEW.id_firma;\n END If;\n\n\n NEW.ID_klienci:=nextval(''se_idklienci'');\n RETURN NEW;\n END;'\n LANGUAGE 'plpgsql';\n--/***************************************************************************/\n\n--/***************************************************************************/\n\n\n\nDiference is in line\n\n--SPRAWDZAMY FIRME\n\nI use\nZed editor\n[PostgreSQL 6.5.1 on i686-pc-linux-gnu, compiled by gcc 2.7.2.3]\non Debian\nkernel 2.0.36\n\nWhat is means this error\nSep 23 11:00:14 Databases logger: FATAL 1: btree: cannot split if start\n(2) >=\nmaxoff (2)\n\nBest regards\n\n",
"msg_date": "Thu, 23 Sep 1999 10:54:58 +0200",
"msg_from": "Grzegorz =?iso-8859-2?Q?Prze=BCdziecki?=\n\t<[email protected]>",
"msg_from_op": true,
"msg_subject": "Re Problem with new function"
},
{
"msg_contents": "> Diference is in line\n> --SPRAWDZAMY FIRME\n> What is means this error\n> Sep 23 11:00:14 Databases logger: FATAL 1: btree: cannot split\n> if start (2) >= maxoff (2)\n\nI'm guessing that you have exceeded the requirement that a tuple\n(index tuples only? but I don't know why this would be indexed) must\nfit on half of a page. Try taking out more whitespace (in particular\nthe large spaced indents), and perhaps you can put the comment back\nin.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 23 Sep 1999 13:46:20 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re Problem with new function"
}
] |
[
{
"msg_contents": "Hi!\n\nMy problem is that I am defining a user data type that stores some\ninformation in a large object (inversion object). If I compile the code\nof such data type as part of a stand alone program (comunicating with\nthe PostgreSQL server usign libpq), I am able to create the large object\nbut I get an error when trying to open it.\nAfter searching in recopilations of newsgroups emails, I know that in\nthis case what I need to do is to execute the \"begin\" command (using\npqexec()) before opening the large object, because any access to a large\nobject must be enclosed into a transaction. If I do so, I get my data\ntypes working propertly and being able to open the large objects that\nthey use.\nBut my problem is that I do not want to use my data types in a\nstand alone program, but added to the PostgreSQL server as new user data\ntypes. An the problem in this case is that I can not execute the \"begin\"\ncommand in the code of my data types in this context: first because it\nwould be definetly wrong if the code of the data type defines where\nshould start or end a transaction (the program accessing the database or\nthe user if working interactively are the only that should do it), and\nsecond because in this case the call to pgexec() with the command\n\"begin\" fails (with some error mesage saying that there was a parsing\nerror).\n\nSo, my question is: What do I need to do for solve this problen and\nbeing able to use large objects in my user data types, taking into\naccount that their code should be executed in the server side?\n\nThanks in advance,\n Tony.\n\n-- \nJose Antonio Cotelo Lema. | [email protected]\nPraktische Informatik IV. Fernuniversitaet Hagen.\nD-58084 Hagen. Germany.\n",
"msg_date": "Thu, 23 Sep 1999 11:13:02 +0200",
"msg_from": "Jose Antonio Cotelo lema <[email protected]>",
"msg_from_op": true,
"msg_subject": "Problems when opening large objects in the server side."
}
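
For the stand-alone libpq case described above, the working pattern looks roughly like this (a minimal sketch: the connection string and payload are placeholders, and error handling is abbreviated). The point is only that the lo_* calls sit between an explicit BEGIN and END; code running inside the backend is already within the current transaction, so the client-side BEGIN shown here has no direct server-side equivalent.

#include <stdio.h>
#include "libpq-fe.h"
#include "libpq/libpq-fs.h"     /* INV_READ, INV_WRITE */

int
main(void)
{
    PGconn   *conn = PQconnectdb("dbname=test");    /* placeholder */
    PGresult *res;
    Oid       loid;
    int       fd;

    if (PQstatus(conn) == CONNECTION_BAD)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    res = PQexec(conn, "BEGIN");    /* lo_open outside a transaction fails */
    PQclear(res);

    loid = lo_creat(conn, INV_READ | INV_WRITE);
    fd = lo_open(conn, loid, INV_WRITE);
    lo_write(conn, fd, "hello", 5);
    lo_close(conn, fd);

    res = PQexec(conn, "END");      /* commit the transaction */
    PQclear(res);

    PQfinish(conn);
    return 0;
}
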
] |
[
{
"msg_contents": "> Is it ALLWAYS the case, that a FK constraint refers to the PK\n> of another table? Or could arbitraty attributes of another\n> table be referenced by a FK too?\n\narbitrary (usually unique indexed) columns\n\n> Is it guaranteed that I find the PK definition of a table\n> allways in the index <tablename>_pkey?\n\nNo. I think there is a column in pg_index that marks a pk already.\n(for odbc) This would imho be the best way.\n\n> Another (my preferred) way would be to name the automatically\n> created PK index something like \"pg_pkey_<tableoid>\". This\n\nYou want to have the ability to:\n1. create table\n2. create unique index\n3. alter table add constraint primary key (uses existing index)\n\nThe automatic naming should be irrelevant. \n\nAndreas\n",
"msg_date": "Thu, 23 Sep 1999 11:20:38 +0200",
"msg_from": "Andreas Zeugswetter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Another RI question"
},
{
"msg_contents": "Andreas Zeugswetter wrote:\n\n>\n> > Is it ALLWAYS the case, that a FK constraint refers to the PK\n> > of another table? Or could arbitraty attributes of another\n> > table be referenced by a FK too?\n>\n> arbitrary (usually unique indexed) columns\n\n NOOOO! It will be too bad if the referenced PK isn't unique\n indexed! An ON DELETE CASCADE constraint will fire a trigger\n to delete all the rows where FK equals deleted PK. But this\n shouldn't happen if PK isn't guaranteed to be unique, instead\n it must check if another row with same PK still exists.\n\n And it is absolutely damned for the DELETE,INSERT situation.\n How should I be able to see that this happened and suppress\n the triggers on DELETE/INSERT though? I think I can't.\n\n Thus, the sequence\n\n BEGIN;\n DELETE PK;\n INSERT same PK\n COMMIT;\n\n where FK's with ON DELETE CASCADE exist will delete them if\n the constraint has been set to IMMEDIATE. No chance to\n prevent except we add a non-standard feature \"NOT\n IMMEDIATEABLE\" to constraints so these triggers will allways\n be fired at transaction commit.\n\n And the INITIAL DEFERRED trigger doing the ON DELETE CASCADE\n must check if at the time it's called really no such PK\n exists any more. These generic RI-trigger proc's will be\n sophisticated, man.\n\n>\n> > Is it guaranteed that I find the PK definition of a table\n> > allways in the index <tablename>_pkey?\n>\n> No. I think there is a column in pg_index that marks a pk already.\n> (for odbc) This would imho be the best way.\n\n Ah - yes. It's pg_index.indisprimary - thanks.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 23 Sep 1999 15:37:59 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Another RI question"
}
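
A minimal libpq sketch of the flag-based lookup Jan settled on: find a table's primary-key index through pg_index.indisprimary instead of relying on the <tablename>_pkey naming convention. The connection string and table name are placeholders, and error handling is abbreviated.

#include <stdio.h>
#include "libpq-fe.h"

int
main(void)
{
    PGconn   *conn = PQconnectdb("dbname=test");    /* placeholder */
    PGresult *res;

    res = PQexec(conn,
                 "SELECT ic.relname "
                 "FROM pg_index i, pg_class ic, pg_class tc "
                 "WHERE i.indexrelid = ic.oid "
                 "  AND i.indrelid = tc.oid "
                 "  AND tc.relname = 'mytable' "    /* placeholder table */
                 "  AND i.indisprimary");

    if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) == 1)
        printf("primary key index: %s\n", PQgetvalue(res, 0, 0));
    else
        printf("no primary key index found\n");

    PQclear(res);
    PQfinish(conn);
    return 0;
}
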
] |
[
{
"msg_contents": "> It's me who made the patch. Yutaka Tanida also provided a patch\n> for cygipc library to prevent lock freezing by changing the\n> implementation of semaphore.\n> These patches are necessary to prevent freezing in cygwin port. \n> \n> If there's no objection,I would add a new ipc.patch provided by \n> Yutaka into src/win32 and commit the patch for README.NT\n> for current tree.\n\nI still don't have any reaction from the cygipc author so we should include\nthe patch into pgsql sources.\n\n\t\t\tDan\n",
"msg_date": "Thu, 23 Sep 1999 12:33:36 +0200",
"msg_from": "Horak Daniel <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] IPC on win32 - additions for 6.5.2 and current tree\n\ts"
}
] |
[
{
"msg_contents": "Hi, all\n\nFor those of us who are working on the parser, and compiler, 'A Guide to Lex\n& Yacc' by Thomas Niemann should probably be prescribed reading (if you\nhaven't already read it). This is a really constructive guide to using the\ntwo together, and very informative.\n\nHowever, the most interesting part that I noticed is on the second page,\nunder the 'Other Titles' section. It's called 'Operator-Precedence\nParsing'. I haven't yet managed to get to it, because the web server (or my\nbrowser, I'm not sure yet) keeps hooching over the page, however, I'll put\nmoney on the fact that it will provide us with some insight into solving the\ncurrent operator problem(s?) that we have (see previous postings titled\n'Status Report: long query string changes' and \"Postgres' lexer\"). I will\ntry to get it. If anybody wants a copy and is too lazy to go to the web\nsite, let me know and I'll mail you a copy.\n\nMikeA\n",
"msg_date": "Thu, 23 Sep 1999 16:25:51 +0200",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Lexxing and yaccing..."
}
] |
[
{
"msg_contents": "One more question:\n\n I'm planning to create generic trigger procs for PK/FK stuff.\n So that it's simply insert/delete the appropriate pg_trigger\n entries during CREATE/ALTER table.\n\n Assuming NULL's are allowed in FK values (are they?), I'd\n like to know what the correct handling of NULL values is. If\n an attribute of the FK has the NULL value, must a PK with a\n NULL in the corresponding attribute exist or is this\n attribute completely left out of the WHERE clause in the\n check?\n\n Other way round - NULL value in attribute of referenced\n table. What to delete from FK in the case of ON DELETE\n CASCADE?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 23 Sep 1999 17:07:13 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "RI and NULL's"
},
{
"msg_contents": "At 17:07 23/09/99 +0200, you wrote:\n>\n> Assuming NULL's are allowed in FK values (are they?)\n\nI don't think they should be since two null fields are not equal, and the\nreason for FK constraints to to require the foregn record exists. Also, I'm\npretty sure PK values should not be null.\n\n> like to know what the correct handling of NULL values is. If\n> an attribute of the FK has the NULL value, must a PK with a\n> NULL in the corresponding attribute exist or is this\n> attribute completely left out of the WHERE clause in the\n> check?\n\nI don't think so. I believe PK values can't be null, so no FK field should\nbe null.\n\n> Other way round - NULL value in attribute of referenced\n> table. What to delete from FK in the case of ON DELETE\n> CASCADE?\n\nThis problem goes away.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Fri, 24 Sep 1999 09:29:51 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] RI and NULL's"
}
] |
[
{
"msg_contents": "1. run:\n\nvacuum auth;\n\n> real problem is with a ~6000 row database and a select * ... \n> order by query\n> which takes more than 5 sec. The same query runs for less \n> than 0.1 sec on mssql\n> :-((\n\nNo way you select 6000 rows in 0.1 sec with mssql, \nthat would be 60000 rows/sec.\nMaybe you mean the first few rows show in 0.1s, this is possible.\n\nIn PostgreSQL the order by alone currently does not use the index.\nTry:\n\tselect * from auth where uid >= 0 order by uid;\n\nif you only have positive uid's. This should use the index.\n\nAndreas\n",
"msg_date": "Thu, 23 Sep 1999 18:47:55 +0200",
"msg_from": "Andreas Zeugswetter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] [GENERAL] when are indexes used?"
}
] |
[
{
"msg_contents": "I am adding this to the TODO list.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nHi all,\n\nI could create a 9-key index.\n\ncreate table ix9 (\ni1 int4,\ni2 int4,\ni3 int4,\ni4 int4,\ni5 int4,\ni6 int4,\ni7 int4,\ni8 int4,\ni9 int4,\nprimary key (i1,i2,i3,i4,i5,i6,i7,i8,i9)\n);\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'ix9_pkey'\nfor table 'ix9'\nCREATE\n\n\\d ix9_pkey\n\nTable = ix9_pkey\n+----------------------------------+----------------------------------+-----\n--+\n| Field | Type |\nLength|\n+----------------------------------+----------------------------------+-----\n--+\n| i1 | int4 |\n4 |\n| i2 | int4 |\n4 |\n| i3 | int4 |\n4 |\n| i4 | int4 |\n4 |\n| i5 | int4 |\n4 |\n| i6 | int4 |\n4 |\n| i7 | int4 |\n4 |\n| i8 | int4 |\n4 |\n| i9 | int4 |\n4 |\n+----------------------------------+----------------------------------+-----\n--+\n\nIs it right ?\n\nRegards.\n\nHiroshi Inoue\[email protected]",
"msg_date": "Thu, 23 Sep 1999 13:35:51 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "[HACKERS] 9-key index ? (fwd)"
}
] |
[
{
"msg_contents": "=?iso-8859-2?Q?Daniel_P=E9der?= <[email protected]> writes:\n> YES!\n> that's what I'am looking for. But does anybody know HOW ?\n\n> Allows you to index only part of a table. Don't know any more.\n\nIt seems to be disabled in gram.y for some reason (no WHERE clause in\nCREATE INDEX anymore), which is odd since there's still an awful lot of\ncode to support the feature elsewhere.\n\nAnyone know who took this out and why?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 23 Sep 1999 17:34:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [SQL] partial indexes (indices) "
}
] |
[
{
"msg_contents": "Hi\n\nI will admit I am getting reeeeeallllly frustrated right now. Currently\npostgresql is crashing approximately once every 5 minutes on me\n\ntemplate1=> select version();\nversion \n-------------------------------------------------------------------\nPostgreSQL 6.5.2 on i686-pc-linux-gnu, compiled by gcc egcs-2.91.66\n(1 row)\n\nI am not doing anything except vary basic commands, things like inserts\nand updates and nothing involving too many expressions.\n\nNow, I know nobody can debug anything from what I have just said, but I\ncannot get a better set of bug reports. I CANT get postgres to send out debug\n\nFor example. I start it using:\n\n/usr/bin/postmaster -o \"-F -S 10240\" -d 3 -S -N 512 -B 3000 -D/var/lib/pgsql/data -o -F > /tmp/postmasterout 2> /tmp/postmastererr\n\n\nSpot in there I have -d 3 and redirect (this under /bin/sh) to /tmp\n\nNow, after repeated backend crashes, I have:\n\n[postgres@home bin]$ cat /tmp/postmastererr \nFindExec: found \"/usr/bin/postgres\" using argv[0]\nbinding ShmemCreate(key=52e2c1, size=31684608)\n[postgres@home bin]$ cat /tmp/postmasterout \n[postgres@home bin]$ \n\nOr exactly NOTHING\n\nThis is out of the box 6.5.2, no changes made, no changes made in the config\nexcept to make it install into the right place.\n\nI just need to get some debug, so I can actually report something. Am I\ndoing something very dumb, or SHOULD there be debug here and there isnt?\n\nI am about ready to pull my hair out over this. I NEED to have a stable\ndatabase, and crashing EVERY five minutes is not helping me at all {:-(\n\nAlso, I seem to remember that someone posted here that when one backend\ncrashed, it shouldnt close the other backends any more. Well, mine does.\n\nNOTICE: Message from PostgreSQL backend:\n The Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\n I have rolled back the current transaction and am going to terminate your database system connection and exit.\n Please reconnect to the database system and repeat your query.\n\nI am getting this about every five minutes. I wish I knew what was doing it.\nEven if the backend recovered and just perforned the query again, that\nwould be enough, the overhead of checking to see if the database has crashed\nEVERY TIME I start or finish performing a query is a huge overhead.\n\nI can appreciate that the backend that crashed cannot do this, but the others\nsurely can! Rollback and start again, instead of rollback and panic\n\nAppologies if I sound a bit stressed right now, I was under the impression\nI had tested my system, and so I opened it to the public, and now it\nis blowing up in my face BADLY.\n\nIf someone can tell me WHAT I am doing wrong with getting the debug info,\nplease please do! I am just watching it blow up again as we speak, and I\nmust get SOMETHING fixed asap\n\n\t\t\t\t\t~Michael\n",
"msg_date": "Fri, 24 Sep 1999 01:41:59 +0100 (BST)",
"msg_from": "Michael Simms <[email protected]>",
"msg_from_op": true,
"msg_subject": "Frustration"
},
{
"msg_contents": "Michael Simms <[email protected]> writes:\n> Now, I know nobody can debug anything from what I have just said, but\n> I cannot get a better set of bug reports. I CANT get postgres to send\n> out debug\n\n> /usr/bin/postmaster -o \"-F -S 10240\" -d 3 -S -N 512 -B 3000 -D/var/lib/pgsql/data -o -F > /tmp/postmasterout 2> /tmp/postmastererr\n\nDon't use the -S switch (the second one, not the one inside -o).\n\nLooking in postmaster.c, I see that causes it to redirect stdout/stderr\nto /dev/null (probably not too hot an idea, but that's doubtless been\nlike that for a *long* time). Instead launch with something like\n\n\tnohup postmaster switches... </dev/null >logfile 2>errfile &\n\nGood luck figuring out where the real problem is...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 23 Sep 1999 20:57:05 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Frustration "
},
{
"msg_contents": "> Good luck figuring out where the real problem is...\n> \n> \t\t\tregards, tom lane\n\nWell, thanks to tom, I know what was wrong, and I have found the problem,\nor one of them at least...\n\nFATAL: s_lock(0c9ef824) at bufmgr.c:1106, stuck spinlock. Aborting.\n\nOkee, that segment of code is, well, its some deep down internals that\nare as clear as mud to me.\n\nAnyone in the know have an idea what this does?\n\nJust to save you looking, it is included below.\n\nOne question, is that does postgresql Inc have a 'normal person' support\nlevel? I ask that cos I was planning on getting some of the commercial\nsupport, and whilst it is a reasonable price to pay for corporations or\npeople with truckloads of money, I am a humble developer with more\nexpenses than income, and $600 is just way out of my league {:-(\n\nIf not, fair enough, just thought Id ask cos the support I have had from\nthis list is excellent and I wanted to provide some payback to the\ndevelopoment group.\n\n\t\t\t\t~Michael\n\n/*\n * WaitIO -- Block until the IO_IN_PROGRESS flag on 'buf'\n * is cleared. Because IO_IN_PROGRESS conflicts are\n * expected to be rare, there is only one BufferIO\n * lock in the entire system. All processes block\n * on this semaphore when they try to use a buffer\n * that someone else is faulting in. Whenever a\n * process finishes an IO and someone is waiting for\n * the buffer, BufferIO is signaled (SignalIO). All\n * waiting processes then wake up and check to see\n * if their buffer is now ready. This implementation\n * is simple, but efficient enough if WaitIO is\n * rarely called by multiple processes simultaneously.\n *\n * ProcSleep atomically releases the spinlock and goes to\n * sleep.\n *\n * Note: there is an easy fix if the queue becomes long.\n * save the id of the buffer we are waiting for in\n * the queue structure. That way signal can figure\n * out which proc to wake up.\n */\n#ifdef HAS_TEST_AND_SET\nstatic void\nWaitIO(BufferDesc *buf, SPINLOCK spinlock)\n{\n SpinRelease(spinlock);\n S_LOCK(&(buf->io_in_progress_lock));\n S_UNLOCK(&(buf->io_in_progress_lock));\n SpinAcquire(spinlock);\n}\n",
"msg_date": "Fri, 24 Sep 1999 02:54:39 +0100 (BST)",
"msg_from": "Michael Simms <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Frustration"
},
{
"msg_contents": "Michael Simms <[email protected]> writes:\n> Well, thanks to tom, I know what was wrong, and I have found the problem,\n> or one of them at least...\n> FATAL: s_lock(0c9ef824) at bufmgr.c:1106, stuck spinlock. Aborting.\n> Okee, that segment of code is, well, its some deep down internals that\n> are as clear as mud to me.\n\nHmph. Apparently, some backend was waiting for some other backend to\nfinish reading a page in or writing it out, and gave up after deciding\nit had waited an unreasonable amount of time (~ 1 minute, which does\nseem plenty long enough). Probably, the I/O did in fact finish, but\nthe waiting backend didn't get the word for some reason.\n\nIs it possible that there's something wrong with the spinlock code on\nyour hardware? There are a bunch of different spinlock implementations\n(assembly code for various hardware) in include/storage/s_lock.h and\nbackend/storage/buffer/s_lock.c. Some of 'em might not be as well\ntested as others. But you're on PC hardware, right? I would've thought\nthat flavor of the code would be pretty well wrung out.\n\nAnother likely explanation is that there's something wrong in\nbufmgr.c's logic for setting and releasing the io_in_progress lock ---\nbut a quick look doesn't show any obvious error, and I would have\nthought we'd have found out about any such problem long since.\nSince we're not being buried in reports of stuck-spinlock errors,\nI'm guessing there is some platform-specific problem on your machine.\nNo good ideas what it is if it isn't a spinlock failure.\n\n(Finally, are you sure this is the *only* indication of trouble in\nthe logs? If a backend crashed while holding the spinlock, the other\nones would eventually die with complaints like this, but that wouldn't\nmake the spinlock code be at fault...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 24 Sep 1999 10:26:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Frustration "
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Tom Lane\n> Sent: Friday, September 24, 1999 11:27 PM\n> To: Michael Simms\n> Cc: [email protected]\n> Subject: Re: [HACKERS] Frustration\n>\n>\n> Michael Simms <[email protected]> writes:\n> > Well, thanks to tom, I know what was wrong, and I have found\n> the problem,\n> > or one of them at least...\n> > FATAL: s_lock(0c9ef824) at bufmgr.c:1106, stuck spinlock. Aborting.\n> > Okee, that segment of code is, well, its some deep down internals that\n> > are as clear as mud to me.\n>\n> Hmph. Apparently, some backend was waiting for some other backend to\n> finish reading a page in or writing it out, and gave up after deciding\n> it had waited an unreasonable amount of time (~ 1 minute, which does\n> seem plenty long enough). Probably, the I/O did in fact finish, but\n> the waiting backend didn't get the word for some reason.\n>\n\n[snip]\n\n>\n> Another likely explanation is that there's something wrong in\n> bufmgr.c's logic for setting and releasing the io_in_progress lock ---\n> but a quick look doesn't show any obvious error, and I would have\n> thought we'd have found out about any such problem long since.\n> Since we're not being buried in reports of stuck-spinlock errors,\n> I'm guessing there is some platform-specific problem on your machine.\n> No good ideas what it is if it isn't a spinlock failure.\n>\n\nDifferent from other spinlocks,io_in_progress spinlock is a per bufpage\nspinlock and ProcReleaseSpins() doesn't release the spinlock.\nIf an error(in md.c in most cases) occured while holding the spinlock\n,the spinlock would necessarily freeze.\n\nMichael Simms says\n\tERROR: cannot read block 641 of server\noccured before the spinlock stuck abort.\n\nProbably it is an original cause of the spinlock freeze.\n\nHowever I don't understand the following status of his machine.\n\nFilesystem 1k-blocks Used Available Use% Mounted on\n/dev/hda3 1109780 704964 347461 67% /\n/dev/hda1 33149 6140 25297 20% /boot\n/dev/hdc1 9515145 3248272 5773207 36% /home\n/dev/hdb1 402852 154144 227903 40% /tmp\n/dev/sda1 30356106785018642307 43892061535609608 0 100%\n/var/lib/pgsql\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Mon, 27 Sep 1999 09:13:38 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Frustration "
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> Different from other spinlocks,io_in_progress spinlock is a per bufpage\n> spinlock and ProcReleaseSpins() doesn't release the spinlock.\n> If an error(in md.c in most cases) occured while holding the spinlock\n> ,the spinlock would necessarily freeze.\n\nOooh, good point. Shouldn't this be fixed? If we don't fix it, then\na disk I/O error will translate to an installation-wide shutdown and\nrestart as soon as some backend tries to touch the locked page (as\nindeed was happening to Michael). That seems a tad extreme.\n\n> Michael Simms says\n> \tERROR: cannot read block 641 of server\n> occured before the spinlock stuck abort.\n> Probably it is an original cause of the spinlock freeze.\n\nI seem to have missed the message containing that bit of info,\nbut it certainly suggests that your diagnosis is correct.\n\n> However I don't understand the following status of his machine.\n> /dev/sda1 30356106785018642307 43892061535609608 0 100%\n\nNow that we know the root problem was disk driver flakiness, I think\nwe can write that off as Not Our Fault ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 27 Sep 1999 09:20:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Frustration "
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> Sent: Monday, September 27, 1999 10:20 PM\n> To: Hiroshi Inoue\n> Cc: Michael Simms; [email protected]\n> Subject: Re: [HACKERS] Frustration \n> \n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> > Different from other spinlocks,io_in_progress spinlock is a per bufpage\n> > spinlock and ProcReleaseSpins() doesn't release the spinlock.\n> > If an error(in md.c in most cases) occured while holding the spinlock\n> > ,the spinlock would necessarily freeze.\n> \n> Oooh, good point. Shouldn't this be fixed? If we don't fix it, then\n\nYes,it's on TODO.\n* spinlock stuck problem when elog(FATAL) and elog(ERROR) inside bufmgr\n\nI would try to fix it.\n \n> a disk I/O error will translate to an installation-wide shutdown and\n> restart as soon as some backend tries to touch the locked page (as\n> indeed was happening to Michael). That seems a tad extreme.\n> \n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Tue, 28 Sep 1999 09:42:31 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Frustration "
}
] |
[
{
"msg_contents": "Do we have problems with int8 indexes? Seems select on an int8 does\nnot use an index.\n\nThis is PostgreSQL 6.5.2 on RedHat 6.0.\n--\nTatsuo Ishii\n",
"msg_date": "Fri, 24 Sep 1999 15:12:17 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "int8 and index"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Tatsuo Ishii\n> Sent: Friday, September 24, 1999 3:12 PM\n> To: [email protected]\n> Subject: [HACKERS] int8 and index\n> \n> \n> Do we have problems with int8 indexes? Seems select on an int8 does\n> not use an index.\n>\n\nHow about select .. from .. where .. = ..::int8; ?\n\nWithout ::int8 PostgreSQL doesn't use int8 indexes.\n\nRegards.\n\nHiroshi Inoue\[email protected] \n",
"msg_date": "Fri, 24 Sep 1999 15:32:02 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] int8 and index"
},
{
"msg_contents": ">> Do we have problems with int8 indexes? Seems select on an int8 does\n>> not use an index.\n>>\n>\n>How about select .. from .. where .. = ..::int8; ?\n>\n>Without ::int8 PostgreSQL doesn't use int8 indexes.\n\nOops. I forgot about that! Thanks.\n--\nTatsuo Ishii\n",
"msg_date": "Fri, 24 Sep 1999 15:36:37 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] int8 and index "
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n>> How about select .. from .. where .. = ..::int8; ?\n>> \n>> Without ::int8 PostgreSQL doesn't use int8 indexes.\n\n> Oops. I forgot about that! Thanks.\n\nYes, this is on the TODO list (although I think TODO just mentions\nthe equivalent problem for int2).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 24 Sep 1999 11:08:03 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] int8 and index "
},
{
"msg_contents": "> Do we have problems with int8 indexes? Seems select on an int8 does\n> not use an index.\n> \n> This is PostgreSQL 6.5.2 on RedHat 6.0.\n\nThat is strange. We have code to make indexes on int8 now.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 26 Sep 1999 20:05:31 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] int8 and index"
},
{
"msg_contents": "> Tatsuo Ishii <[email protected]> writes:\n> >> How about select .. from .. where .. = ..::int8; ?\n> >> \n> >> Without ::int8 PostgreSQL doesn't use int8 indexes.\n> \n> > Oops. I forgot about that! Thanks.\n> \n> Yes, this is on the TODO list (although I think TODO just mentions\n> the equivalent problem for int2).\n> \n\nint8 mention added.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 26 Sep 1999 20:47:04 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] int8 and index"
},
{
"msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n> Sent: Monday, September 27, 1999 9:47 AM\n> To: Tom Lane\n> Cc: [email protected]; Hiroshi Inoue; [email protected]\n> Subject: Re: [HACKERS] int8 and index\n> \n> \n> > Tatsuo Ishii <[email protected]> writes:\n> > >> How about select .. from .. where .. = ..::int8; ?\n> > >> \n> > >> Without ::int8 PostgreSQL doesn't use int8 indexes.\n> > \n> > > Oops. I forgot about that! Thanks.\n> > \n> > Yes, this is on the TODO list (although I think TODO just mentions\n> > the equivalent problem for int2).\n> > \n> \n> int8 mention added.\n>\n\nThere may be a little difference.\n\nint4 -> int8 never fails.\nBut int4 -> int2 fails if abs(int4) > 32768.\n\nselect .. from .. where int2_column = 32769;\n\n\tshould return 0 rows or cause an elog(ERROR) ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Mon, 27 Sep 1999 10:08:55 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] int8 and index"
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> int4 -> int8 never fails.\n> But int4 -> int2 fails if abs(int4) > 32768.\n\n> select .. from .. where int2_column = 32769;\n\n> \tshould return 0 rows or cause an elog(ERROR) ?\n\nShould return 0 rows, clearly. (That's what happens now, and I can\nsee no justification for doing otherwise.) When we add code to try to\ncoerce the constant to match the type of the column, we will have to\nwatch out for overflow and not do the coercion if so.\n\nWhat would be really way cool would be if the constant simplifier could\nrecognize that this condition is a constant FALSE, but that would\nprobably mean building in more knowledge about the semantics of\nspecific operators than is justified...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 27 Sep 1999 09:29:31 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] int8 and index "
}
] |
[
{
"msg_contents": "This modification eleminates the case described below:\nIf PostgreSQL (probaly with Linux) was shut down wrong way (power off, any other damage, kill, etc... ) it left opened the file(socket) /tmp/.s.PGSQL.5432 . It is found by Postmaster's next start with message:\n===\nStarting postgresql service: FATAL: StreamServerPort: bind() failed: errno=98\n Is another postmaster already running on that port?\n If not, remove socket node (/tmp/.s.PGSQL.<portnr>)and retry.\n/usr/bin/postmaster: cannot create UNIX stream port\n===\nso, You are in situation that Linux was completely started, all services are running ok except the postgres - and if You miss it, Your server can be running hours without poperly serving the database. \n\n= = = = = \n\nIf You find it usefull, make following modification to the file:\n\n\t/etc/rc.d/init.d/postgresql\n\non Your RedHat Linux (other Linux or Unix versions will probably need some changes in locations, filenames etc...)\nBegin of the changed lines is marked\n # [B] added by [email protected]\nEnd is marked\n # [E] added by [email protected]\n\nother text stuff was left for Your better orientation where put the changes...\n\nso here we are:\n======================================================\n#!/bin/sh\n...\n...\n[ -f /usr/bin/postmaster ] || exit 0\n\n# See how we were called.\ncase \"$1\" in\n start)\n # [B] added by [email protected]\n echo -n \"Checking status of last postgresql service shutdown: \"\n psql_socket=\"/tmp/.s.PGSQL.5432\"\n if [ -e $psql_socket -a \"`pidof postmaster`\" = \"\" ]; then\n rm -f $psql_socket\n echo \"incorrect\"\n else\n echo \"correct\"\n fi\n # [E] added by [email protected]\n\n echo -n \"Starting postgresql service: \"\n su postgres -c '/usr/bin/postmaster -S -D/var/lib/pgsql'\n sleep 1\n pid=`pidof postmaster`\n echo -n \"postmaster [$pid]\"\n touch /var/lock/subsys/postmaster\n echo\n ;;\n stop)\n echo -n \"Stopping postgresql service: \"\n killproc postmaster\n sleep 2\n rm -f /var/lock/subsys/postmaster\n echo\n ;;...\n...\n======================================================\n\n\n--\n\[email protected]\nDaniel Peder\nhttp://shop.culture.cz\n",
"msg_date": "Fri, 24 Sep 1999 11:45:32 +0200",
"msg_from": "=?iso-8859-2?Q?Daniel_P=E9der?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "postgres startup script modification (Linux RedHat)"
}
] |
[
{
"msg_contents": "> -----Original Message-----\n> From: Hiroshi Inoue [mailto:[email protected]]\n> Sent: Wednesday, September 22, 1999 7:12 PM\n> To: Tom Lane\n> Cc: pgsql-hackers\n> Subject: RE: [HACKERS] couldn't rollback cache ? \n> \n\nI thought about the way which neither calls HeapTupleSatis-\nfies() in SearchSysCache() nor invalidates syscache entry\nby OID.\n\nIn this case,we would need the information as follows.\n\n1. To_be_rollbacked info for the backend\n A list of being inserted system tuples.\n This list is held till end of transaction.\n In case of commit,this list is ignored and discarded.\n In case of rollback,tuples inserted after the specified\n savepoint are rollbacked and discarded. Syscache\n and relcache entries for the backend which correspond\n to the tuples are invalidated.\n\n2, To_be_invalidated info for the backend\n A list of being deleted system tuples.\n This list is discarded at every command.\n In case of rollback this list is ignored.\n Otherwise,syscache and relcache entries for the backend\n which correspond to the tuples in this list are invalidated\n before execution of each command. \n\n3. To_be_invalidated info for other backends\n A list of being deleted system tuples.\n This list is held till end of transaction.\n In case of commit,this list is sent to other backends and\n discarded.\n In case of rollback,tuples deleted after the specified savepoint\n are discarded.\n\n4. Immediate registrarion of to_be_invalidated relcache for all backends\n Currently SI messages aren't sent in case of elog(ERROR/FATAL).\n Seems the following commands have to register relcache invali-\n dation for all backends just when we call smgrunlink()/smgrtruncate(). \n\n DROP TABLE/INDEX\n TRUNCATE TABLE(implemented by Mike Mascari)\n VACUUM\n\nComments ?\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Fri, 24 Sep 1999 18:45:38 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] couldn't rollback cache ? "
}
] |
[
{
"msg_contents": "1) Is it just me or is psql the only application that uses libpq's\nPQprint()? I think both parties involved could benefit if the PQprint was\nmoved to or integrated into psql. Or perhaps a libpqprint as a compromise?\n\n2) Regarding TODO item \"Allow psql \\copy to allow delimiters\": What\nprecisely is the difference between:\n=> \\t\n=> \\o file\n=> select * from my_table;\nand\n=> \\copy my_table to file\nor, for that matter,\n=> copy my_table to 'file';\nbesides perhaps their internal execution path? The third variant already\nallows the use of delimiters (USING DELIMITERS '*'), and so does the first\none (\\f). (Speaking of which, does anyone know how to enter in effect \\f\n<TAB>?)\n\nCorrect me if I'm wrong, but I believe the use of PG{get|put}line() for\nthe \\copy would have to be scratched if one would want to use delimiters.\n\n3) Is anyone doing anything major on psql right now or would anyone mind\nif I undertake a major code clean up on it? Or is everyone completely\ncomfortable with 350-line functions with 7 levels of indentation?\n\n-- \nPeter Eisentraut - [email protected]\nhttp://yi.org/peter-e\n\n\n",
"msg_date": "Fri, 24 Sep 1999 12:38:19 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "psql issues"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> 1) Is it just me or is psql the only application that uses libpq's\n> PQprint()? I think both parties involved could benefit if the PQprint was\n> moved to or integrated into psql. Or perhaps a libpqprint as a compromise?\n\nThe print support in libpq is certainly ugly --- we've got two or three\ngenerations of print subroutines in there, and are maintaining 'em all\nbecause we have no idea what existing applications may depend on each.\nI'd be real hesitant to rip any of them out. However, if you can\nimprove on them, a new fourth-generation subroutine isn't going to\nhurt anyone ;-).\n\nI'm not sure whether moving them to a separate library would be worth\nthe trouble. It might be worth breaking up fe-print.c more, so that\na statically linked app will only pull in the subroutines it's actually\nusing. But for users of shared libraries this doesn't matter anyway.\n\n\n> 2) Regarding TODO item \"Allow psql \\copy to allow delimiters\": What\n> precisely is the difference between:\n\n> => \\copy my_table to file\n\n> => copy my_table to 'file';\n\nThose two are *very significantly* different: the former reads or writes\na file from psql, using the client's access rights (after transporting\nthe data across the frontend/backend channel, of course). The latter\nreads or writes a file from the backend, using the backend's access\nrights (and the psql process never even sees the data).\n\nThe two processes are not necessarily even on the same machine, so you\nmay be talking about two completely different filesystems. We restrict\nbackend copy to the Postgres superuser for obvious security reasons.\nTherefore, it'd be real nice if psql's \\copy support was more complete.\n\n> Correct me if I'm wrong, but I believe the use of PG{get|put}line() for\n> the \\copy would have to be scratched if one would want to use delimiters.\n\nNo. get/putline are just the implementation of the data transport step\nmentioned above. If psql would send a DELIMITER clause in the COPY TO\nSTDIN or COPY FROM STDOUT command that it sends to the backend to start\na \\copy operation, then the right things would happen. Should be a\npretty localized change. There might be some other COPY options that\nwould be worth supporting ... I forget.\n\nBTW, I suspect that there may be some bugs in get/putline and/or psql.c\nand/or the backend's copy.c that cause the data transport not to be\nperfectly 8-bit-clean. In particular, backslash quoting of control\ncharacters needs to be looked at. There is (or should be) a TODO item\nabout this. If you feel like digging into that area, it'd be useful.\n\n\n> 3) Is anyone doing anything major on psql right now or would anyone mind\n> if I undertake a major code clean up on it? Or is everyone completely\n> comfortable with 350-line functions with 7 levels of indentation?\n\nGo for it --- it's pretty ugly all right, and far from the fine example\nof how to code a Postgres client that it ought to be ;-).\n\nMake sure you start from current CVS sources, because I just finished\nhacking up psql (and libpq) to eliminate line length restrictions.\nOffhand I don't know of any other major changes pending in that code.\n(Anybody want to speak up here and say \"I'm doing something\"?)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 24 Sep 1999 11:29:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql issues "
},
{
"msg_contents": "> Peter Eisentraut <[email protected]> writes:\n> > 1) Is it just me or is psql the only application that uses libpq's\n> > PQprint()? I think both parties involved could benefit if the PQprint was\n> > moved to or integrated into psql. Or perhaps a libpqprint as a compromise?\n> \n> The print support in libpq is certainly ugly --- we've got two or three\n> generations of print subroutines in there, and are maintaining 'em all\n> because we have no idea what existing applications may depend on each.\n> I'd be real hesitant to rip any of them out. However, if you can\n> improve on them, a new fourth-generation subroutine isn't going to\n> hurt anyone ;-).\n\nLet me add something. I have no problem with #ifdef NOT_USED certain\nfunction bodies, and replacing them with something else like this:\n\n\n\tint libfunc()\n\t{\n\t#ifdef NOT_USED\n\t\told_lib_code\n\t\t...\n\t#else\n\t\tfprintf(stderr,\"This function is currently unsupported.\\n\");\n\t\tfprintf(stderr,\"If you want to use it, contact the bugs mailing list.\\n\");\n\t\texit(1);\n\t#endif\n\nand if we can get through one full release with the code like this, we\ncan remove the function entirely.\n\nThis seems to be the only clean way to remove much old cruft in library\ncode.\n\nI am sure some of the old code was for the old pgsql 'monitor' program\nthat we trashed early on, so I doubt people are using any of that print\ncode.\n\n> \n> I'm not sure whether moving them to a separate library would be worth\n> the trouble. It might be worth breaking up fe-print.c more, so that\n> a statically linked app will only pull in the subroutines it's actually\n> using. But for users of shared libraries this doesn't matter anyway.\n> \n\nI agree. Keep it in libpq because it may be useful for someone else.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 26 Sep 1999 20:17:11 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql issues"
}
] |
[
{
"msg_contents": "> > > There is a parser bug someone introduced recently which we will fix\n> > > for v6.5.3, but I can give you a patch file for this on v6.5.2. I'll\n> > > develop it in the next couple of days.\n> > Is it a showstopper?? Send the patch anyway, of course.\n\n\"showstopper\" in the sense that it was not in v6.5.1? Apparently not.\nBut it causes a couple of math operators to not be recognized as\noperators in some situations (like when defining a new operator :/\n\nHere is a patch. *Not* tested under v6.5.1 or .2, but *all* of the\nchanges were tested under the current development tree. Since it\npatches gram.y, it will cause gram.c to be rebuilt, which we usually\ntry to avoid but only because not everyone has bison/flex installed.\nThat isn't the case for your RH system.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California",
"msg_date": "Fri, 24 Sep 1999 14:54:58 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL Upgrade Procedure"
}
] |
[
{
"msg_contents": "This modification eleminates the case described below:\nIf PostgreSQL (probaly with Linux) was shut down wrong way (power off, any other damage, kill, etc... ) it left opened the file(socket) /tmp/.s.PGSQL.5432 . It is found by Postmaster's next start with message:\n===\nStarting postgresql service: FATAL: StreamServerPort: bind() failed: errno=98\n Is another postmaster already running on that port?\n If not, remove socket node (/tmp/.s.PGSQL.<portnr>)and retry.\n/usr/bin/postmaster: cannot create UNIX stream port\n===\nso, You are in situation that Linux was completely started, all services are running ok except the postgres - and if You miss it, Your server can be running hours without poperly serving the database. \n\n= = = = = \n\nIf You find it usefull, make following modification to the file:\n\n\t/etc/rc.d/init.d/postgresql\n\non Your RedHat Linux (other Linux or Unix versions will probably need some changes in locations, filenames etc...)\nBegin of the changed lines is marked\n # [B] added by [email protected]\nEnd is marked\n # [E] added by [email protected]\n\nother text stuff was left for Your better orientation where put the changes...\n\nso here we are:\n======================================================\n#!/bin/sh\n...\n...\n[ -f /usr/bin/postmaster ] || exit 0\n\n# See how we were called.\ncase \"$1\" in\n start)\n # [B] added by [email protected]\n echo -n \"Checking status of last postgresql service shutdown: \"\n psql_socket=\"/tmp/.s.PGSQL.5432\"\n if [ -e $psql_socket -a \"`pidof postmaster`\" = \"\" ]; then\n rm -f $psql_socket\n echo \"incorrect\"\n else\n echo \"correct\"\n fi\n # [E] added by [email protected]\n\n echo -n \"Starting postgresql service: \"\n su postgres -c '/usr/bin/postmaster -S -D/var/lib/pgsql'\n sleep 1\n pid=`pidof postmaster`\n echo -n \"postmaster [$pid]\"\n touch /var/lock/subsys/postmaster\n echo\n ;;\n stop)\n echo -n \"Stopping postgresql service: \"\n killproc postmaster\n sleep 2\n rm -f /var/lock/subsys/postmaster\n echo\n ;;...\n...\n======================================================\n\n\n--\n\[email protected]\nDaniel Peder\nhttp://shop.culture.cz\n",
"msg_date": "Fri, 24 Sep 1999 17:26:02 +0200",
"msg_from": "=?iso-8859-2?Q?Daniel_P=E9der?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "postgres startup script modification (Linux RedHat)"
}
] |
[
{
"msg_contents": "OK, thanks to some probing by Hiroshi, and by the fact that it became\nutterly blatantly obvious, I can state as a fact that postgresql was NOT\nresponsible for the crashes I was seeing last night.\n\nI woke up this morning, intent on finding SOME solution, and I found this\n\nFilesystem 1k-blocks Used Available Use% Mounted on\n/dev/hda3 1109780 704964 347461 67% /\n/dev/hda1 33149 6140 25297 20% /boot\n/dev/hdc1 9515145 3248272 5773207 36% /home\n/dev/hdb1 402852 154144 227903 40% /tmp\n/dev/sda1 30356106785018642307 43892061535609608 0 100% /var/lib/pgsql\n\nNow, I thought to myself, either my 9.2GB drive has become the size of a\nsmall country, or I have a problem.\n\nMuch probing has revealed to me that the nice adapted U2W scsi card that I\ninstalled, has problems under Linux SMP kernels.\n\nAs such, I wave the penguin of shame at Adaptec for shoddy drivers. And\nI declare that I once again find postgres warm and fuzzy and huggable {:-)\n\n ~Michael\n\nps. Before the problem became too obvious, Hiroshi sent me the following.\nIt may be useful, I have no idea what it does {:-))\n\n------------------------------------\n\nDisk full error while writing a block may be one of the cause.\nAs far as I see,error on block write isn't handled correctly in md.c.\n\nThe following patch may help you.\nHowever I didn't test the patch at all.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n*** storage/smgr/md.c.orig Fri Sep 24 15:01:29 1999\n--- storage/smgr/md.c Fri Sep 24 18:07:31 1999\n***************\n*** 243,248 ****\n--- 243,255 ----\n if ((pos = FileSeek(v->mdfd_vfd, 0L, SEEK_END)) < 0)\n return SM_FAIL;\n \n+ if (pos % BLCKSZ != 0)\n+ {\n+ pos = BLCKSZ * (pos / BLCKSZ);\n+ if (FileSeek(v->mdfd_vfd, pos, SEEK_SET) != pos)\n+ return SM_FAIL;\n+ }\n+ \n if (FileWrite(v->mdfd_vfd, buffer, BLCKSZ) != BLCKSZ)\n return SM_FAIL;\n \n***************\n*** 1060,1065 ****\n {\n long len;\n \n! len = FileSeek(file, 0L, SEEK_END) - 1;\n! return (BlockNumber) ((len < 0) ? 0 : 1 + len / blcksz);\n }\n--- 1067,1072 ----\n {\n long len;\n \n! len = FileSeek(file, 0L, SEEK_END);\n! return (BlockNumber) (len / blcksz);\n }\n",
"msg_date": "Fri, 24 Sep 1999 19:42:56 +0100 (BST)",
"msg_from": "Michael Simms <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Frustrated"
}
] |
[
{
"msg_contents": "Hi!\n\nHow do you profile backend? It complains about 'profile timer expired'\napparently due to waiting on socket. Maybe some compile option is \nmissing or there is other trick?\n\n-- \nLeon.\n-------\nHe knows he'll never have to answer for any of his theories actually \nbeing put to test. If they were, they would be contaminated by reality.\n\n",
"msg_date": "Sat, 25 Sep 1999 20:14:45 +0500",
"msg_from": "Leon <[email protected]>",
"msg_from_op": true,
"msg_subject": "Profiling?"
}
] |
[
{
"msg_contents": "\nsomething appears to have messed up last night, but have been unable to\nfind cause...all mai lgoes to the archive, but leaves the system as blank\nemail? :(\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sat, 25 Sep 1999 12:52:20 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "a test ..."
}
] |
[
{
"msg_contents": "As authorized by Tom L. himself and barring any loud protests I will start\na code clean-up on psql sometime within the next few days. So if you are\ndoing something on it or plan on doing so, please let me know. Among the\nthings on the agenda are:\n\n* Reduce function size, indentation levels, file sizes\n\n* Make use of latest libpq facilities\n\n* Take care of various TODO items, such as NULL display (perhaps the one\nrecently submitted by me?), \\copy issues, possibly more.\n\n* Allow for implementation of a more sophisticated readline TAB\ncompletion. (Not necessarily the one I recently sent in, if you can come\nup with a better one. I'll try to keep it general.)\n\n* Have tables vs. views show up correctly, as explained to me by Jan.\n\n* Various enhancements on HTML display.\n\n* A full bag of other ideas which will probably have to be postponed.\n\nI am also tempted to drop the use of libpq's PQprint so that it can be\nphased out, since I suspect hardly anyone else uses it and no one is\nreally happy with it. One could perhaps put a big #ifdef around that code\nthen. We'll see what happens. If this development would break your heart,\nplease yell.\n\nAny other suggestions before I lock myself into my room are welcome.\n\nPeter\n\n-- \nPeter Eisentraut - [email protected]\nhttp://yi.org/peter-e\n\n\n",
"msg_date": "Sat, 25 Sep 1999 21:45:40 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "psql code to be obducted by alien (me)"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> I am also tempted to drop the use of libpq's PQprint so that it can be\n> phased out, since I suspect hardly anyone else uses it and no one is\n> really happy with it.\n\nIf you write new printing code for psql, please consider making it a\nseparate module that could be included into libpq so other applications\ncan use it.\n\nAnother part of psql that should be made as independent as possible\nis the support for \\copy. I recall a number of people asking in the\npast how they can read and write tables to files in their own apps.\nThere's not that much code involved, but psql is such a mess that it's\nhard to point to a chunk of code they can borrow.\n\nBTW, something closely related to \\copy that's languishing on the TODO\nlist is the ability to load the contents of a local file into a Large\nObject or write the data out again. This would be the equivalent of the\nserver-side operations lo_import and lo_export, but reading or writing a\nfile in psql's environment instead of the backend's. Basically a wrapper\naround lo_read/lo_write, not much to it but it needs done...\n\nAnyway, I guess the point of all this is that psql should be not only\na useful app in its own right, but a source of how-to examples and\nborrowable code for people making their own client apps. I think you\nwill move it a long way in that direction just by doing cleanup, but\nplease keep the notion in mind while you work.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 25 Sep 1999 18:57:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql code to be obducted by alien (me) "
},
{
"msg_contents": "\nHappy coding... :-)\n\n\n> As authorized by Tom L. himself and barring any loud protests I will start\n> a code clean-up on psql sometime within the next few days. So if you are\n> doing something on it or plan on doing so, please let me know. Among the\n> things on the agenda are:\n> \n> * Reduce function size, indentation levels, file sizes\n> \n> * Make use of latest libpq facilities\n> \n> * Take care of various TODO items, such as NULL display (perhaps the one\n> recently submitted by me?), \\copy issues, possibly more.\n> \n> * Allow for implementation of a more sophisticated readline TAB\n> completion. (Not necessarily the one I recently sent in, if you can come\n> up with a better one. I'll try to keep it general.)\n> \n> * Have tables vs. views show up correctly, as explained to me by Jan.\n> \n> * Various enhancements on HTML display.\n> \n> * A full bag of other ideas which will probably have to be postponed.\n> \n> I am also tempted to drop the use of libpq's PQprint so that it can be\n> phased out, since I suspect hardly anyone else uses it and no one is\n> really happy with it. One could perhaps put a big #ifdef around that code\n> then. We'll see what happens. If this development would break your heart,\n> please yell.\n> \n> Any other suggestions before I lock myself into my room are welcome.\n> \n> Peter\n> \n> -- \n> Peter Eisentraut - [email protected]\n> http://yi.org/peter-e\n> \n> \n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 26 Sep 1999 20:24:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql code to be obducted by alien (me)"
}
] |
[
{
"msg_contents": "Hi!\n\nHow do you profile backend? It complains about 'profile timer expired'\napparently due to waiting on socket. Maybe some compile option is \nmissing or there is other trick?\n\n-- \nLeon.\n-------\nHe knows he'll never have to answer for any of his theories actually \nbeing put to test. If they were, they would be contaminated by reality.\n",
"msg_date": "Sun, 26 Sep 1999 01:43:14 +0500",
"msg_from": "Leon <[email protected]>",
"msg_from_op": true,
"msg_subject": "Profiling?"
},
{
"msg_contents": "Yes, I have done it many times. I profile that postgres process, not\nthe backend. Look for this in Makefile.global:\n\n\t# Comment out PROFILE to generate a profile version of the binaries\n\t#PROFILE= -p -non_shared\n\n\n> Hi!\n> \n> How do you profile backend? It complains about 'profile timer expired'\n> apparently due to waiting on socket. Maybe some compile option is \n> missing or there is other trick?\n> \n> -- \n> Leon.\n> -------\n> He knows he'll never have to answer for any of his theories actually \n> being put to test. If they were, they would be contaminated by reality.\n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 26 Sep 1999 20:25:58 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Profiling?"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Yes, I have done it many times. I profile that postgres process, not\n> the backend. \n\nHmm, isn't Postgres process called backend process? AFAIK postmaster\n(it's simply a nickname to Postgres) forks itself on receiveing new\nconnection request. Isn't it true? I mean that Postmaster is the same \nbinary as Postgres itself.\n\n> Look for this in Makefile.global:\n> \n> # Comment out PROFILE to generate a profile version of the binaries\n> #PROFILE= -p -non_shared\n> \n\nOf course I used -p option. The problem is, when I start Postmaster,\nit complains about 'profile timer expired'. What do I do wrong?\n\n-- \nLeon.\n-------\nHe knows he'll never have to answer for any of his theories actually \nbeing put to test. If they were, they would be contaminated by reality.\n\n",
"msg_date": "Mon, 27 Sep 1999 12:54:02 +0500",
"msg_from": "Leon <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Profiling?"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> > \n> > Yes, I have done it many times. I profile that postgres process, not\n> > the backend. \n> \n> Hmm, isn't Postgres process called backend process? AFAIK postmaster\n> (it's simply a nickname to Postgres) forks itself on receiveing new\n> connection request. Isn't it true? I mean that Postmaster is the same \n> binary as Postgres itself.\n\nYes.\n\n> \n> > Look for this in Makefile.global:\n> > \n> > # Comment out PROFILE to generate a profile version of the binaries\n> > #PROFILE= -p -non_shared\n> > \n> \n> Of course I used -p option. The problem is, when I start Postmaster,\n> it complains about 'profile timer expired'. What do I do wrong?\n\nThat's strange. I have not profiled in a while. I wonder if the\nremoval of the exec() has caused this.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 27 Sep 1999 11:33:33 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Profiling?]"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n> > >\n> > > # Comment out PROFILE to generate a profile version of the binaries\n> > > #PROFILE= -p -non_shared\n> > >\n> >\n> > Of course I used -p option. The problem is, when I start Postmaster,\n> > it complains about 'profile timer expired'. What do I do wrong?\n> \n> That's strange. I have not profiled in a while. I wonder if the\n> removal of the exec() has caused this.\n\nBTW, gcc on Linux doesn't understand the option -non_shared. Maybe\nthat is the cause? In general, who and where should I talk to on \nthat strange matter? Info and man seem not to say anything about it.\n\n-- \nLeon.\n-------\nHe knows he'll never have to answer for any of his theories actually \nbeing put to test. If they were, they would be contaminated by reality.\n\n",
"msg_date": "Mon, 27 Sep 1999 21:23:14 +0500",
"msg_from": "Leon <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Profiling?]"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Of course I used -p option. The problem is, when I start Postmaster,\n>> it complains about 'profile timer expired'. What do I do wrong?\n\n> That's strange. I have not profiled in a while. I wonder if the\n> removal of the exec() has caused this.\n\nI've done profiles successfully (with gcc -pg + gprof on HPUX) since\nthe exec change. I think Leon is running into some sort of platform-\nspecific profiler bug, but I dunno how to get around it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 27 Sep 1999 19:37:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Profiling?] "
}
] |
[
{
"msg_contents": "I am just about to commit Bernard Frankpitt's\nconstant-expression-reducing code, along with some of the improvements\nI suggested a couple days ago. In particular, it will not try to reduce\nany op/func not marked \"proiscachable\" in pg_proc. This renders it a\ntad ineffective with the current contents of pg_proc :-( ...\n\nThe only functions marked cachable in 6.5.2 are\n\nplay=> select proname from pg_proc where proiscachable;\nproname\n---------\nversion\nhashsel\nhashnpage\ngistsel\ngistnpage\n(5 rows)\n\nand to add insult to injury, I believe all five of these markings are\nwrong! Functions whose outputs can vary for the same inputs must not\nbe marked cacheable --- and all of these use data other than their\narguments.\n\nI have been working on modifying pg_proc.h to have believable\ncacheability information. What I did was to mark everything cacheable\nand then go through and unmark the stuff that shouldn't be\nconstant-foldable: basically, stuff that fetches data from tables,\nstuff that involves datetime conversion, and a few special cases like\nnextval() and oidrand().\n\nI am worried that I may have missed some things, and am seeking advice\non how I can check my work.\n\nOne thing I did not realize at first was that *none* of the datetime,\ndate, abstime, timespan, or tinterval operators can safely be marked\ncachable. The reason: these datatypes have special data values that\nmean \"now\" (this has nothing to do with what the conversion to/from\ntext form yields, BTW). Thus, for example, datetimeeq might say\none thing today and another tomorrow for the same input values,\nif one of them is \"now\" and the other is a constant time. Short of\ninserting knowledge about these special values into the constant-\nfolder, we have to mark all the operations on the datatype non-foldable.\n\nAre there any other gotchas like that one?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 25 Sep 1999 21:12:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Making proiscachable believable"
}
] |
[
{
"msg_contents": "Anyone else finding that the postgres CVS server isn't working?\nAnything I try to do yields the same failure:\n\n$ cvs log include/optimizer/clauses.h\ncan't create temporary directory\nPermission denied\n$\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 25 Sep 1999 21:23:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "cvs having problems?"
},
{
"msg_contents": "\nFixed, sorry for delay...we had a problem this mornig with the news\nspools on the system, and ended up disabling them while we look further\ninto it...cvs was pointing its temp directory at /news/tmp, which is the\nlargest \"empty\" file system we have, and while things are disabled, it no\nlonger exists...changed inetd.conf to point at /home/tmp instead, which is\nstill large...\n\n\n\nOn Sat, 25 Sep 1999, Tom Lane wrote:\n\n> Anyone else finding that the postgres CVS server isn't working?\n> Anything I try to do yields the same failure:\n> \n> $ cvs log include/optimizer/clauses.h\n> can't create temporary directory\n> Permission denied\n> $\n> \n> \t\t\tregards, tom lane\n> \n> ************\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sat, 25 Sep 1999 22:50:58 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cvs having problems?"
}
] |