[
{
"msg_contents": "\nI've been poking around in the libpq area and I'm thinking of tackling\nthe streaming interface which was suggested recently.\n\nWhat I have in mind is that a new API PQexecStream() doesn't retrieve\nthe results. The tuples are then read back one by one with\nPQnextObject(). You can also use PQnextObject with regular PQexec, but\nin that case you lose the most of the benefit of streaming because it\nwould allocate memory for all the result. So the proposal is...\n\n\n/* like PQexec, but streams the results */\nPGresult *PQexecStream(PGconn *conn, const char *query)\n/* retrieve the next object from a PGresult */\nPGobject *PQnextObject(PGconn *conn)\n/* get value from an object/tuple */\u0018\u0013\nchar *PQgetObjectValue(const PGobject *res, int field_num)\n/* free tuple when done */\nvoid PQclearObject(PGobject *obj)\n\nOh yeah, can I fix the COPY protocol while I'm at it to conform more to\nthe other types of messages?\n\nBTW, what is this PQ thing? Does it stand for postquel? Are we ever\ngoing to dump that?\n\n-- \nChris Bitmead\nmailto:[email protected]\n",
"msg_date": "Thu, 10 Feb 2000 23:24:32 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": true,
"msg_subject": "libpq"
},
{
"msg_contents": "Chris <[email protected]> writes:\n> What I have in mind is that a new API PQexecStream() doesn't retrieve\n> the results. The tuples are then read back one by one with\n> PQnextObject().\n\nOK, but how does this interact with asynchrononous retrieval? It\nshould be possible to run it in a nonblocking (select-waiting) mode.\n\n> /* like PQexec, but streams the results */\n> PGresult *PQexecStream(PGconn *conn, const char *query)\n> /* retrieve the next object from a PGresult */\n> PGobject *PQnextObject(PGconn *conn)\n> /* get value from an object/tuple */\u0018\u0013\n> char *PQgetObjectValue(const PGobject *res, int field_num)\n> /* free tuple when done */\n> void PQclearObject(PGobject *obj)\n\nThere are two other big gaps here, which is that you haven't specified\nhow you represent (a) errors and (b) end of query result. I assume you\nintend the initial PQexecStream call to wait for the first tuple to come\nback, so *most* sorts of errors will be reported at that point, but\nyou have to be able to cope with errors reported later on too.\n\nRather than inventing a new PGobject struct type, I'd suggest returning\nthe partial results as PGresults. This has a couple of benefits:\n * easy representation of an error encountered midway (you just return\n an error PGresult).\n * it's no big trick to \"batch\" retrieval, ie, return 10 or 100 tuples\n at a time, if that happens to prove useful.\n * each tuple batch could carry its own tuple description, which is\n something you will need if you want to go anywhere with that\n polymorphic-results idea.\n * end-of-query could be represented as a PGresult with zero tuples.\n (This would leave a null-pointer result open for use in the nonblock\n case, to indicate \"haven't got a response yet\".)\n * no need for an entire new set of API functions to query PGobjects.\n\nBTW, an earlier proposal for this same sort of thing didn't see it\nas an entirely new operating mode, but just a \"limit\" option added\nto a variant of PQexec: the limit says \"return no more than N tuples\nper PQresult\".\n\n> Oh yeah, can I fix the COPY protocol while I'm at it to conform more to\n> the other types of messages?\n\nI looked at that before, and while COPY is certainly ugly as sin, it's\nnot clear that it's worth creating cross-version compatibility problems\nto fix it. I'm inclined to leave it alone until such time as we\nundertake a really massive protocol change (moving to CORBA, say).\n\n> BTW, what is this PQ thing? Does it stand for postquel? Are we ever\n> going to dump that?\n\nYes, and no. We aren't going to break existing app code by indulging\nin cosmetic renaming of API names. Moreover we have to have *some*\nprefix to minimize the probability of global-symbol conflicts with apps\nand other libraries, so that one's as good as any.\n\nTo the extent that there is any system in the names in libpq (which I\nadmit ain't much), it's\n\tPQfoo --- exported public-API routine\n\tpqfoo --- internal routine not meant for apps to call, but must\n\t be global symbol because it is called cross-module\n\tPGfoo --- type name, enum const, etc\nI'd suggest sticking to those conventions in any new code you write.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 10 Feb 2000 10:23:59 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] libpq "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Chris <[email protected]> writes:\n> > What I have in mind is that a new API PQexecStream() doesn't retrieve\n> > the results. The tuples are then read back one by one with\n> > PQnextObject().\n> \n> OK, but how does this interact with asynchrononous retrieval? It\n> should be possible to run it in a nonblocking (select-waiting) mode.\n\nI didn't know that was a requirement. Well when doing this sort of \nstuff you never know what other sources of data they may want\nto wait for, so the only way is to have PQfileDescriptor or something,\nbut I don't think that affects these decisions does it? If they want\nasync, they are given the fd and select. When ready they call\nnexttuple.\n \n> BTW, an earlier proposal for this same sort of thing didn't see it\n> as an entirely new operating mode, but just a \"limit\" option added\n> to a variant of PQexec: the limit says \"return no more than N tuples\n> per PQresult\".\n\nAs in changing the interface to PQexec?\n\nI can't see the benefit of specifically asking for N tuples. Presumably\nbehind the scenes it will read from the socket in a respectably\nlarge chunk (8k for example). Beyond that I can't see any more reason \nfor customisation.\n\n> I looked at that before, and while COPY is certainly ugly as sin, it's\n> not clear that it's worth creating cross-version compatibility problems\n> to fix it. I'm inclined to leave it alone until such time as we\n> undertake a really massive protocol change (moving to CORBA, say).\n\nI'll look at that situation further later. Is there a policy on\nprotocol compatibility? If so, one way or both ways?\n\nThe other comments you made, I have to think about further.\n",
"msg_date": "Fri, 11 Feb 2000 15:57:20 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] libpq"
},
{
"msg_contents": "Chris Bitmead <[email protected]> writes:\n> Tom Lane wrote:\n>> OK, but how does this interact with asynchrononous retrieval? It\n>> should be possible to run it in a nonblocking (select-waiting) mode.\n\n> I didn't know that was a requirement.\n\nWell, there may not be anyone holding a gun to your head about it...\nbut there have been a number of people sweating to make the existing\nfacilities of libpq usable in a non-blocking fashion. Seems to me\nthat that sort of app would be particularly likely to want to make\nuse of a streaming API --- so if you don't think about it, there is\ngoing to be someone else coming along to clean up after you pretty\nsoon. Better to get it right the first time.\n\n> to wait for, so the only way is to have PQfileDescriptor or something,\n> but I don't think that affects these decisions does it? If they want\n> async, they are given the fd and select. When ready they call\n> nexttuple.\n\nNot really. The app can and does wait for select() to show read ready\non libpq's input socket --- but that only indicates that there is a TCP\npacket's worth of data available, *not* that a whole tuple is available.\nlibpq must provide the ability to consume data from the kernel (to\nclear the select-read-ready condition) and then either hand back a\ncompleted tuple (or several) or say \"sorry, no complete data yet\".\nI'd suggest understanding the existing facilities more carefully before\nyou set out to improve on them.\n\n>> to a variant of PQexec: the limit says \"return no more than N tuples\n>> per PQresult\".\n\n> As in changing the interface to PQexec?\n\nI did say \"variant\", no? We don't get to break existing callers of\nPQexec.\n\n> I can't see the benefit of specifically asking for N tuples. Presumably\n> behind the scenes it will read from the socket in a respectably\n> large chunk (8k for example). Beyond that I can't see any more reason \n> for customisation.\n\nWell, that's true from one point of view, but I think it's just libpq's\npoint of view. The application programmer is fairly likely to have\nspecific knowledge of the size of tuple he's fetching, and maybe even\nto have a global perspective that lets him decide he doesn't really\n*want* to deal with retrieved tuples on a packet-by-packet basis.\nMaybe waiting till he's got 100K of data is just right for his app.\n\nBut I can also believe that the app programmer doesn't want to commit to\na particular tuple size any more than libpq does. Do you have a better\nproposal for an API that doesn't commit any decisions about how many\ntuples to fetch at once?\n\n>> not clear that it's worth creating cross-version compatibility problems\n>> to fix it. I'm inclined to leave it alone until such time as we\n>> undertake a really massive protocol change (moving to CORBA, say).\n\n> I'll look at that situation further later. Is there a policy on\n> protocol compatibility? If so, one way or both ways?\n\nThe general policy so far has been that backends should be able to\ntalk to any vintage of frontend, but frontend clients need only be\nable to talk to backends of same or later version. (The idea is to\nbe able to upgrade your server without breaking existing clients,\nand then you can go around and update client apps at your\nconvenience.)\n\nThe last time we actually changed the protocol was in 6.4 (at my\ninstigation BTW) --- and while we didn't get a tidal wave of\n\"hey my new psql won't talk to an old server\" complaints, we got\na pretty fair number of 'em. 
So I'm very hesitant to break either\nforwards or backwards compatibility in new releases. I certainly\ndon't want to do it just for code beautification; we need a reason\nthat is compelling to the end users who will be inconvenienced.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 11 Feb 2000 01:10:54 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] libpq "
},
{
"msg_contents": "Tom Lane wrote:\n\n> Well, that's true from one point of view, but I think it's just libpq's\n> point of view. The application programmer is fairly likely to have\n> specific knowledge of the size of tuple he's fetching, and maybe even\n> to have a global perspective that lets him decide he doesn't really\n> *want* to deal with retrieved tuples on a packet-by-packet basis.\n> Maybe waiting till he's got 100K of data is just right for his app.\n> \n> But I can also believe that the app programmer doesn't want to commit to\n> a particular tuple size any more than libpq does. Do you have a better\n> proposal for an API that doesn't commit any decisions about how many\n> tuples to fetch at once?\n\nIf you think applications may like to keep buffered 100k of data, isn't\nthat an argument for the PGobject interface instead of the PGresult\ninterface?\n\nI'm trying to think of a situation where you want to buffer data. Let's\nsay psql has something like \"more\" inbuilt and it needs to buffer\na screenful, and go forward line by line. Now you want to keep the last\n40 tuples buffered. First up you want 40 tuples, then you want one\nat a time every time you press Enter.\n\nThis seems too much responsibility to press onto libpq, but if the user\nhas control over destruction of PQobjects they can buffer what they\nwant, how they want, when they want.\n",
"msg_date": "Fri, 11 Feb 2000 17:36:19 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] libpq"
},
{
"msg_contents": "Chris Bitmead <[email protected]> writes:\n> If you think applications may like to keep buffered 100k of data, isn't\n> that an argument for the PGobject interface instead of the PGresult\n> interface?\n\nHow so? I haven't actually figured out what you think PGobject will do\ndifferently from PGresult. Given the considerations I mentioned before,\nI think PGobject *is* a PGresult; it has to have all the same\nfunctionality, including carrying a tuple descriptor and a query\nstatus (+ error message if needed).\n\n> This seems too much responsibility to press onto libpq, but if the user\n> has control over destruction of PQobjects they can buffer what they\n> want, how they want, when they want.\n\nThe app has always had control over when to destroy PGresults, too.\nI still don't see the difference...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 11 Feb 2000 10:10:13 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] libpq "
},
{
"msg_contents": "Tom Lane wrote:\n> How so? I haven't actually figured out what you think PGobject \n> will do\n> differently from PGresult. Given the considerations I mentioned before,\n> I think PGobject *is* a PGresult; it has to have all the same\n> functionality, including carrying a tuple descriptor and a query\n> status (+ error message if needed).\n\nAll I mean to say is that it is often desirable to have control over\nwhen each individual object is destroyed, rather than having to destroy\neach batch at once. \n\nThe result status and query status is only temporarily interesting. Once\nI know the tuple arrived safely I don't care much about the state of\naffairs at that moment, and don't care to waste memory on a structure\nthat has space for all these error fields.\n\nFor example, if I want to buffer the last 20 tuples at all times I could\nhave..\nPGobject *cache[20]\nGetFirst() {\n for (int i = 0; i < 20; i++)\n cache[i] = getNextObject(...);\n}\n\nGetNext() {\n memmove(&cache[0], &cache[1], sizeof(PGobject *));\n cache[19] = getNextObject(...);\n}\n\nI don't see why the app programmer shouldn't have to write the loop\nGetFirst. Why should this be forced onto libpq when it doesn't help\nperformance or anything? I don't think, if I understand you correctly,\nthe PGresult idea doesn't give this flexibility. Correct me if I'm\nwrong.\n\nThe other thing about PGobject idea is that when I do a real OO database\nidea, is that getNextObject will optionally populate user-supplied data\ninstead. i.e. I can optionally pass a C++ object and a list of field\noffsets. So probably I would want getNextObject to take optional args of\na block of memory, and a structure describing field offsets. Only if\nthese are null does getNextObject allocate space for you.\n\n> > This seems too much responsibility to press onto libpq, but if the user\n> > has control over destruction of PQobjects they can buffer what they\n> > want, how they want, when they want.\n> \n> The app has always had control over when to destroy PGresults, too.\n> I still don't see the difference...\n> \n> regards, tom lane\n\n-- \nChris Bitmead\nmailto:[email protected]\n",
"msg_date": "Sun, 13 Feb 2000 23:29:34 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] libpq"
},
{
"msg_contents": "Chris <[email protected]> writes:\n> All I mean to say is that it is often desirable to have control over\n> when each individual object is destroyed, rather than having to destroy\n> each batch at once. \n\nRight, so if you really want to destroy retrieved tuples one at a time,\nyou request only one per retrieved PGresult. I claim that the other\ncase where you want them in small batches (but not necessarily only one\nat a time) is at least as interesting; therefore the mechanism should\nnot be limited to the exactly-one-at-a-time case. Once you allow for\nthe other requirements, you have something that looks enough like a\nPGresult that it might as well just *be* a PGresult.\n\n> The result status and query status is only temporarily interesting. Once\n> I know the tuple arrived safely I don't care much about the state of\n> affairs at that moment, and don't care to waste memory on a structure\n> that has space for all these error fields.\n\nLet's see (examines PGresult declaration). Four bytes for the\nresultStatus, four for the errMsg pointer, 40 for cmdStatus,\nout of a struct that is going to occupy close to 100 bytes on\ntypical hardware --- and that's not counting the tuple descriptor\ndata and the tuple(s) proper. You could easily reduce the cmdStatus\noverhead by making it a pointer to an allocated string instead of\nan in-line array, if the 40 bytes were really bothering you. So the\nabove seems a pretty weak argument for introducing a whole new datatype\nand a whole new set of access functions for it. Besides which, you\nhaven't explained how it is that you are going to avoid the need to\nbe able to represent error status in a PGObject. The function that\nfetches the next tuple(s) in a query has to be able to return an\nerror status, and that has to be distinguishable from \"successful\nend of query\" and from \"no more data available yet\".\n\n> The other thing about PGobject idea is that when I do a real OO database\n> idea, is that getNextObject will optionally populate user-supplied data\n> instead.\n\nAnd that can't be done from a PGresult because?\n\nSo far, the *only* valid reason you've given for inventing a new\ndatatype, rather than just using PGresult for the purpose, is to save a\nfew bytes by eliminating unnecessary fields. That seems a pretty weak\nargument (even assuming that the fields are unnecessary, which I doubt).\nHaving to support and document a whole set of essentially-identical\naccess functions for both PGresult and PGObject is the overhead that\nwe ought to be worried about, ISTM. Don't forget that it's not just\nlibpq we are talking about, either; this additional API will also have\nto propagate into libpq++, libpgtcl, the perl5 and python modules,\netc etc etc.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 13 Feb 2000 12:43:31 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] libpq "
},
{
"msg_contents": "\n100 bytes, or even 50 bytes seems like a huge price to pay. If I'm \nretrieving 10 byte tuples that's a 500% or 1000% overhead.\n\nThere are other issues too. Like if I want to be able to populate\na C++ object without the overhead of copying, I need to know\nin advance the type of tuple I'm getting back. So I need something \nlike a nextClass() API.\n\nHere is what I'm imagining (in very rough terms with details glossed\nover).\nHow would you do this with the PGresult idea?...\n\nclass Base {\n int c;\n}\nclass Sub1 : Base {\n int b;\n}\nclass Sub2 : Base {\n int c;\n}\n#define OFFSET (class, field) (&((class *)NULL)->field)\nstruct FieldPositions f1[] = { { \"a\", OFFSET(Sub1,a) }, { \"b\",\nOFFSET(Sub1,b)} };\nstruct FieldPositions f2[] = { { \"a\", OFFSET(Sub1, c) }, { \"c\",\nOFFSET(Sub2, c) } };\n\nPGresult *q = PQexecStream(\"SELECT ** from Base\");\nList<Base> results;\nfor (;;) {\n PGClass *class = PQnextClass(q);\n if (PQresultStatus(q) == ERROR)\n processError(q);\n else if (PQresultStatus(q) == NO_MORE)\n break;\n if (strcmp(class->name) == \"Sub1\") {\n results.add(PQnextObject(q, new Sub1, FieldPositions(f1)));\n else if (strcmp(class->name) == \"Sub2\") {\n results.add(PQnextObject(q, new Sub2, FieldPositions(f2)));\n }\n\nOf course in a full ODBMS front end, some of the above code would\nbe generated or something.\n\nIn this case PQnextObject is populating memory supplied by the\nprogrammer.\nThere is no overhead whatsoever, nor can there be because we are\nsupplying\nmemory for the fields we care about.\n\nIn this case we don't even need to store tuple descriptors because \nthe C++ object has it's vtbl which is enough. If we cared about\ntuple descriptors though we could hang onto the PGClass and do \nsomething like PQgetValue(class, object, \"fieldname\"), which\nwould be useful for some language interfaces no doubt.\n\nA basic C example would look like this...\n\nPGresult *q = PQexecStream(\"SELECT ** from Base\");\nfor (;;) {\n PGClass *class = PQnextClass(q);\n if (PQresultStatus(q) == ERROR)\n processError(q);\n else if (PQresultStatus(q) == NO_MORE)\n break;\n PGobject *obj = PQnextObject(q, NULL, NULL);\n for (int c = 0; c < PQnColumns(class); c++) {\n printf(\"%s: %s, \", PQcolumnName(class, c), PQcolumnValue(class, c,\nobj));\n printf(\"\\n\");\n }\n\nThe points to note here are:\n(1) Yes, the error message stuff comes from PGresult as it does now.\n(2) You don't have a wasteful new PGresult for every time you get\nthe next result.\n(3) You are certainly not required to store a whole lot of PGresults\njust because you want to cache tuples.\n(4) Because the tuple descriptor is explicit (PGClass*) you can\nkeep it or not as you please. If you are doing pure relational\nwith fixed number of columns, there is ZERO overhead per tuple\nbecause you only need keep one pointer to the PGClass. This is\neven though you retrieve results one at a time.\n(5) Because of (4) I can't see the need for any API to support\ngetting multiple tuples at a time since it is trivially implemented\nin terms of nextObject with no overhead.\n\nWhile a PGresult interface like you described could be built, I can't\nsee that\nit fulfills all the requirements that I would have. It could be\ntrivially\nbuilt on top of the above building blocks, but it doesn't sound fine\nenough\ngrained for me. 
If you disagree, tell me how you'd do it.\n\nTom Lane wrote:\n> \n> Chris <[email protected]> writes:\n> > All I mean to say is that it is often desirable to have control over\n> > when each individual object is destroyed, rather than having to destroy\n> > each batch at once.\n> \n> Right, so if you really want to destroy retrieved tuples one at a time,\n> you request only one per retrieved PGresult. I claim that the other\n> case where you want them in small batches (but not necessarily only one\n> at a time) is at least as interesting; therefore the mechanism should\n> not be limited to the exactly-one-at-a-time case. Once you allow for\n> the other requirements, you have something that looks enough like a\n> PGresult that it might as well just *be* a PGresult.\n> \n> > The result status and query status is only temporarily interesting. Once\n> > I know the tuple arrived safely I don't care much about the state of\n> > affairs at that moment, and don't care to waste memory on a structure\n> > that has space for all these error fields.\n> \n> Let's see (examines PGresult declaration). Four bytes for the\n> resultStatus, four for the errMsg pointer, 40 for cmdStatus,\n> out of a struct that is going to occupy close to 100 bytes on\n> typical hardware --- and that's not counting the tuple descriptor\n> data and the tuple(s) proper. You could easily reduce the cmdStatus\n> overhead by making it a pointer to an allocated string instead of\n> an in-line array, if the 40 bytes were really bothering you. So the\n> above seems a pretty weak argument for introducing a whole new datatype\n> and a whole new set of access functions for it. Besides which, you\n> haven't explained how it is that you are going to avoid the need to\n> be able to represent error status in a PGObject. The function that\n> fetches the next tuple(s) in a query has to be able to return an\n> error status, and that has to be distinguishable from \"successful\n> end of query\" and from \"no more data available yet\".\n> \n> > The other thing about PGobject idea is that when I do a real OO database\n> > idea, is that getNextObject will optionally populate user-supplied data\n> > instead.\n> \n> And that can't be done from a PGresult because?\n> \n> So far, the *only* valid reason you've given for inventing a new\n> datatype, rather than just using PGresult for the purpose, is to save a\n> few bytes by eliminating unnecessary fields. That seems a pretty weak\n> argument (even assuming that the fields are unnecessary, which I doubt).\n> Having to support and document a whole set of essentially-identical\n> access functions for both PGresult and PGObject is the overhead that\n> we ought to be worried about, ISTM. Don't forget that it's not just\n> libpq we are talking about, either; this additional API will also have\n> to propagate into libpq++, libpgtcl, the perl5 and python modules,\n> etc etc etc.\n> \n> regards, tom lane\n",
"msg_date": "Mon, 14 Feb 2000 11:24:35 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] libpq"
},
{
"msg_contents": "\nI posted this about a week ago, and it passed without comment.\nDoes this mean I'm so far off track that no-one cares to comment,\nor I got it so right that no comment was needed?\n\nQuick summary: I want to work on libpq, partly to implement\nmy OO plans in libpq, and partly to implement the streaming \ninterface. But I'm concerned that a lower-level interface\nwill give better control and better efficiency.\n\nAlso, this is a fair amount of hacking. I have heard talk of\n\"when we go to using corba\" and such. I could look at doing\nthis at the same time, but remain to be convinced of the benefit.\nWhat would be the method? something like sequence<Attribute> ?\nI would have thought this would be a big protocol overhead. I\nalso would have thought that the db protocol for a database\nwould be sufficiently simple and static that corba would be\noverkill. Am I wrong?\n\n\nChris Bitmead wrote:\n> \n> 100 bytes, or even 50 bytes seems like a huge price to pay. If I'm\n> retrieving 10 byte tuples that's a 500% or 1000% overhead.\n> \n> There are other issues too. Like if I want to be able to populate\n> a C++ object without the overhead of copying, I need to know\n> in advance the type of tuple I'm getting back. So I need something\n> like a nextClass() API.\n> \n> Here is what I'm imagining (in very rough terms with details glossed\n> over).\n> How would you do this with the PGresult idea?...\n> \n> class Base {\n> int c;\n> }\n> class Sub1 : Base {\n> int b;\n> }\n> class Sub2 : Base {\n> int c;\n> }\n> #define OFFSET (class, field) (&((class *)NULL)->field)\n> struct FieldPositions f1[] = { { \"a\", OFFSET(Sub1,a) }, { \"b\",\n> OFFSET(Sub1,b)} };\n> struct FieldPositions f2[] = { { \"a\", OFFSET(Sub1, c) }, { \"c\",\n> OFFSET(Sub2, c) } };\n> \n> PGresult *q = PQexecStream(\"SELECT ** from Base\");\n> List<Base> results;\n> for (;;) {\n> PGClass *class = PQnextClass(q);\n> if (PQresultStatus(q) == ERROR)\n> processError(q);\n> else if (PQresultStatus(q) == NO_MORE)\n> break;\n> if (strcmp(class->name) == \"Sub1\") {\n> results.add(PQnextObject(q, new Sub1, FieldPositions(f1)));\n> else if (strcmp(class->name) == \"Sub2\") {\n> results.add(PQnextObject(q, new Sub2, FieldPositions(f2)));\n> }\n> \n> Of course in a full ODBMS front end, some of the above code would\n> be generated or something.\n> \n> In this case PQnextObject is populating memory supplied by the\n> programmer.\n> There is no overhead whatsoever, nor can there be because we are\n> supplying\n> memory for the fields we care about.\n> \n> In this case we don't even need to store tuple descriptors because\n> the C++ object has it's vtbl which is enough. 
If we cared about\n> tuple descriptors though we could hang onto the PGClass and do\n> something like PQgetValue(class, object, \"fieldname\"), which\n> would be useful for some language interfaces no doubt.\n> \n> A basic C example would look like this...\n> \n> PGresult *q = PQexecStream(\"SELECT ** from Base\");\n> for (;;) {\n> PGClass *class = PQnextClass(q);\n> if (PQresultStatus(q) == ERROR)\n> processError(q);\n> else if (PQresultStatus(q) == NO_MORE)\n> break;\n> PGobject *obj = PQnextObject(q, NULL, NULL);\n> for (int c = 0; c < PQnColumns(class); c++) {\n> printf(\"%s: %s, \", PQcolumnName(class, c), PQcolumnValue(class, c,\n> obj));\n> printf(\"\\n\");\n> }\n> \n> The points to note here are:\n> (1) Yes, the error message stuff comes from PGresult as it does now.\n> (2) You don't have a wasteful new PGresult for every time you get\n> the next result.\n> (3) You are certainly not required to store a whole lot of PGresults\n> just because you want to cache tuples.\n> (4) Because the tuple descriptor is explicit (PGClass*) you can\n> keep it or not as you please. If you are doing pure relational\n> with fixed number of columns, there is ZERO overhead per tuple\n> because you only need keep one pointer to the PGClass. This is\n> even though you retrieve results one at a time.\n> (5) Because of (4) I can't see the need for any API to support\n> getting multiple tuples at a time since it is trivially implemented\n> in terms of nextObject with no overhead.\n> \n> While a PGresult interface like you described could be built, I can't\n> see that\n> it fulfills all the requirements that I would have. It could be\n> trivially\n> built on top of the above building blocks, but it doesn't sound fine\n> enough\n> grained for me. If you disagree, tell me how you'd do it.\n> \n> Tom Lane wrote:\n> >\n> > Chris <[email protected]> writes:\n> > > All I mean to say is that it is often desirable to have control over\n> > > when each individual object is destroyed, rather than having to destroy\n> > > each batch at once.\n> >\n> > Right, so if you really want to destroy retrieved tuples one at a time,\n> > you request only one per retrieved PGresult. I claim that the other\n> > case where you want them in small batches (but not necessarily only one\n> > at a time) is at least as interesting; therefore the mechanism should\n> > not be limited to the exactly-one-at-a-time case. Once you allow for\n> > the other requirements, you have something that looks enough like a\n> > PGresult that it might as well just *be* a PGresult.\n> >\n> > > The result status and query status is only temporarily interesting. Once\n> > > I know the tuple arrived safely I don't care much about the state of\n> > > affairs at that moment, and don't care to waste memory on a structure\n> > > that has space for all these error fields.\n> >\n> > Let's see (examines PGresult declaration). Four bytes for the\n> > resultStatus, four for the errMsg pointer, 40 for cmdStatus,\n> > out of a struct that is going to occupy close to 100 bytes on\n> > typical hardware --- and that's not counting the tuple descriptor\n> > data and the tuple(s) proper. You could easily reduce the cmdStatus\n> > overhead by making it a pointer to an allocated string instead of\n> > an in-line array, if the 40 bytes were really bothering you. So the\n> > above seems a pretty weak argument for introducing a whole new datatype\n> > and a whole new set of access functions for it. 
Besides which, you\n> > haven't explained how it is that you are going to avoid the need to\n> > be able to represent error status in a PGObject. The function that\n> > fetches the next tuple(s) in a query has to be able to return an\n> > error status, and that has to be distinguishable from \"successful\n> > end of query\" and from \"no more data available yet\".\n> >\n> > > The other thing about PGobject idea is that when I do a real OO database\n> > > idea, is that getNextObject will optionally populate user-supplied data\n> > > instead.\n> >\n> > And that can't be done from a PGresult because?\n> >\n> > So far, the *only* valid reason you've given for inventing a new\n> > datatype, rather than just using PGresult for the purpose, is to save a\n> > few bytes by eliminating unnecessary fields. That seems a pretty weak\n> > argument (even assuming that the fields are unnecessary, which I doubt).\n> > Having to support and document a whole set of essentially-identical\n> > access functions for both PGresult and PGObject is the overhead that\n> > we ought to be worried about, ISTM. Don't forget that it's not just\n> > libpq we are talking about, either; this additional API will also have\n> > to propagate into libpq++, libpgtcl, the perl5 and python modules,\n> > etc etc etc.\n> >\n> > regards, tom lane\n",
"msg_date": "Thu, 17 Feb 2000 17:28:13 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] libpq"
},
{
"msg_contents": "Chris Bitmead <[email protected]> writes:\n> I posted this about a week ago, and it passed without comment.\n> Does this mean I'm so far off track that no-one cares to comment,\n> or I got it so right that no comment was needed?\n\nI haven't looked at it because I am trying to finish up other stuff\nbefore we go beta. Will get back to you later. I imagine other\npeople are in deadline mode also...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 17 Feb 2000 02:01:21 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] libpq "
},
{
"msg_contents": "Tom Lane wrote:\n\n> I haven't looked at it because I am trying to finish up other stuff\n> before we go beta. Will get back to you later. I imagine other\n> people are in deadline mode also...\n\nOk, sure.\n",
"msg_date": "Fri, 18 Feb 2000 10:23:35 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] libpq"
}
]
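
A note on where this thread's design landed: the batched-PGresult scheme Tom Lane argues for above is essentially what libpq eventually shipped, many years later, as single-row mode in PostgreSQL 9.2. Each row comes back as its own PGresult, a mid-stream failure is an ordinary error PGresult, and a zero-row PGRES_TUPLES_OK result marks end of query. A minimal sketch of a consumer in that style; the connection string, table name, and single printed column are placeholders:

#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn   *conn = PQconnectdb("dbname=test");   /* placeholder DSN */
    PGresult *res;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    if (!PQsendQuery(conn, "SELECT * FROM big_table"))
    {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        return 1;
    }
    PQsetSingleRowMode(conn);    /* each row arrives as its own PGresult */

    while ((res = PQgetResult(conn)) != NULL)
    {
        switch (PQresultStatus(res))
        {
            case PGRES_SINGLE_TUPLE:    /* one streamed row */
                printf("%s\n", PQgetvalue(res, 0, 0));
                break;
            case PGRES_TUPLES_OK:       /* zero-row marker: end of query */
                break;
            default:                    /* an error PGresult, as proposed above */
                fprintf(stderr, "%s", PQresultErrorMessage(res));
                break;
        }
        PQclear(res);
    }
    PQfinish(conn);
    return 0;
}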
[
{
"msg_contents": "\n> Is'nt the \"blank portal\" the name of the cursor you get when you just \n> do a select without creating a cursor ?\n\nYes, is that still so ?\n\n> \n> > I don't really see any advantage, that psql does not do a fetch loop\n> > with a portal.\n> \n> It only increases traffic, as explicit fetch commands need to be sent \n> to backend. If one does not declare a cursor, an implicit \n> fetch all from \n> blank is performed.\n\nI don't really see how a fetch every x rows (e.g.1000) would add significant\noverhead.\nThe first fetch could still be done implicit, it would only fetch 1000\ninstead of fetch all.\nThus there would only be overhead for large result sets, where the\nwasted memory is of real concern.\n\nAndreas\n",
"msg_date": "Thu, 10 Feb 2000 13:35:22 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: [HACKERS] Another nasty cache problem"
},
{
"msg_contents": "Zeugswetter Andreas SB wrote:\n> \n> > Is'nt the \"blank portal\" the name of the cursor you get when you just\n> > do a select without creating a cursor ?\n> \n> Yes, is that still so ?\n> \n> >\n> > > I don't really see any advantage, that psql does not do a fetch loop\n> > > with a portal.\n> >\n> > It only increases traffic, as explicit fetch commands need to be sent\n> > to backend. If one does not declare a cursor, an implicit\n> > fetch all from\n> > blank is performed.\n> \n> I don't really see how a fetch every x rows (e.g.1000) would add significant\n> overhead.\n> The first fetch could still be done implicit, it would only fetch 1000\n> instead of fetch all.\n> Thus there would only be overhead for large result sets, where the\n> wasted memory is of real concern.\n\nApart from anything else, it would make psql inconvenient for debugging \nthe regular, non-cursor mechanism if psql went off and always used a\ncursor regardless.\n\nAnd since we know that cursors are not the best way to fix this problem\nin\npsql (streaming is the answer), then it doesn't seem a good plan.\n",
"msg_date": "Fri, 11 Feb 2000 10:15:28 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: [HACKERS] Another nasty cache problem"
},
{
"msg_contents": "Zeugswetter Andreas SB wrote:\n> \n> > Is'nt the \"blank portal\" the name of the cursor you get when you just\n> > do a select without creating a cursor ?\n> \n> Yes, is that still so ?\n\n>From my toy implementation of fe-be protocol in python for v.6.2 I \nremember it to be, i.e. the cursors name is blank if \ndeclare cursor ;fetch all ...\nis implicit\n\n> > > I don't really see any advantage, that psql does not do a fetch loop\n> > > with a portal.\n> >\n> > It only increases traffic, as explicit fetch commands need to be sent\n> > to backend. If one does not declare a cursor, an implicit\n> > fetch all from\n> > blank is performed.\n> \n> I don't really see how a fetch every x rows (e.g.1000) would add significant\n> overhead.\n\nBut it would start a transaction and possibly lock the table as well.\n\n> The first fetch could still be done implicit, it would only fetch 1000\n> instead of fetch all.\n\nmaybe we should add a macro language to psql and thus make it into something \nelse, like pgsh ;)\n\n> Thus there would only be overhead for large result sets, where the\n> wasted memory is of real concern.\n\nThe whole fe-be protocol should be re-thought at some stage (or an additional \nprotocol + client libs added) anyway, as current one is quite weak at XOPEN\nCLI \nsupport both ODBC and JDBC drivers are full of hacks to be compatible with \nstandard usages. Also performance suffers on inserts adn selects as prepared \nqueries can't be currently used from client programs (they can from SPI).\n\n\n\n-------------------\nHannu\n",
"msg_date": "Fri, 11 Feb 2000 23:53:05 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: [HACKERS] Another nasty cache problem"
}
]
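
For reference, the explicit fetch loop debated in this thread looks roughly like the sketch below from a libpq client: batches of 1000 bound the client's memory use, at the price of one round trip per batch and of running inside a transaction (the locking cost Hannu points out). Table and cursor names are invented and error handling is abbreviated:

#include <stdio.h>
#include <libpq-fe.h>

static void fetch_in_batches(PGconn *conn)
{
    PGresult *res;

    /* A cursor only lives inside a transaction block; this is the
       transaction/locking overhead noted in the thread. */
    PQclear(PQexec(conn, "BEGIN"));
    PQclear(PQexec(conn, "DECLARE c CURSOR FOR SELECT * FROM big_table"));

    for (;;)
    {
        res = PQexec(conn, "FETCH 1000 FROM c");
        if (PQresultStatus(res) != PGRES_TUPLES_OK || PQntuples(res) == 0)
        {
            PQclear(res);    /* error, or cursor exhausted */
            break;
        }
        for (int i = 0; i < PQntuples(res); i++)
            printf("%s\n", PQgetvalue(res, i, 0));
        PQclear(res);
    }

    PQclear(PQexec(conn, "CLOSE c"));
    PQclear(PQexec(conn, "END"));
}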
[
{
"msg_contents": " \nHi\n \nI'm running postgres v6.5.3. I need to make calls to\nthe functions in libpq in my code. For this I need the\nfiles - libpq.lib/libpq.lib.dll/libpqdll.lib.\n\nWhen I run 'nmake /f win32.mak' in the src directory,\nit is unable to open/find config.h . If I use the\nconfig.h generated as a result of 'configure' on\ncygwin, it complains about other .h files not being\nfound. (I do not know if there is a way to do the\nequivalent on the DOS Shell/Command Prompt )\n\nCould anyone let me know how to build libpq to get\nlibpq.dll/libpq.lib/libpqdll.lib ? If somebody already\nhas a version of the same for postgres v6.5.3, even\nthat would be helpful.\n\nThanks,\nRini\n\nps : The administrators guide has a chapter on this\nwhich I followed. (But it mentions Postgres v6.4\n?!)\nHere is an extract :\n \n> Chapter 20. Installation on Win32\n> \n> Table of Contents\n> Building the libraries\n> Installing the libraries\n> Using the libraries\n> \n> Build and installation instructions for\n> Postgres v6.4 client libraries on Win32.\n> \n> Building the libraries\n> \n> The makefiles included in Postgres are written for\n> Microsoft Visual C++, and will probably not work\n> with\n> other systems. It should be\n> possible to compile the libaries manually in other\n> cases.\n> \n> To build the libraries, change directory into the\n> src\n> directory, and type the command \n> \n> nmake /f win32.mak\n> \n> This assumes that you have Visual C++ in your path.\n> \n> The following files will be built: \n> \n> interfaces\\libpq\\Release\\libpq.dll - The\n> dynamically linkable frontend library\n> \n> interfaces\\libpq\\Release\\libpqdll.lib -\n> Import\n> library to link your program to libpq.dll\n> \n> interfaces\\libpq\\Release\\libpq.lib - Static\n> library version of the frontend library\n> \n> bin\\psql\\Release\\psql.exe - The Postgresql\n> interactive SQL monitor\n> \n> \n> __________________________________________________\n> Do You Yahoo!?\n> Talk to your friends online with Yahoo! Messenger.\n> http://im.yahoo.com\n> \n__________________________________________________\nDo You Yahoo!?\nTalk to your friends online with Yahoo! Messenger.\nhttp://im.yahoo.com\n",
"msg_date": "Thu, 10 Feb 2000 05:40:29 -0800 (PST)",
"msg_from": "Rini Dutta <[email protected]>",
"msg_from_op": true,
"msg_subject": "how to make libpq on winnt using the\n 'win32.mak's"
}
]
[
{
"msg_contents": " \nHi\n \nI'm running postgres v6.5.3. I need to make calls to\nthe functions in libpq in my code. For this I need\nthe\nfiles - libpq.lib/libpq.lib.dll/libpqdll.lib.\n\nWhen I run 'nmake /f win32.mak' in the src\ndirectory,\nit is unable to open/find config.h . If I use the\nconfig.h generated as a result of 'configure' on\ncygwin, it complains about other .h files not being\nfound. (I do not know if there is a way to do the\nequivalent on the DOS Shell/Command Prompt )\n\nCould anyone let me know how to build libpq to get\nlibpq.dll/libpq.lib/libpqdll.lib ? If somebody\nalready\nhas a version of the same for postgres v6.5.3, even\nthat would be helpful.\n\nThanks,\nRini\n\nps : The administrators guide has a chapter on this\nwhich I followed. (But it mentions Postgres v6.4\n?!)\nHere is an extract :\n \n> Chapter 20. Installation on Win32\n> \n> Table of Contents\n> Building the libraries\n> Installing the libraries\n> Using the libraries\n> \n> Build and installation instructions for\n> Postgres v6.4 client libraries on Win32.\n> \n> Building the libraries\n> \n> The makefiles included in Postgres are written\nfor\n> Microsoft Visual C++, and will probably not work\n> with\n> other systems. It should be\n> possible to compile the libaries manually in\nother\n> cases.\n> \n> To build the libraries, change directory into the\n> src\n> directory, and type the command \n> \n> nmake /f win32.mak\n> \n> This assumes that you have Visual C++ in your\npath.\n> \n> The following files will be built: \n> \n> interfaces\\libpq\\Release\\libpq.dll - The\n> dynamically linkable frontend library\n> \n> interfaces\\libpq\\Release\\libpqdll.lib -\n> Import\n> library to link your program to libpq.dll\n> \n> interfaces\\libpq\\Release\\libpq.lib -\nStatic\n> library version of the frontend library\n> \n> bin\\psql\\Release\\psql.exe - The Postgresql\n> interactive SQL monitor\n> \n> \n\n__________________________________________________\nDo You Yahoo!?\nTalk to your friends online with Yahoo! Messenger.\nhttp://im.yahoo.com\n",
"msg_date": "Thu, 10 Feb 2000 05:47:48 -0800 (PST)",
"msg_from": "Rini Dutta <[email protected]>",
"msg_from_op": true,
"msg_subject": "how to make libpq on winnt using the 'win32.mak's"
}
]
[
{
"msg_contents": " \nHi\n \nI'm running postgres v6.5.3. I need to make calls to\nthe functions in libpq in my code. For this I need the\nfiles - libpq.lib/libpq.lib.dll/libpqdll.lib.\n\nWhen I run 'nmake /f win32.mak' in the src directory,\nit is unable to open/find config.h . If I use the\nconfig.h generated as a result of 'configure' on\ncygwin, it complains about other .h files not being\nfound. (I do not know if there is a way to do the\nequivalent on the DOS Shell/Command Prompt )\n\nCould anyone let me know how to build libpq to get\nlibpq.dll/libpq.lib/libpqdll.lib ? If somebody already\nhas a version of the same for postgres v6.5.3, even\nthat would be helpful.\n\nThanks,\nRini\n\nps : The administrators guide has a chapter on this\nwhich I followed. (But it mentions Postgres v6.4\n?!)\nHere is an extract :\n \n> Chapter 20. Installation on Win32\n> \n> Table of Contents\n> Building the libraries\n> Installing the libraries\n> Using the libraries\n> \n Build and installation instructions for\n> Postgres v6.4 client libraries on Win32.\n> \n> Building the libraries\n> \n> The makefiles included in Postgres are written\nfor\n> Microsoft Visual C++, and will probably not work\n> with\n> other systems. It should be\n> possible to compile the libaries manually in\nother\n> cases.\n> \n> To build the libraries, change directory into the\n> src\n> directory, and type the command \n> \n> nmake /f win32.mak\n> \n> This assumes that you have Visual C++ in your\npath.\n> \n> The following files will be built: \n> \n> interfaces\\libpq\\Release\\libpq.dll - The\n> dynamically linkable frontend library\n> \n> interfaces\\libpq\\Release\\libpqdll.lib -\n> Import\n> library to link your program to libpq.dll\n> \n> interfaces\\libpq\\Release\\libpq.lib -\nStatic\n> library version of the frontend library\n> \n> bin\\psql\\Release\\psql.exe - The Postgresql\n> interactive SQL monitor\n> \n\n__________________________________________________\nDo You Yahoo!?\nTalk to your friends online with Yahoo! Messenger.\nhttp://im.yahoo.com\n",
"msg_date": "Thu, 10 Feb 2000 06:16:50 -0800 (PST)",
"msg_from": "Rini Dutta <[email protected]>",
"msg_from_op": true,
"msg_subject": "how to make libpq on winnt using the 'win32.mak's"
},
{
"msg_contents": ">\n> Hi\n>\n> I'm running postgres v6.5.3. I need to make calls to\n> the functions in libpq in my code. For this I need the\n> files - libpq.lib/libpq.lib.dll/libpqdll.lib.\n\n You find prepared .dll's under\n\n src/bin/pgaccess/win32/dll\n\n Wasn't there some utility to generate .lib files from .dll's?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 10 Feb 2000 15:24:29 +0100 (CET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] how to make libpq on winnt using the 'win32.mak's"
}
]
[
{
"msg_contents": "You will need to copy \"config.h.win32\" to \"config.h\" in the include\ndirectory.\n\nI think this patch to the docs should be what is needed.\n\n*** install-win32.sgml.orig Thu Feb 10 16:21:25 2000\n--- install-win32.sgml Thu Feb 10 16:22:49 2000\n***************\n*** 20,27 ****\n\n <Para>\n To build the libraries, change directory into the <filename>src</filename>\n! directory, and type the command\n <programlisting>\n nmake /f win32.mak\n </programlisting>\n This assumes that you have <ProductName>Visual C++</ProductName> in your\n--- 20,28 ----\n\n <Para>\n To build the libraries, change directory into the <filename>src</filename>\n! directory, and type the commands\n <programlisting>\n+ copy include\\config.h.win32 include\\config.h\n nmake /f win32.mak\n </programlisting>\n This assumes that you have <ProductName>Visual C++</ProductName> in your\n\n\n\n\nHmm. I just realised that that is for the current version, not 6.5.3.\nHowever, you will need something like it - I'm afraid I don't remember\nexactly what. Try either with the config.h.win32 from -current, or simply\ntry with an empty config.h.\n\n//Magnus\n\n> -----Original Message-----\n> From: Rini Dutta [mailto:[email protected]]\n> Sent: den 10 februari 2000 15:17\n> To: [email protected]\n> Subject: [HACKERS] how to make libpq on winnt using the 'win32.mak's\n> \n> \n> \n> Hi\n> \n> I'm running postgres v6.5.3. I need to make calls to\n> the functions in libpq in my code. For this I need the\n> files - libpq.lib/libpq.lib.dll/libpqdll.lib.\n> \n> When I run 'nmake /f win32.mak' in the src directory,\n> it is unable to open/find config.h . If I use the\n> config.h generated as a result of 'configure' on\n> cygwin, it complains about other .h files not being\n> found. (I do not know if there is a way to do the\n> equivalent on the DOS Shell/Command Prompt )\n> \n> Could anyone let me know how to build libpq to get\n> libpq.dll/libpq.lib/libpqdll.lib ? If somebody already\n> has a version of the same for postgres v6.5.3, even\n> that would be helpful.\n> \n> Thanks,\n> Rini\n> \n> ps : The administrators guide has a chapter on this\n> which I followed. (But it mentions Postgres v6.4\n> ?!)\n> Here is an extract :\n> \n> > Chapter 20. Installation on Win32\n> > \n> > Table of Contents\n> > Building the libraries\n> > Installing the libraries\n> > Using the libraries\n> > \n> Build and installation instructions for\n> > Postgres v6.4 client libraries on Win32.\n> > \n> > Building the libraries\n> > \n> > The makefiles included in Postgres are written\n> for\n> > Microsoft Visual C++, and will probably not work\n> > with\n> > other systems. It should be\n> > possible to compile the libaries manually in\n> other\n> > cases.\n> > \n> > To build the libraries, change directory into the\n> > src\n> > directory, and type the command \n> > \n> > nmake /f win32.mak\n> > \n> > This assumes that you have Visual C++ in your\n> path.\n> > \n> > The following files will be built: \n> > \n> > interfaces\\libpq\\Release\\libpq.dll - The\n> > dynamically linkable frontend library\n> > \n> > interfaces\\libpq\\Release\\libpqdll.lib -\n> > Import\n> > library to link your program to libpq.dll\n> > \n> > interfaces\\libpq\\Release\\libpq.lib -\n> Static\n> > library version of the frontend library\n> > \n> > bin\\psql\\Release\\psql.exe - The Postgresql\n> > interactive SQL monitor\n> > \n> \n> __________________________________________________\n> Do You Yahoo!?\n> Talk to your friends online with Yahoo! 
Messenger.\n> http://im.yahoo.com\n> \n> ************\n> \n",
"msg_date": "Thu, 10 Feb 2000 16:25:56 +0100",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] how to make libpq on winnt using the 'win32.mak's"
},
{
"msg_contents": "Applied.\n\n[Charset windows-1252 unsupported, skipping...]\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 10 Feb 2000 10:45:30 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] how to make libpq on winnt using the 'win32.mak's"
}
]
[
{
"msg_contents": " \n> You find prepared .dll's under\n> \n> src/bin/pgaccess/win32/dll\n> \n> Wasn't there some utility to generate .lib files\n> from .dll's?\n> \n> \n> Jan\nI just checked. There is a dll - libpq.dll-6.5.1 (not\n6.5.3 - the version of postgres I am using)\n\nI still need the corresponding .lib file . A utility\nwhich generates it would solve this problem, provided\nthe dll is compatible with postgresql v6.5.3, but I'm\nnot aware of such a utility.\n\nThasks,\nRini\n\n\n\n\n__________________________________________________\nDo You Yahoo!?\nTalk to your friends online with Yahoo! Messenger.\nhttp://im.yahoo.com\n",
"msg_date": "Thu, 10 Feb 2000 07:26:31 -0800 (PST)",
"msg_from": "Rini Dutta <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] how to make libpq on winnt using the 'win32.mak's"
}
]
[
{
"msg_contents": "> > Hi\n> >\n> > I'm running postgres v6.5.3. I need to make calls to\n> > the functions in libpq in my code. For this I need the\n> > files - libpq.lib/libpq.lib.dll/libpqdll.lib.\n> \n> You find prepared .dll's under\n> \n> src/bin/pgaccess/win32/dll\n> \n> Wasn't there some utility to generate .lib files from .dll's?\n\nI think you can just do:\n\nLIB /DEF:libpqdll.def\n\nHaven't tested it, though.\n\n(If you don't have the def file, you can do \"dumpbin /exports <file>\" to get\na listing of them.)\n\n//Magnus\n",
"msg_date": "Thu, 10 Feb 2000 16:32:40 +0100",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] how to make libpq on winnt using the 'win32.mak's"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Magnus Hagander\n> \n> > > Hi\n> > >\n> > > I'm running postgres v6.5.3. I need to make calls to\n> > > the functions in libpq in my code. For this I need the\n> > > files - libpq.lib/libpq.lib.dll/libpqdll.lib.\n> > \n> > You find prepared .dll's under\n> > \n> > src/bin/pgaccess/win32/dll\n> > \n> > Wasn't there some utility to generate .lib files from .dll's?\n>\n\nI've made the dll's(libpq.dll libpgtcl.dll) under pgaccess/win32\nby Constantin's request. Could someone make them instead\nfrom now ? I've used them little myself and would lose VC++\nenvironmemt in the near future. \n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Fri, 11 Feb 2000 16:35:02 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] how to make libpq on winnt using the 'win32.mak's"
},
{
"msg_contents": "Hiroshi Inoue wrote:\n> \n> > -----Original Message-----\n> > From: [email protected]\n> > [mailto:[email protected]]On Behalf Of Magnus Hagander\n> >\n> > > > Hi\n> > > >\n> > > > I'm running postgres v6.5.3. I need to make calls to\n> > > > the functions in libpq in my code. For this I need the\n> > > > files - libpq.lib/libpq.lib.dll/libpqdll.lib.\n> > >\n> > > You find prepared .dll's under\n> > >\n> > > src/bin/pgaccess/win32/dll\n> > >\n> > > Wasn't there some utility to generate .lib files from .dll's?\n> >\n> \n> I've made the dll's(libpq.dll libpgtcl.dll) under pgaccess/win32\n> by Constantin's request. Could someone make them instead\n> from now ? I've used them little myself and would lose VC++\n> environmemt in the near future.\n\nShould'nt we use MingW32 instead of VC++ ?\n\n----------\nHannu\n",
"msg_date": "Fri, 11 Feb 2000 15:13:32 +0000",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] how to make libpq on winnt using the 'win32.mak's"
}
]
[
{
"msg_contents": "createdb has lost its ability to supply a default database name, and\nnow *requires* an argument to run. Did we make this change because it\nwas difficult or impossible to get the default argument on some of our\nsupported platforms? Or did we make the change because it is \"more\ncorrect\" or something?\n\nI'm finding it annoying to retrain my fingers to type more stuff to\nget the same functionality as before ;)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 10 Feb 2000 15:42:50 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "createdb default arguments"
},
{
"msg_contents": "On Thu, 10 Feb 2000, Thomas Lockhart wrote:\n\n> createdb has lost its ability to supply a default database name, and\n> now *requires* an argument to run. Did we make this change because it\n> was difficult or impossible to get the default argument on some of our\n> supported platforms? Or did we make the change because it is \"more\n> correct\" or something?\n\nWhat was supposed to be the default argument?\n\n> \n> I'm finding it annoying to retrain my fingers to type more stuff to\n> get the same functionality as before ;)\n\nCan be fixed. Probably something that happened during the massive rewrite\nphase.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Thu, 10 Feb 2000 17:08:14 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] createdb default arguments"
},
{
"msg_contents": "> On Thu, 10 Feb 2000, Thomas Lockhart wrote:\n> \n> > createdb has lost its ability to supply a default database name, and\n> > now *requires* an argument to run. Did we make this change because it\n> > was difficult or impossible to get the default argument on some of our\n> > supported platforms? Or did we make the change because it is \"more\n> > correct\" or something?\n> \n> What was supposed to be the default argument?\n\nPeter reminder that perl does not compile, I believe because of pqbool.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 10 Feb 2000 11:19:06 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] createdb default arguments"
},
{
"msg_contents": "On Thu, 10 Feb 2000, Thomas Lockhart wrote:\n\n> > > createdb has lost its ability to supply a default database name...\n> > What was supposed to be the default argument?\n> \n> Ah! Same as for psql: the account name on the process running it (ie\n> the user's name).\n\nWill be done.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Thu, 10 Feb 2000 17:27:13 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] createdb default arguments"
},
{
"msg_contents": "> > createdb has lost its ability to supply a default database name...\n> What was supposed to be the default argument?\n\nAh! Same as for psql: the account name on the process running it (ie\nthe user's name).\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 10 Feb 2000 16:30:45 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] createdb default arguments"
},
{
"msg_contents": "Bruce Momjian wrote:\n> > What was supposed to be the default argument?\n \n> Peter reminder that perl does not compile, I believe because of pqbool.\n\nHe's fixed that, at least in the version I was building the other night\nwhen I was griping about the man pages.....\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Thu, 10 Feb 2000 12:51:35 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] createdb default arguments"
},
{
"msg_contents": "On 2000-02-10, Thomas Lockhart mentioned:\n\n> > > createdb has lost its ability to supply a default database name...\n> > What was supposed to be the default argument?\n> \n> Ah! Same as for psql: the account name on the process running it (ie\n> the user's name).\n\nThis is fixed now, but I don't suppose you want dropdb's default behaviour\nto be along those same lines. I'd have a serious problem with that, even\nthough old destroydb used to do that.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Thu, 10 Feb 2000 21:06:06 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] createdb default arguments"
},
{
"msg_contents": "> This is fixed now, but I don't suppose you want dropdb's default behaviour\n> to be along those same lines. I'd have a serious problem with that, even\n> though old destroydb used to do that.\n\nHmm, I think I see a correct answer in the way you phrased the\nquestion :)\n\nYou are right, the downside to a default argument for dropdb would\nargue strongly for supplying *no* default argument. For \"create\" kinds\nof things, the downside is minimal, and the convenience is high.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 10 Feb 2000 21:51:41 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] createdb default arguments"
}
] |
[
{
"msg_contents": "We have discussed in the past the need for the optimizer to take LIMIT\ninto account when choosing plans. Currently, since planning is done\non the basis of total plan cost for retrieving all tuples, there's\nno way to prefer a \"fast start\" plan over \"slow start\". But that's\nwhat you want if there's a small LIMIT.\n\nUp to now I haven't seen a practical way to fix this, because the\noptimizer does its planning bottom-up (start with scan plans, then\nmake joins, etc) and there's no easy way to know at the bottom of\nthe pyramid whether you can expect to take advantage of a LIMIT that\nexists at the top. For example, if there's a SORT or GROUP step\nin between, you can't apply the LIMIT to the bottom level; but the\nbottom guys don't know whether there will be such a step.\n\nI have thought of a fairly clean way to attack this problem, which\nis to represent the cost of a plan in two parts instead of only one.\nInstead of just \"cost\", have \"startup cost\" and \"cost per tuple\".\n(Actually, it'd probably be easier to work with \"startup cost\" and\n\"total cost if all tuples are retrieved\", but that's a trivial\nrepresentational detail; the point is that our cost model will now be\nof the form a*N+b when N tuples are retrieved.) It'd be pretty easy\nto produce plausible numbers for all the plan types we use. Then,\nthe plan comparators would keep any plan that wins on either startup\nor total cost, rather than only considering total cost. Finally\nat the top level of planning, when there is a LIMIT the preferred\nplan would be selected by comparing a*LIMIT+b rather than total cost.\n\nI think I can get this done before beta, but before I go into hack-\nattack mode I wanted to run it up the flagpole and see if anyone\nhas any complaints or better ideas.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 10 Feb 2000 11:48:15 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Solution for LIMIT cost estimation"
},
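A worked illustration of the two-part cost model proposed above, with made-up numbers for two hypothetical plans over a 10000-tuple result; the per-tuple rate a is just (total - startup) / ntuples:

    Plan A (indexscan):      startup b = 1,   total = 1001  =>  a = 0.1
    Plan B (seqscan + sort): startup b = 500, total = 600   =>  a = 0.01

    at LIMIT 10:   A costs 1 + 0.1*10 = 2     B costs 500 + 0.01*10 = 500.1
    at all 10000:  A costs 1001               B costs 600

Plan A wins under a small LIMIT while Plan B wins for full retrieval, which is why the comparators must keep any plan that wins on either startup or total cost.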
{
"msg_contents": "Tom - \nThis would fit in really well with some ideas I've been working on\nwith remote table access. Mariposa makes the design decision that\nanything might be remote, and the optimizer doesn't get to know what\nis and what isn't, so they run the single-site optimization, basically\nassuming everything is local, then fragment the plan and ship parts off\nfor remote execution.\n\nI've been thinking about ways to let the optimizer know about remote\ntables, and the capabilites of the server serving the remote table.\nI won't fill in the details here. I'm writing up a proposal for how\nthis will all work: I've got a telecon today to gauge administrative\nsupport for developing this during work hours, which will _dramatically_\nspeed up it's implementation ;-)\n\nIt does seem to me, however, that the cost of going remote is best\ndescribed with the a*N+b model you're describing here: for remote tables,\nb will be large, and dominate, unless you're pulling a _lot_ of tuples.\n\nSuffice to say, sounds great to me.\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n\nOn Thu, Feb 10, 2000 at 11:48:15AM -0500, Tom Lane wrote:\n> We have discussed in the past the need for the optimizer to take LIMIT\n> into account when choosing plans. Currently, since planning is done\n> on the basis of total plan cost for retrieving all tuples, there's\n> no way to prefer a \"fast start\" plan over \"slow start\". But that's\n> what you want if there's a small LIMIT.\n> \n> Up to now I haven't seen a practical way to fix this, because the\n> optimizer does its planning bottom-up (start with scan plans, then\n> make joins, etc) and there's no easy way to know at the bottom of\n> the pyramid whether you can expect to take advantage of a LIMIT that\n> exists at the top. For example, if there's a SORT or GROUP step\n> in between, you can't apply the LIMIT to the bottom level; but the\n> bottom guys don't know whether there will be such a step.\n> \n> I have thought of a fairly clean way to attack this problem, which\n> is to represent the cost of a plan in two parts instead of only one.\n> Instead of just \"cost\", have \"startup cost\" and \"cost per tuple\".\n> (Actually, it'd probably be easier to work with \"startup cost\" and\n> \"total cost if all tuples are retrieved\", but that's a trivial\n> representational detail; the point is that our cost model will now be\n> of the form a*N+b when N tuples are retrieved.) It'd be pretty easy\n> to produce plausible numbers for all the plan types we use. Then,\n> the plan comparators would keep any plan that wins on either startup\n> or total cost, rather than only considering total cost. Finally\n> at the top level of planning, when there is a LIMIT the preferred\n> plan would be selected by comparing a*LIMIT+b rather than total cost.\n> \n> I think I can get this done before beta, but before I go into hack-\n> attack mode I wanted to run it up the flagpole and see if anyone\n> has any complaints or better ideas.\n> \n> \t\t\tregards, tom lane\n> \n> ************\n",
"msg_date": "Thu, 10 Feb 2000 11:50:33 -0600",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation"
},
{
"msg_contents": "\nA couple of things occur to me. One is that it sounds like this\nproposal could mean that successive SELECTS with LIMIT could\nexecute completely different plans and therefore return inconsistent\nresults. For example, let's say I have 26 customers a through z.\nMy first call to SELECT name from customer limit 3 might return...\na\nb\nc\nand then my next SELECT name from customer limit 3, 3 might return\na\nb \nc\nagain, when I might expect d e f. Of course in this case I could SORT,\nbut unless name is a unique field that won't work. I could sort on oid,\nbut that is expensive if I don't really want it. I could use a cursor,\nbut web pages don't like to do that because you don't know how long you \nmay need to keep the cursor open.\n\nIn short, I think the fact that limit doesn't alter the plan may\nbe more of a feature than a bug.\n\nThe other thing is, I would like at some stage to change limit so\nthat it is attached to a SELECT rather than an entire query so\nyou could...\nSELECT * from x where y in (SELECT y from z LIMIT 10) LIMIT 20;\nand I'm not sure how this would interact with that.\n\n\nTom Lane wrote:\n> \n> We have discussed in the past the need for the optimizer to take LIMIT\n> into account when choosing plans. Currently, since planning is done\n> on the basis of total plan cost for retrieving all tuples, there's\n> no way to prefer a \"fast start\" plan over \"slow start\". But that's\n> what you want if there's a small LIMIT.\n> \n> Up to now I haven't seen a practical way to fix this, because the\n> optimizer does its planning bottom-up (start with scan plans, then\n> make joins, etc) and there's no easy way to know at the bottom of\n> the pyramid whether you can expect to take advantage of a LIMIT that\n> exists at the top. For example, if there's a SORT or GROUP step\n> in between, you can't apply the LIMIT to the bottom level; but the\n> bottom guys don't know whether there will be such a step.\n> \n> I have thought of a fairly clean way to attack this problem, which\n> is to represent the cost of a plan in two parts instead of only one.\n> Instead of just \"cost\", have \"startup cost\" and \"cost per tuple\".\n> (Actually, it'd probably be easier to work with \"startup cost\" and\n> \"total cost if all tuples are retrieved\", but that's a trivial\n> representational detail; the point is that our cost model will now be\n> of the form a*N+b when N tuples are retrieved.) It'd be pretty easy\n> to produce plausible numbers for all the plan types we use. Then,\n> the plan comparators would keep any plan that wins on either startup\n> or total cost, rather than only considering total cost. Finally\n> at the top level of planning, when there is a LIMIT the preferred\n> plan would be selected by comparing a*LIMIT+b rather than total cost.\n> \n> I think I can get this done before beta, but before I go into hack-\n> attack mode I wanted to run it up the flagpole and see if anyone\n> has any complaints or better ideas.\n> \n> regards, tom lane\n> \n> ************\n",
"msg_date": "Fri, 11 Feb 2000 10:01:46 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation"
},
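A minimal sketch of the workaround alluded to above, reusing the hypothetical customer table: adding a unique column (oid here) as a tie-breaker in the ORDER BY makes the page boundaries deterministic even when name is not unique, at the cost of the sort Chris wants to avoid:

    SELECT name FROM customer ORDER BY name, oid LIMIT 3;
    SELECT name FROM customer ORDER BY name, oid LIMIT 3 OFFSET 3;  -- next page

With a total order imposed, the second query has to pick up where the first left off, regardless of which plan the optimizer chooses.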
{
"msg_contents": "At 10:01 AM 2/11/00 +1100, Chris Bitmead wrote:\n>\n>A couple of things occur to me. One is that it sounds like this\n>proposal could mean that successive SELECTS with LIMIT could\n>execute completely different plans and therefore return inconsistent\n>results. For example, let's say I have 26 customers a through z.\n>My first call to SELECT name from customer limit 3 might return...\n>a\n>b\n>c\n>and then my next SELECT name from customer limit 3, 3 might return\n>a\n>b \n>c\n>again, when I might expect d e f. Of course in this case I could SORT,\n>but unless name is a unique field that won't work.\n\nWell...SQL *is* a set language, and the tuples returned aren't guaranteed\nto be returned in the same order from query to query. The order in\nwhich they're returned is entirely undefined.\n\nYou MUST establish an order on the target tuples if you expect to\nsee them returned in a consistent order. The RDMS only has to\ndeliver the tuples that satisfy the query, nothing more.\n\nYou aren't guaranteed what you want even with the optimizer the\nway it is:\n\ndonb=# select * from foo;\n i \n---\n 1\n 2\n(2 rows)\n\ndonb=# delete from foo where i=1;\nDELETE 1\ndonb=# insert into foo values(1);\nINSERT 147724 1\ndonb=# select * from foo;\n i \n---\n 2\n 1\n(2 rows)\n\nThis isn't the only way to impact the ordering of delivered \nrows, either. VACUUM ANALYZE could do it, for instance...\n\n>The other thing is, I would like at some stage to change limit so\n>that it is attached to a SELECT rather than an entire query so\n>you could...\n>SELECT * from x where y in (SELECT y from z LIMIT 10) LIMIT 20;\n>and I'm not sure how this would interact with that.\n\nSince ORDER BY applies to the target row, the rows returned from\nthe subselect would be in indeterminate order anyway...\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Thu, 10 Feb 2000 15:23:48 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation"
},
{
"msg_contents": "Don Baccus wrote:\n> Well...SQL *is* a set language, and the tuples returned aren't guaranteed\n> to be returned in the same order from query to query. The order in\n> which they're returned is entirely undefined.\n\nWhich would make LIMIT a pretty useless function unless you include\nevery field in your ORDER BY, otherwise LIMIT returns not defined\nresults.\nTo keep strict SET based semantics LIMIT should disallowed unless you\nORDER BY a UNIQUE field, or you ORDER BY with every single field in the\nclause.\n\n> You MUST establish an order on the target tuples if you expect to\n> see them returned in a consistent order. The RDMS only has to\n> deliver the tuples that satisfy the query, nothing more.\n> \n> You aren't guaranteed what you want even with the optimizer the\n> way it is:\n\nI know, I know, but the current behaviour is \"close enough\" for\na lot of applications.\n\n> >The other thing is, I would like at some stage to change limit so\n> >that it is attached to a SELECT rather than an entire query so\n> >you could...\n> >SELECT * from x where y in (SELECT y from z LIMIT 10) LIMIT 20;\n> >and I'm not sure how this would interact with that.\n> \n> Since ORDER BY applies to the target row, the rows returned from\n> the subselect would be in indeterminate order anyway...\n\nOh. Well then I'd like ORDER BY in the subselect too :-).\n",
"msg_date": "Fri, 11 Feb 2000 10:57:12 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Tom Lane\n> \n> I have thought of a fairly clean way to attack this problem, which\n> is to represent the cost of a plan in two parts instead of only one.\n> Instead of just \"cost\", have \"startup cost\" and \"cost per tuple\".\n> (Actually, it'd probably be easier to work with \"startup cost\" and\n> \"total cost if all tuples are retrieved\", but that's a trivial\n> representational detail; the point is that our cost model will now be\n> of the form a*N+b when N tuples are retrieved.) It'd be pretty easy\n> to produce plausible numbers for all the plan types we use. Then,\n> the plan comparators would keep any plan that wins on either startup\n> or total cost, rather than only considering total cost. Finally\n> at the top level of planning, when there is a LIMIT the preferred\n> plan would be selected by comparing a*LIMIT+b rather than total cost.\n>\n\nI have no objection but have a question.\n\nIt seems current cost estimation couldn't be converted into a*N+b\nform exactly. For example,the cost of seq scan is\n\tcount of pages + count of tuples * cpu_weight + ..\ncount of pages couldn't be converted into a*N form.\nThe cost of index scan is more complicated.\nI thought of no clear way to treat it when I thought about\nthis item once.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Fri, 11 Feb 2000 12:20:40 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Solution for LIMIT cost estimation"
},
{
"msg_contents": "Chris Bitmead <[email protected]> writes:\n> ... it sounds like this\n> proposal could mean that successive SELECTS with LIMIT could\n> execute completely different plans and therefore return inconsistent\n> results.\n> In short, I think the fact that limit doesn't alter the plan may\n> be more of a feature than a bug.\n\nA good point (one I'm embarrassed not to have thought about!) but\nI don't think it's a reason not to push forward in this direction.\nWe have had *way* too many complaints about Postgres not being able\nto optimize LIMITed queries, so I'm not willing to ignore the issue;\nsomething's got to be done about it.\n\nAs Don B. points out nearby, there's no guarantee anyway that\nrepeatedly issuing the same query with different LIMIT parameters\nwill get you consistent results. The only upright and morally\ncorrect approach is to use DECLARE CURSOR followed by FETCH commands\n(all within a transaction of course) in order to get results that\nare really all part of a single query. Now DECLARE CURSOR is also\npresently optimized on the basis of fetching the entire result set,\nso that is still a problem --- I neglected to mention before that\nI was intending to tweak the optimizer to optimize that case like a\nmoderate-sized LIMIT.\n\nBut having said that, I hear what you're saying and I think it's\nworth thinking about. Here are four possible alternative responses:\n\n1. Optimize the query as I sketched previously, but using the \"count\"\npart of the LIMIT spec while deliberately ignoring the \"offset\".\nThen you get consistent results for fetching different chunks of the\nquery result as long as you use the same count each time. (And as long\nas no one else is changing the DB meanwhile, but we'll take that as\nread for each of these choices.)\n\n2. Ignore both the count and offset, and optimize any query containing\na LIMIT clause on the basis of some fixed assumption about what fraction\nof the tuples will be fetched (I'm thinking something like 1% to 10%).\nThis allows different fetch sizes to be used without destroying\nconsistency, but it falls down on the goal of correctly optimizing\n\"LIMIT 1\" hacks.\n\n3. Use the count+offset for all it's worth, and document that you\ncan't expect to get consistent results from asking for different\nLIMITed chunks of the \"same\" query unless you use ORDER BY to\nensure consistent ordering of the tuples.\n\n4. Fascist variant of #3: make LIMIT without ORDER BY be an error.\n\nSQL92 does not define LIMIT at all, so it's not much help in\ndeciding what to do. Is there anything in SQL3? What do other\nDBMSes do about this issue? Comments, other variants, better ideas\nanyone?\n\n> The other thing is, I would like at some stage to change limit so\n> that it is attached to a SELECT rather than an entire query so\n> you could...\n> SELECT * from x where y in (SELECT y from z LIMIT 10) LIMIT 20;\n> and I'm not sure how this would interact with that.\n\nSince ORDER BY is only allowed at the top level by SQL92, there\nwould be no way for the user to ensure predictable results from\nsuch a query. I think that'd be a dangerous path to go down.\nHowever, if you had an answer that ensured consistent results from\nqueries with sub-LIMITs, I don't see that there'd be any problem\nwith allowing the optimizer to optimize 'em.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 10 Feb 2000 22:52:12 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation "
},
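For reference, a minimal sketch of the cursor approach described above, again against the hypothetical customer table; successive FETCHes walk one consistent result set because it really is a single query:

    BEGIN;
    DECLARE c CURSOR FOR SELECT name FROM customer ORDER BY name;
    FETCH 3 FROM c;    -- rows 1-3
    FETCH 3 FROM c;    -- rows 4-6, continuing the same scan
    END;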
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> It seems current cost estimation couldn't be converted into a*N+b\n> form exactly. For example,the cost of seq scan is\n> \tcount of pages + count of tuples * cpu_weight + ..\n> count of pages couldn't be converted into a*N form.\n> The cost of index scan is more complicated.\n\nIt would not be an exact conversion, because the cost of a query is\nclearly *not* a perfectly linear function of the number of tuples\nretrieved before stopping. Another example, besides the ones you\nmention, is that a nested-loop join will suffer a \"hiccup\" in\noutput rate each time it restarts the inner scan, if the inner scan\nis of a kind that has nontrivial startup cost. But I believe that\nthis effect is not very significant compared to all the other\napproximations the optimizer already has to make.\n\nBasically, my proposal is that the plan cost estimation routines\nprovide a \"startup cost\" (cost expended to retrieve the first\ntuple) in addition to the \"total cost\" they already estimate.\nThen, upper-level planning routines that want to estimate the cost\nof fetching just part of the query result will estimate that cost\nby linear interpolation between the startup cost and the total\ncost. Sure, it's just a rough approximation, but it'll be a long\ntime before that's the worst error made by the planner ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 10 Feb 2000 23:31:36 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation "
},
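Spelled out, the interpolation described above: with startup cost s and total cost t for a plan expected to yield N tuples, the estimated cost of stopping after k tuples is roughly

    cost(k) = s + (t - s) * k/N

which is the a*N+b form again, with b = s and a = (t - s)/N.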
{
"msg_contents": "At 22:52 10/02/00 -0500, Tom Lane wrote:\n>\n>But having said that, I hear what you're saying and I think it's\n>worth thinking about. Here are four possible alternative responses:\n>\n\nAnother option is to do what Dec/Rdb does, and allow either optimizer hints\nin a saved plan, or via modified SQL (less portable):\n\n select * from foo limit 1 row optimize for fast first;\n\n\nI also have a question: does the optimizer know about relevant indexes when\nit is trying to return an ordered result set? If not, then 'fast first'\nretrieval may be substantially improved by including such knowledge.\n\nie.\n\n select * from foo order by f1,f2 limit 1 row;\n\nshould be trivial if there is an index on (f1,f2).\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Fri, 11 Feb 2000 16:38:10 +1100",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation "
},
{
"msg_contents": "For my own curiousity, how does the presence of limit affect a plan\nanyway?\n",
"msg_date": "Fri, 11 Feb 2000 17:15:23 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation"
},
{
"msg_contents": "Philip Warner <[email protected]> writes:\n> Another option is to do what Dec/Rdb does, and allow either optimizer hints\n> in a saved plan, or via modified SQL (less portable):\n\n> select * from foo limit 1 row optimize for fast first;\n\nThe former is not going to happen in time for 7.0, and the latter is\nnot entirely palatable --- we are trying to become more SQL-spec-\ncompliant, not less...\n\n> I also have a question: does the optimizer know about relevant indexes when\n> it is trying to return an ordered result set? If not, then 'fast first'\n> retrieval may be substantially improved by including such knowledge.\n\nIt does know about indexes. The problem is that it is making planning\nchoices on the basis of cost to retrieve the entire ordered result set,\nwhich leads to pessimal planning when you don't really want any such\nthing.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 11 Feb 2000 01:17:40 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation "
},
{
"msg_contents": "At 10:52 PM 2/10/00 -0500, Tom Lane wrote:\n\n>4. Fascist variant of #3: make LIMIT without ORDER BY be an error.\n>\n>SQL92 does not define LIMIT at all, so it's not much help in\n>deciding what to do. Is there anything in SQL3? What do other\n>DBMSes do about this issue? Comments, other variants, better ideas\n>anyone?\n\nWell ... for my money I never expected LIMIT to be meaningful in\nthe sense of being deterministic without an ORDER BY clause.\n\nBut ... that doesn't mean that some folks might not want to use\nit differently. What if LIMIT 2 were more efficient that COUNT(*)\nin order to determine if more than one row satisfies a condition?\n\nI don't know if that's even a remote possibility given the current\nimplementation, but it is an example where a non-deterministic\ntuple ordering might not matter.\n\nBut I wouldn't feel badly at all if LIMIT limited to queries\nwith ORDER BY. I think this could be done gramatically, i.e.\n\n[query] ORDER BY \n\nis the SQL paradign, and you'd just hang LIMIT on ORDER BY (or\nmore properly at the same grammar level allow them in any order).\n\n[ORDER BY | LIMIT clause]*\n\nin one form of pseudo-grammar, with appropriate semantic checking\nso you can't say ORDER BY .. ORDER BY ...\n\n\n>\n>> The other thing is, I would like at some stage to change limit so\n>> that it is attached to a SELECT rather than an entire query so\n>> you could...\n>> SELECT * from x where y in (SELECT y from z LIMIT 10) LIMIT 20;\n>> and I'm not sure how this would interact with that.\n>\n>Since ORDER BY is only allowed at the top level by SQL92, there\n>would be no way for the user to ensure predictable results from\n>such a query. I think that'd be a dangerous path to go down.\n\nYep.\n\n>However, if you had an answer that ensured consistent results from\n>queries with sub-LIMITs, I don't see that there'd be any problem\n>with allowing the optimizer to optimize 'em.\n\nNo, it's not an optimizer problem. \n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Thu, 10 Feb 2000 22:35:24 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation "
},
{
"msg_contents": "Don Baccus wrote:\n\n> But ... that doesn't mean that some folks might not want to use\n> it differently. What if LIMIT 2 were more efficient that COUNT(*)\n> in order to determine if more than one row satisfies a condition?\n\nselect count(*) > 1 from a;\n\nAnd if that's not efficient, why not optimise _that_, since it \nexpresses directly what you want?\n\n> But I wouldn't feel badly at all if LIMIT limited to queries\n> with ORDER BY. I think this could be done gramatically, i.e.\n> \n> [query] ORDER BY\n\nIf you are going to limit it thus, it only makes sense if you\neither order by a unique key or order by every single column.\nOtherwise, why limit it at all? And that can't be determined\ngramatically.\n",
"msg_date": "Fri, 11 Feb 2000 17:57:59 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation"
},
{
"msg_contents": "Chris Bitmead <[email protected]> writes:\n> For my own curiousity, how does the presence of limit affect a plan\n> anyway?\n\nAt the moment, it doesn't. But it should. To take an extreme example:\n\n\tSELECT * FROM table WHERE x > 100 ORDER BY x LIMIT 1;\n\nto get the tuple with lowest x > 100. Assuming that there is an index\non x, the right way to implement this is with an indexscan, because a\nsingle probe into the index will pull out the tuple you want. But right\nnow the optimizer will choose a plan as if the LIMIT weren't there,\nie on the basis of estimated total cost to retrieve the whole ordered\nresult set. On that basis it might well choose sequential scan + sort,\nso you'd have to wait around for a sort to complete before you get your\nanswer.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 11 Feb 2000 09:59:59 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation "
},
{
"msg_contents": "Don Baccus <[email protected]> writes:\n> Well ... for my money I never expected LIMIT to be meaningful in\n> the sense of being deterministic without an ORDER BY clause.\n\n> But ... that doesn't mean that some folks might not want to use\n> it differently. What if LIMIT 2 were more efficient that COUNT(*)\n> in order to determine if more than one row satisfies a condition?\n\nHmm, that's an excellent example indeed. A slight variant that is\neven more plausible is LIMIT 1 when you just want to know if there\nis any tuple satisfying the WHERE condition, and you don't really\ncare about which one you get.\n\n> I don't know if that's even a remote possibility given the current\n> implementation,\n\nAbsolutely --- COUNT(*) doesn't provide any way of stopping early,\nso a LIMITed query could be far faster. Given an appropriate plan\nof course. The problem right now is that the optimizer is quite\nlikely to pick the wrong plan.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 11 Feb 2000 10:07:04 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation "
},
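A sketch of the two formulations on a hypothetical orders table (names made up here): with an appropriate plan the LIMIT form can stop at the first match, while count(*) must visit every qualifying tuple:

    SELECT count(*) FROM orders WHERE status = 'open';    -- scans all matches
    SELECT 1 FROM orders WHERE status = 'open' LIMIT 1;   -- may stop at the first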
{
"msg_contents": "On Thu, Feb 10, 2000 at 10:52:12PM -0500, Tom Lane wrote:\n> \n> SQL92 does not define LIMIT at all, so it's not much help in\n> deciding what to do. Is there anything in SQL3? What do other\n> DBMSes do about this issue? Comments, other variants, better ideas\n> anyone?\n> \n\nI know I'm getting in on this late, but I thought I'd answer this.\nThe SQL92 draft only mentions LIMIT in the list of reserved words,\nand once in the index, pointing to a page on lexical elements of SQL.\n\nthe SQL3 draft that Chris pointed me at (Aug94) only mentions LIMIT as a\nlimit clause of a RECURSIVE UNION, whatever that is. (No time to examine\nit right now) This is from the file sql-foundation-aug94.txt.\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Fri, 11 Feb 2000 09:41:36 -0600",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation"
},
{
"msg_contents": "\"Ross J. Reedstrom\" wrote:\n> \n> \n> the SQL3 draft that Chris pointed me at (Aug94) only mentions LIMIT as a\n> limit clause of a RECURSIVE UNION, whatever that is. (No time to examine\n> it right now) This is from the file sql-foundation-aug94.txt.\n> \n\nIf I understood it right, RECURSIVE UNION is a way to query a tree\nstructured \ntable, whith parent_id's in each child row.\n\nAFAIK even have it in the TODO list ;)\n\nThe LIMIT there probably limits how many levels down the tree are\nqueried.\n\n---------\nHannu\n",
"msg_date": "Fri, 11 Feb 2000 16:41:11 +0000",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> SELECT * FROM table WHERE x > 100 ORDER BY x LIMIT 1;\n\nCould it _ever_ be faster to sort the tuples when there is already an\nindex that can provide them in sorted order?\n\n\n> \n> to get the tuple with lowest x > 100. Assuming that there is an index\n> on x, the right way to implement this is with an indexscan, because a\n> single probe into the index will pull out the tuple you want. But right\n> now the optimizer will choose a plan as if the LIMIT weren't there,\n> ie on the basis of estimated total cost to retrieve the whole ordered\n> result set. On that basis it might well choose sequential scan + sort,\n> so you'd have to wait around for a sort to complete before you get your\n> answer.\n> \n> regards, tom lane\n\n-- \nChris Bitmead\nmailto:[email protected]\n",
"msg_date": "Sun, 13 Feb 2000 23:07:03 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation"
},
{
"msg_contents": "Hannu Krosing wrote:\n\n> If I understood it right, RECURSIVE UNION is a way to query a tree\n> structured\n> table, whith parent_id's in each child row.\n> \n> AFAIK even have it in the TODO list ;)\n> \n> The LIMIT there probably limits how many levels down the tree are\n> queried.\n\nOriginal postgres used to be able to do this. The syntax \n\"retrieve* from xxx\" would keep executing (eg traversing a tree) until\ncomplete. Might be worth checking the original sources when you come to\ndo this.\n\n-- \nChris Bitmead\nmailto:[email protected]\n",
"msg_date": "Sun, 13 Feb 2000 23:32:50 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation"
},
{
"msg_contents": "At 11:07 PM 2/13/00 +1100, Chris wrote:\n>Tom Lane wrote:\n>> \n>> SELECT * FROM table WHERE x > 100 ORDER BY x LIMIT 1;\n>\n>Could it _ever_ be faster to sort the tuples when there is already an\n>index that can provide them in sorted order?\n\nThat's yet another optimization. Working on optimizing the execution\nof language constructs, whether statement oriented like C or set \noriented like SQL, is largely a matter of accretion. Just because\nyou can make the case with index run fast doesn't mean you don't\nwant to consider the case where an index isn't available.\n\nI think you're on the losing end of this one, Chris. In essence\nyou're asking that the optimizer not take advantage of the\nset-oriented, non-ordered nature of SQL queries in order to make\nyour non-portable code easier to right.\n\nTom's example is only one instance where fully exploiting the \nfact that values returned by queries are unordered. I don't think\nwe can really live with the restriction that queries must always\nreturn tuples in the same order.\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Sun, 13 Feb 2000 06:51:37 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation"
},
{
"msg_contents": "Chris wrote:\n> \n> Tom Lane wrote:\n> >\n> > SELECT * FROM table WHERE x > 100 ORDER BY x LIMIT 1;\n> \n> Could it _ever_ be faster to sort the tuples when there is already an\n> index that can provide them in sorted order?\n\nThis has been discussed on this list several times, and it appears that\nselect+sort is quite often faster than index scan, mainly due to the fact \nthat tables live on disk and disk accesses are expensive, and when doing \nindex scans:\n\n1- you have to scan two files (index and data), when they are on the same \n disk it is much more 2 times slower than sacnning a single file even\n when doing it sequentially\n\n2- scans on the both files are random access, so seek and latency times \n come into play and readahead is useless\n\n3- you often read the same data page many times\n\n-------------\nHannu\n",
"msg_date": "Sun, 13 Feb 2000 18:24:59 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation"
},
{
"msg_contents": "Chris <[email protected]> writes:\n> Could it _ever_ be faster to sort the tuples when there is already an\n> index that can provide them in sorted order?\n\nYes --- in fact usually, for large tables. Once your table gets too\nbig for the disk cache to be effective, indexscan performance approaches\none random-access page fetch per tuple. Sort performance behaves more\nor less as p*log(p) accesses for p pages; and a far larger proportion\nof those accesses are sequential than in the indexscan case. So the\nsort will win if you have more than log(p) tuples per page. Do the\narithmetic...\n\nThe optimizer's job would be far simpler if no-brainer rules like\n\"indexscan is always better\" worked.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 13 Feb 2000 11:53:49 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation "
},
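Doing that arithmetic with made-up numbers: for p = 10000 pages, log2(p) is about 13.3, so the sort costs on the order of p*log(p) = 133000 page accesses, largely sequential, while the indexscan costs roughly one random fetch per tuple. At even a modest 50 tuples per page that is 500000 random fetches; since 50 > 13.3, the sort wins comfortably.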
{
"msg_contents": "Chris Bitmead <[email protected]> writes:\n> Don Baccus wrote:\n>> But ... that doesn't mean that some folks might not want to use\n>> it differently. What if LIMIT 2 were more efficient that COUNT(*)\n>> in order to determine if more than one row satisfies a condition?\n\n> select count(*) > 1 from a;\n\n> And if that's not efficient, why not optimise _that_, since it \n> expresses directly what you want?\n\nPracticality, mostly. To do it that way, the optimizer would have\nto have extremely specific hard-wired knowledge about the behavior\nof count() (which flies in the face of Postgres' open-ended approach\nto aggregate functions); furthermore it would have to examine every\nquery to see if there is a count() - inequality operator - constant\nclause placed in such a way that no other result need be delivered\nby the query. That's a lot of mechanism and overhead to extract the\nsame information that is provided directly by LIMIT; and it doesn't\neliminate the need for LIMIT, since this is only one application\nfor LIMIT (not even the primary one IMHO).\n\nI have currently got it working (I think; not too well tested yet)\nusing the proposal I offered before of \"pay attention to the size\nof LIMIT, but ignore OFFSET\", so that the same query plan will be\nderived from similar queries with different OFFSETs. Does anyone\nhave a substantial gripe with that compromise?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 13 Feb 2000 12:13:00 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation "
},
{
"msg_contents": "At 06:24 PM 2/13/00 +0200, Hannu Krosing wrote:\n>Chris wrote:\n>> \n>> Tom Lane wrote:\n>> >\n>> > SELECT * FROM table WHERE x > 100 ORDER BY x LIMIT 1;\n>> \n>> Could it _ever_ be faster to sort the tuples when there is already an\n>> index that can provide them in sorted order?\n\n>This has been discussed on this list several times, and it appears that\n>select+sort is quite often faster than index scan, mainly due to the fact \n>that tables live on disk and disk accesses are expensive, and when doing \n>index scans:\n>\n>1- you have to scan two files (index and data), when they are on the same \n> disk it is much more 2 times slower than sacnning a single file even\n> when doing it sequentially\n>\n>2- scans on the both files are random access, so seek and latency times \n> come into play and readahead is useless\n>\n>3- you often read the same data page many times\n\nHmmm...yet any studly Oracle type knows that despite whatever veracity\nthis analysis has, in reality Oracle will utilize the index in the\nmanner suggested by Chris and the difference in execution time is,\nwell, astonishing. Even without the limit.\n\nWe just had a discussion regarding this a few days ago over on\nPhilip Greenspun's web/db forum, where someone ran into a situation\nwhere Oracle didn't recognize that the index could be used (involving\nfunction calls, where presently Oracle doesn't dig into the parameter\nlist and to look to see if the referenced columns are indexed when\ndoing its optimization). After tricking Oracle into using the index\nby adding an additional column reference, he got a speedup of well\nover an order of magnitude.\n\nAgain, with no limit clause (which Oracle doesn't implement anyway).\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Sun, 13 Feb 2000 11:09:34 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation"
},
{
"msg_contents": "At 11:53 AM 2/13/00 -0500, Tom Lane wrote:\n>Chris <[email protected]> writes:\n>> Could it _ever_ be faster to sort the tuples when there is already an\n>> index that can provide them in sorted order?\n>\n>Yes --- in fact usually, for large tables. Once your table gets too\n>big for the disk cache to be effective, indexscan performance approaches\n>one random-access page fetch per tuple. Sort performance behaves more\n>or less as p*log(p) accesses for p pages; and a far larger proportion\n\n>of those accesses are sequential than in the indexscan case. So the\n>sort will win if you have more than log(p) tuples per page. Do the\n>arithmetic...\n>\n>The optimizer's job would be far simpler if no-brainer rules like\n>\"indexscan is always better\" worked.\n\nYet the optimizer currently takes the no-brainer point-of-view that\n\"indexscan is slow for tables much larger than the disk cache, therefore\ntreat all tables as though they're much larger than the disk cache\".\n\nThe name of the game in the production database world is to do\neverything possible to avoid a RAM bottleneck, while the point\nof view PG is taking seems to be that RAM is always a bottleneck.\n\nThis presumption is far too pessimistic.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Sun, 13 Feb 2000 11:14:27 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation "
},
{
"msg_contents": "At 12:13 PM 2/13/00 -0500, Tom Lane wrote:\n>Chris Bitmead <[email protected]> writes:\n>> Don Baccus wrote:\n>>> But ... that doesn't mean that some folks might not want to use\n>>> it differently. What if LIMIT 2 were more efficient that COUNT(*)\n>>> in order to determine if more than one row satisfies a condition?\n>\n>> select count(*) > 1 from a;\n>\n>> And if that's not efficient, why not optimise _that_, since it \n>> expresses directly what you want?\n>\n>Practicality, mostly. To do it that way, the optimizer would have\n>to have extremely specific hard-wired knowledge about the behavior\n>of count() (which flies in the face of Postgres' open-ended approach\n>to aggregate functions);\n\nActually, the aggregate interface could pass in a predicate test that\nthe aggregate function could use to say \"stop\" once it knows that\nthe result of the predicate will be true at the end of the query.\n\nOf the standard aggregates, \"count()\" is probably the only one that\ncould make use of it. And of course only rarely is count() used\nin such a way.\n\nAs someone who has long made his living implementing optimizing\ncompilers, I don't think that optimizing expressions such as the\none Chris mentions is all that difficult a task.\n\nBut there are far more important things to think about implementing\nin Postgres.\n\n>I have currently got it working (I think; not too well tested yet)\n>using the proposal I offered before of \"pay attention to the size\n>of LIMIT, but ignore OFFSET\", so that the same query plan will be\n>derived from similar queries with different OFFSETs. Does anyone\n>have a substantial gripe with that compromise?\n\nNot me, that's for sure.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Sun, 13 Feb 2000 11:19:30 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation "
},
{
"msg_contents": "On 2000-02-10, Tom Lane mentioned:\n\n> 4. Fascist variant of #3: make LIMIT without ORDER BY be an error.\n\nGot my vote for that. At least make it a notice: \"NOTICE: LIMIT without\nORDER BY results in random data being returned\" -- That'll teach 'em. ;)\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Sun, 13 Feb 2000 22:43:15 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation "
},
{
"msg_contents": "Don Baccus wrote:\n\n> >> select count(*) > 1 from a;\n> >\n> >> And if that's not efficient, why not optimise _that_, since it\n> >> expresses directly what you want?\n> >\n> >Practicality, mostly. To do it that way, the optimizer would have\n> >to have extremely specific hard-wired knowledge about the behavior\n> >of count() (which flies in the face of Postgres' open-ended approach\n> >to aggregate functions);\n> \n> Actually, the aggregate interface could pass in a predicate test that\n> the aggregate function could use to say \"stop\" once it knows that\n> the result of the predicate will be true at the end of the query.\n\nThat's the kind of thing I had in mind.\n\n> Of the standard aggregates, \"count()\" is probably the only one that\n> could make use of it. And of course only rarely is count() used\n> in such a way.\n\nI think a lot of the agregates could make use of it. For example, tell\nme all the departments who have spent more than $1000,000 this year...\n\nselect deptid, sum(amount) > 1000000 from purchases group by deptid;\n\n> \n> As someone who has long made his living implementing optimizing\n> compilers, I don't think that optimizing expressions such as the\n> one Chris mentions is all that difficult a task.\n> \n> But there are far more important things to think about implementing\n> in Postgres.\n\nYep.\n\n> \n> >I have currently got it working (I think; not too well tested yet)\n> >using the proposal I offered before of \"pay attention to the size\n> >of LIMIT, but ignore OFFSET\", so that the same query plan will be\n> >derived from similar queries with different OFFSETs. Does anyone\n> >have a substantial gripe with that compromise?\n> \n> Not me, that's for sure.\n> \n> - Don Baccus, Portland OR <[email protected]>\n> Nature photos, on-line guides, Pacific Northwest\n> Rare Bird Alert Service and other goodies at\n> http://donb.photo.net.\n",
"msg_date": "Mon, 14 Feb 2000 10:11:36 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation"
},
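In standard SQL the departments example above would more directly be phrased with HAVING, as in this sketch against the same hypothetical purchases table. Note that an early-stop optimization is only safe here given a constraint that amount is non-negative, since otherwise a later row could pull the running sum back under the threshold; that is exactly the positivity point raised in the reply that follows:

    SELECT deptid, sum(amount)
    FROM purchases
    GROUP BY deptid
    HAVING sum(amount) > 1000000;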
{
"msg_contents": "Tom Lane wrote:\n\n> I have currently got it working (I think; not too well tested yet)\n> using the proposal I offered before of \"pay attention to the size\n> of LIMIT, but ignore OFFSET\", so that the same query plan will be\n> derived from similar queries with different OFFSETs. Does anyone\n> have a substantial gripe with that compromise?\n\nWould offset be any use if you did make use of it?\n",
"msg_date": "Mon, 14 Feb 2000 10:17:00 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation"
},
{
"msg_contents": "At 10:11 AM 2/14/00 +1100, Chris Bitmead wrote:\n\n>> Of the standard aggregates, \"count()\" is probably the only one that\n>> could make use of it. And of course only rarely is count() used\n>> in such a way.\n>\n>I think a lot of the agregates could make use of it. For example, tell\n>me all the departments who have spent more than $1000,000 this year...\n>\n>select deptid, sum(amount) > 1000000 from purchases group by deptid;\n\nThis would be harder, because you could only guarantee that sum is\nof all positive or negative numbers if the user provides a constraint.\n\n>> As someone who has long made his living implementing optimizing\n>> compilers, I don't think that optimizing expressions such as the\n>> one Chris mentions is all that difficult a task.\n>> \n>> But there are far more important things to think about implementing\n>> in Postgres.\n>\n>Yep.\n\nGood, because I was about to repeat myself :)\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Sun, 13 Feb 2000 15:29:12 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation"
},
{
"msg_contents": "Don Baccus <[email protected]> writes:\n>> The optimizer's job would be far simpler if no-brainer rules like\n>> \"indexscan is always better\" worked.\n\n> Yet the optimizer currently takes the no-brainer point-of-view that\n> \"indexscan is slow for tables much larger than the disk cache, therefore\n> treat all tables as though they're much larger than the disk cache\".\n\nAh, you haven't seen the (as-yet-uncommitted) optimizer changes I'm\nworking on ;-)\n\nWhat I still lack is a believable approximation curve for cache hit\nratio vs. table-size-divided-by-cache-size. Anybody seen any papers\nabout that? I made up a plausible-shaped function but it'd be nice to\nhave something with some actual theory or measurement behind it...\n\t\n(Of course the cache size is only a magic number in the absence of any\nhard info about what the kernel is doing --- but at least it will\noptimize big tables differently than small ones now.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 13 Feb 2000 18:43:31 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation "
},
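One crude plausible-shaped function, purely as an illustration: with C cache pages and a T-page table touched uniformly at random, take the hit ratio to be 1 when T <= C and roughly C/T when T > C, giving a curve that is flat at 1 for small tables and falls off hyperbolically beyond the cache size. For something with actual measurement behind it, Mackert and Lohman's \"Index Scans Using a Finite LRU Buffer: A Validated I/O Model\" (ACM TODS, 1989) derives a validated estimate of pages fetched as a function of buffer size, and may be the sort of paper being asked for here.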
{
"msg_contents": "At 06:43 PM 2/13/00 -0500, Tom Lane wrote:\n\n>Ah, you haven't seen the (as-yet-uncommitted) optimizer changes I'm\n>working on ;-)\n\nVery good!\n\n>What I still lack is a believable approximation curve for cache hit\n>ratio vs. table-size-divided-by-cache-size. Anybody seen any papers\n>about that? I made up a plausible-shaped function but it'd be nice to\n>have something with some actual theory or measurement behind it...\n>\n>(Of course the cache size is only a magic number in the absence of any\n>hard info about what the kernel is doing --- but at least it will\n>optimize big tables differently than small ones now.)\n\nIf you've got the memory and allocate sufficient space to shared\nbuffers and still have plenty of kernel cache space left over, the\noptimizer can hardly be over optimistic - things will fly! :)\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Sun, 13 Feb 2000 16:06:13 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation "
},
{
"msg_contents": "Hi postgresql hackers,\n\nI have a suggestion that might improve the performance of\npostgresql.\nThis is regarding the directory structure of /data/base.\nThe current situation is that every database has one\ndirectory, ie. \"mydb\", so you will have /data/base/mydb directory.\nAll the data files, index files, etc are in the same\n/data/base/mydb directory.\n\nIf I want to split data files and index files to different hardisk, it\nis not possible right now.\nThe only solution right now to improve the performance is to use RAID\nmethod.\n\nMy suggestion is to split files into 4 different directories:\n/data/base/mydb/data\n/data/base/mydb/index\n/data/base/mydb/dictionary\n/data/base/mydb/tmp\n\nSo I can put each directory on different hardisk, so I can\nhave 4 hardisks for 'mydb' database.\n\nIs it doable and a good idea?\n\nRegards,\nChai\n",
"msg_date": "Mon, 14 Feb 2000 09:47:34 +0700",
"msg_from": "Chairudin Sentosa <[email protected]>",
"msg_from_op": false,
"msg_subject": "Suggestion to split /data/base directory"
},
{
"msg_contents": "Chris Bitmead <[email protected]> writes:\n> Tom Lane wrote:\n>> I have currently got it working (I think; not too well tested yet)\n>> using the proposal I offered before of \"pay attention to the size\n>> of LIMIT, but ignore OFFSET\", so that the same query plan will be\n>> derived from similar queries with different OFFSETs. Does anyone\n>> have a substantial gripe with that compromise?\n\n> Would offset be any use if you did make use of it?\n\nYes, because the number of tuples that will *actually* get fetched\nis offset+limit. If you had a large offset so that the tuples\ngetting returned were from somewhere near the end of the query,\nthen choosing a fast-start algorithm would be a Bad Idea; you'd\nreally want a plan that optimizes on the basis of total cost\nrather than startup cost.\n\nHmm, I'm on the verge of talking myself out of the compromise ;-).\nI'm not sure how many people will really use large offsets, but\nanyone who does might be a pretty unhappy camper. If you're asking\nfor OFFSET 1000000 LIMIT 1, the thing might pick a nested loop\nwhich is exceedingly fast-start ... but also exceedingly expensive\nwhen you go ahead and fetch many tuples anyway.\n\nPerhaps we should stick to two alternatives:\n\n1. If LIMIT is present, optimize on an assumption that X% of the\ntuples are fetched, where X does *not* depend on the specific\nvalues given for OFFSET or LIMIT. (But we could make X a settable\nparameter...)\n\n2. Optimize using OFFSET+LIMIT as the expected number of tuples to\nfetch. Document that varying OFFSET or LIMIT will not necessarily\ngenerate consistent results unless you specify ORDER BY to force a\nconsistent tuple order.\n\nI don't really like #1, but I can see where #2 might cause some\nunhappiness as well. Comments, opinions?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 13 Feb 2000 23:24:35 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Chris Bitmead <[email protected]> writes:\n> > Tom Lane wrote:\n> >> I have currently got it working (I think; not too well tested yet)\n> >> using the proposal I offered before of \"pay attention to the size\n> >> of LIMIT, but ignore OFFSET\", so that the same query plan will be\n> >> derived from similar queries with different OFFSETs. Does anyone\n> >> have a substantial gripe with that compromise?\n> \n> > Would offset be any use if you did make use of it?\n> \n> Yes, because the number of tuples that will *actually* get fetched\n> is offset+limit. If you had a large offset so that the tuples\n> getting returned were from somewhere near the end of the query,\n> then choosing a fast-start algorithm would be a Bad Idea; you'd\n> really want a plan that optimizes on the basis of total cost\n> rather than startup cost.\n> Hmm, I'm on the verge of talking myself out of the compromise ;-).\n> I'm not sure how many people will really use large offsets, but\n> anyone who does might be a pretty unhappy camper. If you're asking\n> for OFFSET 1000000 LIMIT 1, the thing might pick a nested loop\n> which is exceedingly fast-start ... but also exceedingly expensive\n> when you go ahead and fetch many tuples anyway.\n> \n> Perhaps we should stick to two alternatives:\n> \n> 1. If LIMIT is present, optimize on an assumption that X% of the\n> tuples are fetched, where X does *not* depend on the specific\n> values given for OFFSET or LIMIT. (But we could make X a settable\n> parameter...)\n> \n> 2. Optimize using OFFSET+LIMIT as the expected number of tuples to\n> fetch. Document that varying OFFSET or LIMIT will not necessarily\n> generate consistent results unless you specify ORDER BY to force a\n> consistent tuple order.\n> \n> I don't really like #1, but I can see where #2 might cause some\n> unhappiness as well. Comments, opinions?\n\nI agree you should probably go the whole hog one way or the other. I\nthink\nignoring offset+limit is a useful option, but like I said at the\nbeginning, it doesn't bother me _that_ much.\n",
"msg_date": "Mon, 14 Feb 2000 15:32:31 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation"
},
{
"msg_contents": "At 09:47 AM 2/14/00 +0700, Chairudin Sentosa wrote:\n\n>My suggestion is to split files into 4 different directories:\n>/data/base/mydb/data\n>/data/base/mydb/index\n>/data/base/mydb/dictionary\n>/data/base/mydb/tmp\n\nMy preference would be for a simplistic \"create tablespace\" construct,\nso location information could be captured within the database itself.\n\nWe've had discussions about this in the past and there seems to be\nsome recognition that the ability to spread stuff around disk drives\nmight be useful. I mean, all those commercial sites that do it after\nmeasuring their bottlenecks can't ALL be wrong, right?\n\n\n>\n>So I can put each directory on different hardisk, so I can\n>have 4 hardisks for 'mydb' database.\n\nYou can already do this in an ugly fashion, by moving individual\nfiles via links (ln -s). ls *idx*, that kind of thing to find\nyour index tables (if you suffix them with \"idx\", then move and\nln to them.\n\n>\n>Is it doable and a good idea?\n\nDoable, but IMO a bad idea because it lowers the motivation for doing\na relatively simple CREATE TABLESPACE hack that gives even more \nflexibility, and allows the db user to query where their tables\nare stored within the db rather than depend on \"ls\".\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Sun, 13 Feb 2000 20:41:35 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Suggestion to split /data/base directory"
},
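For illustration only, the "create tablespace" construct Don suggests might read something like this (purely hypothetical syntax; nothing like it existed in PostgreSQL at the time, and all names are invented):

CREATE TABLESPACE idxspace LOCATION '/disk2/pg_index';
CREATE INDEX mytab_idx ON mytab (id) TABLESPACE idxspace;
-- the location could then be queried from the system catalogs instead of "ls"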
{
"msg_contents": "At 11:24 PM 2/13/00 -0500, Tom Lane wrote:\n>Chris Bitmead <[email protected]> writes:\n>> Tom Lane wrote:\n>>> I have currently got it working (I think; not too well tested yet)\n>>> using the proposal I offered before of \"pay attention to the size\n>>> of LIMIT, but ignore OFFSET\", so that the same query plan will be\n>>> derived from similar queries with different OFFSETs. Does anyone\n>>> have a substantial gripe with that compromise?\n>\n>> Would offset be any use if you did make use of it?\n>\n>Yes, because the number of tuples that will *actually* get fetched\n>is offset+limit.\n\nBravo, you're on it! I resisted responding...so far this thread\nis your's, baby.\n\n>2. Optimize using OFFSET+LIMIT as the expected number of tuples to\n>fetch. Document that varying OFFSET or LIMIT will not necessarily\n>generate consistent results unless you specify ORDER BY to force a\n>consistent tuple order.\n>\n>I don't really like #1, but I can see where #2 might cause some\n>unhappiness as well. Comments, opinions?\n\nAnyone unhappy about #2 doesn't really understand the SQL model.\n\nMy suggestion's pretty simple - the database world is full of folks\nwho are professionals and who understand the SQL model. \n\nWe shouldn't penalize them for their professionalism.\n\nThose who don't understand the SQL model should read the docmentation\nyou mention, of course, but the very fact that SQL doesn't impose\nan ordering on the returned tuples is so basic to the language that\nif they don't understand it, the doc should also recommend them to\n\"set theory for dummies\" and \"SQL for dummies\" (unless the latter\nwas actually written by a dummy). \n\nIn my narrow-minded compiler-writer space, it is not MY PROBLEM if\npeople don't bother learning the language they use. I may or may\nnot choose to become one who teaches the language, but whether or not\nI do has nothing to do with the implementation of the language.\n\nIt is perfectly fair to presume people understand the language. It\nis their job to learn it.\n\nIf they're surprised by how the language works, then they should've\nconsidered buying an SQL book, all of which that I've seen EMPHASIZE\nthe set orientation, and non-orderedness of queries.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Sun, 13 Feb 2000 20:57:11 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation "
},
{
"msg_contents": "At 03:32 PM 2/14/00 +1100, Chris Bitmead wrote:\n\n>I agree you should probably go the whole hog one way or the other. I\n>think\n>ignoring offset+limit is a useful option, but like I said at the\n>beginning, it doesn't bother me _that_ much.\n\nIt should bother you that folks who understand how SQL works might\nbe penalized in order to insulate the fact that those who don't know\nhow SQL works from an understanding of their own ignorance...\n\nShouldn't we be more concerned with folks who bother to read an\nSQL primer? Or Oracle or Informix docs on SQL?\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Sun, 13 Feb 2000 20:59:01 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation"
},
{
"msg_contents": "Don Baccus wrote:\n> \n> At 03:32 PM 2/14/00 +1100, Chris Bitmead wrote:\n> \n> >I agree you should probably go the whole hog one way or the other. I\n> >think\n> >ignoring offset+limit is a useful option, but like I said at the\n> >beginning, it doesn't bother me _that_ much.\n> \n> It should bother you that folks who understand how SQL works might\n> be penalized in order to insulate the fact that those who don't know\n> how SQL works from an understanding of their own ignorance...\n> \n> Shouldn't we be more concerned with folks who bother to read an\n> SQL primer? Or Oracle or Informix docs on SQL?\n\nLIMIT is not SQL, both as a technical fact, and philosophically\nbecause it reaches outside of set theory. What LIMIT does without\nORDER BY is non-deterministic, and therefore a subjective matter of\nwhat is the most useful: a faster answer, or a more consistant answer.\nMy predudices are caused by what I use PostgreSQL for, which is\nmore favourable to the latter.\n",
"msg_date": "Mon, 14 Feb 2000 16:28:53 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation"
},
{
"msg_contents": "At 23:24 13/02/00 -0500, Tom Lane wrote:\n>\n>Perhaps we should stick to two alternatives:\n>\n>1. If LIMIT is present, optimize on an assumption that X% of the\n>tuples are fetched, where X does *not* depend on the specific\n>values given for OFFSET or LIMIT. (But we could make X a settable\n>parameter...)\n>\n>2. Optimize using OFFSET+LIMIT as the expected number of tuples to\n>fetch. Document that varying OFFSET or LIMIT will not necessarily\n>generate consistent results unless you specify ORDER BY to force a\n>consistent tuple order.\n>\n>I don't really like #1, but I can see where #2 might cause some\n>unhappiness as well. Comments, opinions?\n\n#1 seems pretty nasty as a concept, unless of course this actually reflects\nthe way that PG retrieves rows. My guess is that it will have to retrieve\nrows 1 to (offset + limit), not (offset) to (offset + limit), so the whole\nappreximation should be based on #2. \n\n[Aside: I suspect that trying to solve problems for people who want to use\ncontext free (web) interfaces to retrieve blocks of rows is not a job for\nthe optimizer. It is far more suited to cursors and/or local temporary\ntables, both of which require some context].\n\n#2 seems more correct, in that it reflects a good estimation, but\npessimistic: with good indexes defined, the query may well only need to do\na scan of the index to get up to the 'offset-th' row. This, I am sure, must\nbe faster than retrieving all rows up to OFFSET.\n\nThis leaves two questions:\n\na. Does the optimizer know how to do 'index-only' queries (where all fields\nare satisfied by the index)\n\nb. Just to clarify, OFFSET does affect the tuples actually returned,\ndoesn't it?\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Mon, 14 Feb 2000 16:41:23 +1100",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation "
},
{
"msg_contents": "\nBTW, in the absense of an ORDER BY clause, doesn't offset totally\nlose its meaning? If you're going to do this optimisation,\nyou may as well ignore offset entirely in this case.\n",
"msg_date": "Mon, 14 Feb 2000 16:47:29 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation"
},
{
"msg_contents": "Philip Warner <[email protected]> writes:\n> #1 seems pretty nasty as a concept, unless of course this actually reflects\n> the way that PG retrieves rows. My guess is that it will have to retrieve\n> rows 1 to (offset + limit), not (offset) to (offset + limit), so the whole\n> appreximation should be based on #2. \n\nRight --- if we could start the query in the middle this would all be\na lot nicer, but we can't. The implementation of OFFSET is just to\ndiscard the first N tuples retrieved before beginning to hand any tuples\nback to the client. So the \"right\" approach for the optimizer is to\nassume that OFFSET+LIMIT tuples will be retrieved. The trouble is that\nthat can mean that the query plan changes depending on OFFSET, which\nleads to consistency problems if you don't lock down the tuple ordering\nwith ORDER BY.\n\n> a. Does the optimizer know how to do 'index-only' queries (where all fields\n> are satisfied by the index)\n\nPostgres doesn't have indexes that allow index-only queries --- you\nstill have to fetch the tuples, because the index doesn't carry\ncommit status. I think that's irrelevant anyway, since we're not\nonly interested in the behavior for simple queries...\n\n> b. Just to clarify, OFFSET does affect the tuples actually returned,\n> doesn't it?\n\nOf course.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 14 Feb 2000 01:32:33 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation "
},
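In other words (table and column names invented), the executor does all the work of producing the skipped rows before returning any:

-- fetches and discards the first 1000000 rows, then returns 10, so the
-- planner should cost this as fetching 1000010 tuples:
SELECT * FROM t ORDER BY id LIMIT 10 OFFSET 1000000;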
{
"msg_contents": "Chris Bitmead wrote:\n> \n> Don Baccus wrote:\n> >\n> > At 03:32 PM 2/14/00 +1100, Chris Bitmead wrote:\n> >\n> > >I agree you should probably go the whole hog one way or the other. I\n> > >think\n> > >ignoring offset+limit is a useful option, but like I said at the\n> > >beginning, it doesn't bother me _that_ much.\n> >\n> > It should bother you that folks who understand how SQL works might\n> > be penalized in order to insulate the fact that those who don't know\n> > how SQL works from an understanding of their own ignorance...\n> >\n> > Shouldn't we be more concerned with folks who bother to read an\n> > SQL primer? Or Oracle or Informix docs on SQL?\n> \n> LIMIT is not SQL, both as a technical fact, and philosophically\n> because it reaches outside of set theory.\n\nI see limit as a shortcut (plus an optimizer hint) for the sequence\nDECLARE CURSOR - MOVE offset - FETCH limit - CLOSE CURSOR\n\nIt's utility was much debated befor it was included in Postgres, \nthe main argument for inclusion being \"mySQL has it and it's useful \nfor fast-start queries\", the main argument against being \"it's not SQL,\npeople won't understand it a and will start to misuse it\".\n\nMaybe we should still discourage the use of LIMIT, and rather introduce \nanother \"mode\" for optimiser, activated by SET FastStart TO 'ON'.\nThen queries with limit could be rewritten into\nSET FastStart to 'ON';\nDECLARE\nMOVE\nFETCH\nCLOSE\nSET FastStart to PREVIOUS_VALUE;\n\nalso maybe we will need PUSH/POP for set commands ?\n\n> What LIMIT does without ORDER BY is non-deterministic, and therefore \n> a subjective matter of what is the most useful: a faster answer, \n> or a more consistant answer.\n\nAs SQL queries are all one-time things you can't be \"consistent\". \nIt's like being able to grab the same set of socks from a bag and \nthen trying to devise a strategy for getting them in same order \nwithout sorting them (i.e. possible but ridiculous)\n\nIf you need them in some order, you use ORDER BY, if you don't need \nany order you omit ORDER BY.\n\n> My predudices are caused by what I use PostgreSQL for, which is\n> more favourable to the latter.\n\nWhats wrong with using ORDER BY ? \n\nI can't imagine a set of queries that need to be consistent _almost_\nall the time, but without any order.\n\nIf you really need that kind of behaviour, the right decision is to \nselect the rows into a work table that has an additional column for \npreserving order and then do the limit queries from that table.\n\nBut in that case it is often faster to have an index on said column\nand to do \n WHERE ID BETWEEN OFFSET AND OFFSET+LIMIT\n ORDER BY ID\nthan to use LIMIT, more so for large offsets.\n",
"msg_date": "Mon, 14 Feb 2000 11:41:53 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation"
},
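A minimal concrete version of the cursor sequence Hannu describes, assuming an invented table "t" with a unique column "id":

BEGIN;
DECLARE c CURSOR FOR SELECT * FROM t ORDER BY id;
MOVE 20 IN c;    -- skip the OFFSET rows
FETCH 10 IN c;   -- return the LIMIT rows
CLOSE c;
COMMIT;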
{
"msg_contents": "Hannu Krosing wrote:\n> As SQL queries are all one-time things you can't be \"consistent\".\n> It's like being able to grab the same set of socks from a bag and\n> then trying to devise a strategy for getting them in same order\n> without sorting them (i.e. possible but ridiculous)\n> \n> If you need them in some order, you use ORDER BY, if you don't need\n> any order you omit ORDER BY.\n> \n> > My predudices are caused by what I use PostgreSQL for, which is\n> > more favourable to the latter.\n> \n> Whats wrong with using ORDER BY ?\n\nOnly that it's non intuitive that ORDER BY should change the actual\nresults of a series of LIMIT queries, not just the order. If there are\n100 records, and I do 10x LIMIT 10,offset queries one might expect to\nget all 100 records. And currently you do (barring something unusual\nlike a vacuum at an inopportune moment that drastically changes\nstatistics).\n \n> I can't imagine a set of queries that need to be consistent \n> _almost_ all the time, but without any order.\n> \n> If you really need that kind of behaviour, the right decision is \n>to select the rows into a work table that has an additional column \n>for preserving order and then do the limit queries from that \n>table.\n\nImpractical for stateless web based stuff where keeping state around is\npainful if not impossible.\n\nI'm just playing devils advocate here. Changing this is probably not\ngoing to hurt me, I just think it could confuse a lot of people.\n \n> But in that case it is often faster to have an index on said column\n> and to do\n> WHERE ID BETWEEN OFFSET AND OFFSET+LIMIT\n> ORDER BY ID\n> than to use LIMIT, more so for large offsets.\n\n-- \nChris Bitmead\nmailto:[email protected]\n",
"msg_date": "Mon, 14 Feb 2000 22:47:50 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation"
},
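The paging pattern Chris has in mind looks like this (table name invented; only reliable when the ORDER BY pins down a unique order):

SELECT * FROM records ORDER BY id LIMIT 10 OFFSET 0;
SELECT * FROM records ORDER BY id LIMIT 10 OFFSET 10;
-- ... and so on through OFFSET 90: ten queries expected to cover all 100 rows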
{
"msg_contents": "\nHow about this as a compromise:\n\nIf you give an offset without an ORDER BY the offset\nis useless if this optimisation is in place. If you\nallowed the offset with the optimisation and no\norder by it would be encouraging broken behaviour.\n\nSo therefore it would be reasonable to optimise a \nlimit,offset query with no order by as if there were\nno offset. This would give consistent results, albeit\nit may not choose the best plan. But at least it \nwon't hurt anyone.\n\nThe only snag is that it's not technically correct to\nhave an offset unless the ORDER BY yields a unique\ncriteria. If it's not unique, either because that\nfield is declared UNIQUE or because every single\nfield is mentioned in the order by, then optimisation\nshould be turned off if there is an offset. If it is\nallowed people will randomly get missing results. I \nmean the only purpose of OFFSET is to get something \nlike consistency between calls.\n\nThe thing is, I'll bet a whole lot of people will use\nLIMIT,OFFSET with an ORDER BY, just not a fully unique\nORDER BY. That's why I find this \"optimisation\" \nquestionable. Unless you're _extremely_ careful with \nyour ORDER BY clause your results would be crap. Or\nif the above idea is implemented, the execution\nplan would be crap. If offset were not available,\nthen none of this would matter.\n\nIf this optimisation is implemented, are we going to\ncarefully explain exactly when an ORDER BY clause will\nand won't yield consistent results? Because not just\nany ORDER BY is good enough. Anybody who read that\nmanual page is probably going to be very confused.\n\n-- \nChris Bitmead\nmailto:[email protected]\n",
"msg_date": "Mon, 14 Feb 2000 23:12:36 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Solution for LIMIT cost estimation"
},
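A sketch of the trap Chris describes (names invented): an ORDER BY on a non-unique column leaves tied rows in arbitrary order, so two different plans can split a page of ties differently:

-- risky: rows tying on posted_date may repeat or vanish between pages
SELECT * FROM messages ORDER BY posted_date LIMIT 10 OFFSET 10;
-- safer: a unique tie-breaker pins down the whole ordering
SELECT * FROM messages ORDER BY posted_date, msg_id LIMIT 10 OFFSET 10;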
{
"msg_contents": "At 23:12 14/02/00 +1100, Chris wrote:\n>\n>How about this as a compromise:\n>\n>If you give an offset without an ORDER BY the offset\n>is useless if this optimisation is in place. If you\n>allowed the offset with the optimisation and no\n>order by it would be encouraging broken behaviour.\n\nNot that I would do this necessarily, but \n\n select * from t where <stuff> offset 1 limit 1\n\nis a valid way of checking for duplicates. I would hate to see the\noptimization turned off in this case.\n\n\n>So therefore it would be reasonable to optimise a \n>limit,offset query with no order by as if there were\n>no offset. This would give consistent results, albeit\n>it may not choose the best plan. But at least it \n>won't hurt anyone.\n...etc\n\nThe problem with using a stateless connection is that you have no\ntransaction control, so can not control table contents between calls. eg.\nif it contains:\n\nf\n-\nabel tasman\ncharles sturt\nferdinand marcos\n\n(spot the odd one out)\n\nand do a 'select * from t order by f offset 0 limit 2', then someone adds\n'bruce stringsteen' and you try to get the next two rows via 'select * from\nt order by f offset 2 limit 2', you will get 'charles sturt' again, and\nmiss bruce.\n\nEither you have to say that the database is almost never updated (ie. it's\njust a copy of real data, used for web applications), in which case you can\nadd all sorts of fields for optimizing stateless calls (notably an ID\nfield), or you have to implement some kind of state preservation, and dump\nID's into a temporary table or use 'held' cursors, which is not really that\nhard [Don't know if PG supports either, but you can 'fake' temporary tables\npretty easily].\n\nI may have missed something in what you need, but someone else has already\nmentioned using 'MOVE' within a cursor, and it still seems to me that\nputting the optimizer through hoops to achieve the result is liable to be a\nproblem in the long term. \n\neg. The Dec/Rdb optimizer actually assesses it's strategy while it's\nrunning. If the query is taking too long, or the estimates it used prove\ntoo inaccurate, it may change strategy. If PG implemented such a thing,\nthen this whole approach to offset/limit would be blown away - a strategy\nwill change depending on the data retrieved. It would be a pity if this\nsort of improvement in the optimizer were blocked because of problems\ncaused by breaking successive calls to offset/limit queries.\n\nMaybe either 'held cursors' or 'local temporary tables' could be added to\nthe ToDo list for a future release.\n\nAs to documenting the behaviour, I suspect that any NOTICE should also say\n'This behaviour may change in the future - don't rely on it unless you like\nliving dangerously'.\n\nJust my 0.02c, but I don't like putting limits on an optimizer. \n\nAs an aside, and because I like bringing this thing up, stored query\nstrategies would solve the problem for selected queries; you could specify\nthe strategy to be used in all executions of a prticular query...maybe this\ncould go on the ToDo list? ;-}\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Tue, 15 Feb 2000 00:17:55 +1100",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation"
},
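Philip's duplicate probe spelled out (table and key column invented): it returns a row exactly when at least two rows match, and the row order genuinely doesn't matter:

SELECT 1 FROM t WHERE key = 'x' OFFSET 1 LIMIT 1;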
{
"msg_contents": "At 04:28 PM 2/14/00 +1100, Chris Bitmead wrote:\n\n>LIMIT is not SQL\n\nNo, of course not but of course you're ignoring my point\n\n>My predudices are caused by what I use PostgreSQL for, which is\n>more favourable to the latter.\n\nThis, actually, IS my primary point. Tailoring a product to your\npersonal prejudices when it is meant to be used by a very wide\nrange of folks is not wise.\n\nIf Postgres is to be tailored to any particular person's \nprejudices, why yours and not mine? Or Tom's? Or Bruce's?\n\nThe reality is that the developers apparently made the decision\nto make Postgres into a real, albeit open source, product with\nthe intention that it receive wide use.\n\nTHAT - or so I believe - is the goal, not to tailor it to\nany one person (or any small set of persons) particular prejudices.\n\nThat, for instance, is why it was decided to turn PG into an SQL92\ncompliant RDBMS.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 14 Feb 2000 06:47:32 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation"
},
{
"msg_contents": "\n>> 4. Fascist variant of #3: make LIMIT without ORDER BY be an error.\n>\n>Got my vote for that. At least make it a notice: \"NOTICE: LIMIT without\n>ORDER BY results in random data being returned\" -- That'll teach 'em. ;)\n\nGiven the nature of SQL, how could it be otherwise. Select is defined\nto be unordered. This seems to advocate building a generic SQL\ntutorial into postgreSQL.\n\nI for one would very much rather not have that notice. My reasoning\nis thus:\n\nSay I need a quick shell script to verify that a table has been loaded\nwith reasonable values as part of a cron procedure. One way to do the\nmight be to make a shell script:\n\n#!/bin/sh\nif ! psql feature -tc \"select * from elm limit 1\" | egrep \"^ +[0-9]+|\" >/dev/null;\nthen\necho no data loaded;\nfi\n\nThus, I cron this and get email if there is no record returned.\nAFAICT, this is what should happen. But if you start adding wornings\nto this perfectly valid query, which will presumedly come out on\nSTDERR, I will get email from this, even though the query and its\nreturns were valid and expected. And I don't want to direct stderr to\n/dev/null because I do want to be warned if there really is an error.\n\nJust my $0.02 worth.\n\n-- \nKarl DeBisschop <[email protected]>\n617.832.0332 (Fax: 617.956.2696)\n\nInformation Please - your source for FREE online reference\nhttp://www.infoplease.com - Your Ultimate Fact Finder\nhttp://kids.infoplease.com - The Great Homework Helper\n\nNetsaint Plugins Development\nhttp://netsaintplug.sourceforge.net\n",
"msg_date": "Mon, 14 Feb 2000 09:51:06 -0500",
"msg_from": "Karl DeBisschop <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation"
},
{
"msg_contents": "At 11:41 AM 2/14/00 +0200, Hannu Krosing wrote:\n\n>It's utility was much debated befor it was included in Postgres, \n>the main argument for inclusion being \"mySQL has it and it's useful \n>for fast-start queries\", the main argument against being \"it's not SQL,\n>people won't understand it a and will start to misuse it\".\n\nWell, it appears people have started to misuse it! :)\n\nOracle has recently (8i or 8.1.6 if you prefer) offered something similar,\nbut it gives weird results depending on whether or not you have an index\non the column. There's a kludgey workaround, which I forget since I don't\nuse Oracle, only laugh maniacly when it fails to install on a linux box\nwith less than 256MB combined RAM and swap space (i.e. virtual memory).\n\n>Maybe we should still discourage the use of LIMIT, and rather introduce \n>another \"mode\" for optimiser, activated by SET FastStart TO 'ON'.\n>Then queries with limit could be rewritten into\n>SET FastStart to 'ON';\n>DECLARE\n>MOVE\n>FETCH\n>CLOSE\n>SET FastStart to PREVIOUS_VALUE;\n>\n>also maybe we will need PUSH/POP for set commands ?\n\nWell...personally I don't see LIMIT as being particularly harmful,\nand it is a convenience. Remember, for the web space you're speaking\nof keeping overhead low is a real concern, and requiring a series\nof queries where currently only one needed will probably go over like\na lead ballon.\n\nIf the documentation actually pointed out that LIMIT in the absence\nof an ORDER BY clause probably doesn't do what you want, fewer folks\nmight expect it to work any differently than it does.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 14 Feb 2000 06:59:18 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation"
},
{
"msg_contents": "At 10:47 PM 2/14/00 +1100, Chris wrote:\n\n>Only that it's non intuitive that ORDER BY should change the actual\n>results of a series of LIMIT queries, not just the order. If there are\n>100 records, and I do 10x LIMIT 10,offset queries one might expect to\n>get all 100 records.\n\nThe only person who will expect that is the person who hasn't bothered\nto learn the fundamental SQL property that rows returned by queries\ncome back in non-deterministic order.\n\nThis is a FUNDAMENTAL concept in SQL, one that is mentioned in every\nSQL book I've seen.\n\nThe same person probably expects NULL = NULL to return true, too.\n\nSo what?\n\n> And currently you do (barring something unusual\n>like a vacuum at an inopportune moment that drastically changes\n>statistics).\n\nOr an insert by another back end, not at all uncommon in the\nkind of web environment where this construct is frequently\nused.\n\n>I'm just playing devils advocate here. Changing this is probably not\n>going to hurt me, I just think it could confuse a lot of people.\n\nSee above.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 14 Feb 2000 07:05:19 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation"
},
{
"msg_contents": "At 11:12 PM 2/14/00 +1100, Chris wrote:\n\n>So therefore it would be reasonable to optimise a \n>limit,offset query with no order by as if there were\n>no offset. This would give consistent results, albeit\n>it may not choose the best plan. But at least it \n>won't hurt anyone.\n\nWhy bother?\n\nIt will only give consistent results if the table doesn't\nchange, which is only likely to be during testing if the \ntable is one which is inserted into, updated, and the like\nduring production, such as is true of bulletin boards and\nthe like.\n\nAnd you normally want to order such queries anyway, by date\nor by some ranking criteria.\n\nYou are making a mountain out of a molehill, here. Or, \na mountain out of a playa, there's really no molehill \neven because your code's broken to begin with.\n\n>If this optimisation is implemented, are we going to\n>carefully explain exactly when an ORDER BY clause will\n>and won't yield consistent results? Because not just\n>any ORDER BY is good enough.\n\nThis is already true in SQL as it is, EVEN WITHOUT \nLIMIT. If your ORDER BY isn't good enough, each time\nyou query the db you might get rows back in a different\norder.\n\nEven if you grab all the rows and walk through them\nyourself, discarding the first OFFSET rows and processing\nthe LIMIT rows, when you revisit and start over you have\nexactly the SAME non-determinancy to worry about.\n\nIt has nothing to do with LIMIT, Chris. It really doesn't.\n\nIt has to do with your desire to make broken code \"work\"\nin a very limited set of circumstances that don't match\nreal world conditions often at all.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 14 Feb 2000 07:14:01 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation"
},
{
"msg_contents": "Philip Warner <[email protected]> writes:\n> Just my 0.02c, but I don't like putting limits on an optimizer. \n\nThat's my feeling too. I'm leaning towards letting the optimizer do the\nbest it can with the given query (which means using OFFSET+LIMIT as the\nestimated number of tuples to be fetched), and documenting the potential\ngotcha as best we can. Something like:\n\nCAUTION: if you repeat a query several times with different OFFSET or\nLIMIT values to fetch different portions of the whole result, you will\nfind that you get inconsistent results unless you specify an ORDER BY\ncondition that is strong enough to ensure that all selected tuples must\nappear in a unique order. Without ORDER BY, the system is free to\nreturn the tuples in any order it finds convenient --- and it may well\nmake different implementation choices leading to different orderings\ndepending on the OFFSET and LIMIT values. In general, you should be\nvery wary of using OFFSET or LIMIT with an unordered or partially\nordered query; you will get a difficult-to-predict, implementation-\ndependent subset of the selected tuples.\n\nIs that clear enough? Can anyone improve on the wording?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 14 Feb 2000 14:27:49 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation "
},
{
"msg_contents": "At 02:27 PM 2/14/00 -0500, Tom Lane wrote:\n\n>CAUTION: if you repeat a query several times with different OFFSET or\n>LIMIT values to fetch different portions of the whole result, you will\n>find that you get inconsistent results unless you specify an ORDER BY\n>condition that is strong enough to ensure that all selected tuples must\n>appear in a unique order. Without ORDER BY, the system is free to\n>return the tuples in any order it finds convenient\n\nPersonally, I would generalize this and leave out the reference to\nLIMIT and OFFSET, except perhaps to point out that this is one\nparticular construct that confuses people.\n\nAs PG matures, so will the optimizer and query engine, and people\nwho've written code that depends on tuples being returned in a\nsingle consistent order might find themselves in for a rude shock.\n\nA well-deserved one (IMO), but still a shock.\n\nThe documentation won't stop most people who want to do this\nfrom doing so, they'll test and try to \"trick\" the system by\ntaking advantage of behavior that might not be consistent in\nfuture releases.\n\nStill...if it stops even ONE person from doing this, the doc will\ndo some good.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 14 Feb 2000 11:36:34 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation "
},
{
"msg_contents": "Don Baccus wrote:\n> \n> At 10:47 PM 2/14/00 +1100, Chris wrote:\n> \n> >Only that it's non intuitive that ORDER BY should change the actual\n> >results of a series of LIMIT queries, not just the order. If there are\n> >100 records, and I do 10x LIMIT 10,offset queries one might expect to\n> >get all 100 records.\n> \n> The only person who will expect that is the person who hasn't bothered\n> to learn the fundamental SQL property that rows returned by queries\n> come back in non-deterministic order.\n> \n> This is a FUNDAMENTAL concept in SQL, one that is mentioned in every\n> SQL book I've seen.\n\nIt's a logical fact that the existance of \"offset\", automatically\nimplies\nordering, no matter how many SQL textbooks you quote.\n\n> It will only give consistent results if the table doesn't\n>change, which is only likely to be during testing if the \n>table is one which is inserted into, updated, and the like\n>during production, such as is true of bulletin boards and\n>the like.\n\nIt's actually very typical for web applications to want to search\nthrough\nhistorical stuff that doesn't change any more. And ordering by title\nor something might not be good enough.\n\nIMHO, that's a better reasoning that wanting to misuse LIMIT to figure\nout if there are duplicates or something, just because nobody can\nbe bothered optimising the correct SQL to do that.\n",
"msg_date": "Tue, 15 Feb 2000 10:30:17 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation"
},
{
"msg_contents": "Don Baccus wrote:\n> \n> At 11:41 AM 2/14/00 +0200, Hannu Krosing wrote:\n >Maybe we should still discourage the use of LIMIT, and rather introduce\n> >another \"mode\" for optimiser, activated by SET FastStart TO 'ON'.\n> >Then queries with limit could be rewritten into\n> >SET FastStart to 'ON';\n> >DECLARE\n> >MOVE\n> >FETCH\n> >CLOSE\n> >SET FastStart to PREVIOUS_VALUE;\n> >\n> >also maybe we will need PUSH/POP for set commands ?\n> \n> Well...personally I don't see LIMIT as being particularly harmful,\n> and it is a convenience. Remember, for the web space you're speaking\n> of keeping overhead low is a real concern, and requiring a series\n> of queries where currently only one needed will probably go over like\n> a lead ballon.\n\nI meant that the _backend_ could (in some distant future, when the \noptimiser is perfect :) implement LIMIT as above sequence.\n\n---------------\nHannu\n",
"msg_date": "Tue, 15 Feb 2000 02:43:45 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation"
},
{
"msg_contents": "Don Baccus wrote:\n> \n> This is a FUNDAMENTAL concept in SQL, one that is mentioned in every\n> SQL book I've seen.\n> \n> The same person probably expects NULL = NULL to return true, too.\n> \n\nIIRC SQL3 defines different /classes/ of nulls where the above could be \ntrue if the NULLs belong to the same class. \n\nI.e. the absence of an orange is equal to the absence of the same orange,\nbut not equal to the absence of an apple (and possibly another orange) ;)\n\nI may of course be completely wrong, as I did not read it too carefully \nbeing after completely other things at that time. \n\nI also could not figue out the use for such a feature.\n\n----------------\nHannu\n",
"msg_date": "Tue, 15 Feb 2000 02:52:25 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation"
},
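For reference, the standard three-valued behaviour Don alludes to, which the SQL3 "null classes" idea would refine (easily checked in any SQL database):

SELECT 1 WHERE NULL = NULL;   -- returns no row: the comparison is unknown, not true
SELECT 1 WHERE NULL IS NULL;  -- returns one row: IS NULL is the proper test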
{
"msg_contents": "At 10:30 AM 2/15/00 +1100, Chris Bitmead wrote:\n\n>It's a logical fact that the existance of \"offset\", automatically\n>implies\n>ordering, no matter how many SQL textbooks you quote.\n\nChris, that is your opinion and judging from the responses of other\nfolks on this list, it appears to be very much a minority opinion.\n\nMinority of one, as a matter of fact. There has been a parade\nof posts disagreeing with your opinion.\n\nWhy not give up and get on with your life before I get tired of\nbeing polite? I'm *much* more stubborn than you are, particularly\nwhen I'm right.\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 14 Feb 2000 16:54:48 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Tom Lane\n> \n> Philip Warner <[email protected]> writes:\n> > Just my 0.02c, but I don't like putting limits on an optimizer. \n> \n> That's my feeling too. I'm leaning towards letting the optimizer do the\n> best it can with the given query (which means using OFFSET+LIMIT as the\n> estimated number of tuples to be fetched), \n\nWhat about cursors ?\nI heard from Jan that we could specify 'LIMIT ALL' to tell optimizer that\nthe response to get first rows is needed.\n\nRegards.\n\nHiroshi Inoue\[email protected] \n",
"msg_date": "Tue, 15 Feb 2000 10:00:04 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Solution for LIMIT cost estimation "
},
{
"msg_contents": "At 02:43 AM 2/15/00 +0200, Hannu Krosing wrote:\n>Don Baccus wrote:\n\n>> Well...personally I don't see LIMIT as being particularly harmful,\n>> and it is a convenience. Remember, for the web space you're speaking\n>> of keeping overhead low is a real concern, and requiring a series\n>> of queries where currently only one needed will probably go over like\n>> a lead ballon.\n>\n>I meant that the _backend_ could (in some distant future, when the \n>optimiser is perfect :) implement LIMIT as above sequence.\n\nOops! Sorry...at the moment I'm near to loathing the very existence\nof LIMIT so misunderstood :)\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 14 Feb 2000 17:00:12 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation"
},
{
"msg_contents": "At 02:52 AM 2/15/00 +0200, Hannu Krosing wrote:\n>Don Baccus wrote:\n>> \n>> This is a FUNDAMENTAL concept in SQL, one that is mentioned in every\n>> SQL book I've seen.\n>> \n>> The same person probably expects NULL = NULL to return true, too.\n>> \n>\n>IIRC SQL3 defines different /classes/ of nulls where the above could be \n>true if the NULLs belong to the same class. \n\n>I.e. the absence of an orange is equal to the absence of the same orange,\n>but not equal to the absence of an apple (and possibly another orange) ;)\n\n>I may of course be completely wrong, as I did not read it too carefully \n>being after completely other things at that time. \n\nMy recent foray into the SQL3 draft with Jan in order to figure out\nMATCH <unspecified> semantics makes me suspicious of anyone's claim to\nunderstand what the standard says :)\n\nParticularly the authors!\n\nI'm carrying a length of rope and am keeping mindful of the nearest\nlamp post just in case I run across one in the street by accident.\n\n>I also could not figue out the use for such a feature.\n\nWell, I just looked at Date's summary of SQL3 and while he talks \nabout the new user datatype and mild object-oriented innovations,\nhe doesn't talk about any change in the meaning of NULL. Since\nhe makes no effort to hide his loathing for NULL or three-valued\nlogic as implemented in SQL, if it had changed I'm certain he\nwould've mentioned it.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 14 Feb 2000 17:07:08 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation"
},
{
"msg_contents": "Don Baccus wrote:\n> \n> At 10:30 AM 2/15/00 +1100, Chris Bitmead wrote:\n> \n> >It's a logical fact that the existance of \"offset\", automatically\n> >implies\n> >ordering, no matter how many SQL textbooks you quote.\n> \n> Chris, that is your opinion and judging from the responses of other\n> folks on this list, it appears to be very much a minority opinion.\n> \n> Minority of one, as a matter of fact. There has been a parade\n> of posts disagreeing with your opinion.\n\nI've heard no-one say that offset is meaningful or in any sense\nuseful in the absense of order. If it means something please \nenlighten us. If not, try reading before posting.\n\n> Why not give up and get on with your life before I get tired of\n> being polite? I'm *much* more stubborn than you are, particularly\n> when I'm right.\n",
"msg_date": "Tue, 15 Feb 2000 14:25:41 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation"
},
{
"msg_contents": "At 02:25 PM 2/15/00 +1100, Chris Bitmead wrote:\n\n>I've heard no-one say that offset is meaningful or in any sense\n>useful in the absense of order. If it means something please \n>enlighten us. If not, try reading before posting.\n\nActually the \"limit 1 offset 1\" example for uniqueness DID actually\ngive a meaningful hack on the usefulness of lack of order.\n\nThe basic problem, Chris, is that you want to rape the optimizer\nin order to stroke ...\n\nwell...I'll be nice for one more post.\n\nBut...I'm losing patience.\n\nHell...push me and I'll just start deleting your questions and answers\non photo.net. After all, you don't understand that weight isn't the only\nparameter that contributes to the stability of tripod support...\"I'm\nleery of Gitzo carbon fiber tripods because I seek WEIGHT!\". If you\nseek weight, eat butter. If you seek stable tripods, seek carbon \nfiber and give up this bullshit weight fanatacism.\n\nYou're pretty much a putz. I could go on and on, based only on photo.net\npostings. Display ignorance in one forum, and why should one be\nsurprised to see ignorance in another? Sign me...glad to be a moderator\nof photo.net. Wish I were here, too. Why do you go to so much bother\nto demonstrate the fact that you don't know what the hell you're talking\nabout?\n\nHere's a photo.net example:\n\n\"Do I have the right to photograph in non-public places?\"\n\nThe very subject line displays your ignorance. OF COURSE YOU DON'T.\nNot by default. By definition, a private owner of an enclosed space\nlike the museum in question owns that space. Your lack of respect for\nthat authority displays selfishness. You're similarly selfish in regard\nto PG.\n\nAs long as rules on photography, etc, are uniformly stated and enforced, in\nEnglish-derived legal systems you don't have a limp d... to stand on.\n\n\"The other day I went to a special museum exhibit of ancient artifacts. I paid\nabout $AUD 20 to get in. \n\nI pulled out my camera and started taking a few photos of stuff, whereupon\none of the attendants chastised me and said photography wasn't allowed. I\nwas not using flash\"\n\nHmmm...not using flash. So what? The issue is whether or not you can\nphotograph.\n\n\"because I know sometimes items can be damaged by excess light.\"\n\nWhich, in the case of flash has been totally debunked, though some museums\nstill use it as an excuse to avoid arguing over whether or not a private\nvenue is subject to public property access laws. So not only are you \nsadly misinformed about law, but you appear to be sadly misinformed about\nthe effect of electronic flash on art.\n\n\"On the way out, I enquired about why I couldn't photograph. They said it\nwas a condition of the owner of the artifacts and was probably because they\nhold \"copyright\" on the items.\"\n\nOh my gosh, so the person buying these things who wants to let the public\nview them therefore abrogates all right to any image taken by a visitor?\n\nJust because Chris is a self-centered, selfish man? Theft is your RIGHT?\n\nGag me. \n\nOK, an apology to the forum. Chris is a pain in the butt in the photo\nforum I moderate, shows little common sense nor most particularly a sense\nof community, is selfish and resents law when it suggests he can't do each\nand every thing he might want to do in life.\n\nI shouldn't bring this up but I'm pretty much tired of this discussion, and\nhe's tired me in the past in the photo forum I help moderate. 
I was nice\nthere, didn't spank him in public, and now feel like I'm suffering over\nhere for my lack of diligence.\n\n(paraphrases follow)\n\n\"I should get to photograph these artifacts even if they're\nowned by someone else and even if they're being shown in a private forum\".\n\n\"You guys should make sure that the optimizer doesn't cause my BROKEN code\nto not \"work\", even though it doesn't really work today\"\n\n\"Let's change how inheritance etc. works in a way that fits my personal\nprejudice, regardless of how the rest of the world might view the issue\"\n\nAnd, yes, I'm being petty and vindicative but since you're so insistent\non being a total *bleeping* idiot, why not? Give it up! NO ONE\nagrees with you. \n\n(I'm still being polite, want to push me?)\n\nIf you don't want SQL to be SQL, write your own query language and\nbuild it on PG. Convince the world that you're right, and you'll\nbe a very rich man.\n\nNo one is stopping you. Distribute it as a rival copy. You can\neven incorporate each and every enhancement and bug fix that comes\nalong.\n\nSince you own the one and only better-mouse-trap-ideal, you'll kick\nour ass and we'll fade into oblivion.\n\nIt's a given, right? \n\nOh, and while you're at it, finance your own museum and let me in \nto shoot and sell images resulting from my visit to my heart's desire,\nall for free...I'm holding my breath, man.\n\n(for those of you who don't know it, I actually make part of my living\nas a freelance photographer, with a wide range of national [US] credits.\nDespite this, I would NEVER consider questioning a private museum's right\nto control photograher access to its exhibits. Nor my home, for that\nmatter).\n\nChris, you're an exceedingly selfish man. \n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 14 Feb 2000 21:08:23 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation"
},
{
"msg_contents": "\nDon, you are one fucking son of a bitch to bring up things I've said\non photo.net here on this forum. Sure I've said some pretty dumb things\nin the past, who hasn't. But for you to bring this thing up from years\nago into a completely different forum... Well you're petulent child.\nDon't bother commenting on anything I write, or communicating with me\nagain, because I won't read it.\n\nDon Baccus wrote:\n> \n> At 02:25 PM 2/15/00 +1100, Chris Bitmead wrote:\n> \n> >I've heard no-one say that offset is meaningful or in any sense\n> >useful in the absense of order. If it means something please\n> >enlighten us. If not, try reading before posting.\n> \n> Actually the \"limit 1 offset 1\" example for uniqueness DID actually\n> give a meaningful hack on the usefulness of lack of order.\n> \n> The basic problem, Chris, is that you want to rape the optimizer\n> in order to stroke ...\n> \n> well...I'll be nice for one more post.\n> \n> But...I'm losing patience.\n> \n> Hell...push me and I'll just start deleting your questions and answers\n> on photo.net. After all, you don't understand that weight isn't the only\n> parameter that contributes to the stability of tripod support...\"I'm\n> leery of Gitzo carbon fiber tripods because I seek WEIGHT!\". If you\n> seek weight, eat butter. If you seek stable tripods, seek carbon\n> fiber and give up this bullshit weight fanatacism.\n> \n> You're pretty much a putz. I could go on and on, based only on photo.net\n> postings. Display ignorance in one forum, and why should one be\n> surprised to see ignorance in another? Sign me...glad to be a moderator\n> of photo.net. Wish I were here, too. Why do you go to so much bother\n> to demonstrate the fact that you don't know what the hell you're talking\n> about?\n> \n> Here's a photo.net example:\n> \n> \"Do I have the right to photograph in non-public places?\"\n> \n> The very subject line displays your ignorance. OF COURSE YOU DON'T.\n> Not by default. By definition, a private owner of an enclosed space\n> like the museum in question owns that space. Your lack of respect for\n> that authority displays selfishness. You're similarly selfish in regard\n> to PG.\n> \n> As long as rules on photography, etc, are uniformly stated and enforced, in\n> English-derived legal systems you don't have a limp d... to stand on.\n> \n> \"The other day I went to a special museum exhibit of ancient artifacts. I paid\n> about $AUD 20 to get in.\n> \n> I pulled out my camera and started taking a few photos of stuff, whereupon\n> one of the attendants chastised me and said photography wasn't allowed. I\n> was not using flash\"\n> \n> Hmmm...not using flash. So what? The issue is whether or not you can\n> photograph.\n> \n> \"because I know sometimes items can be damaged by excess light.\"\n> \n> Which, in the case of flash has been totally debunked, though some museums\n> still use it as an excuse to avoid arguing over whether or not a private\n> venue is subject to public property access laws. So not only are you\n> sadly misinformed about law, but you appear to be sadly misinformed about\n> the effect of electronic flash on art.\n> \n> \"On the way out, I enquired about why I couldn't photograph. 
They said it\n> was a condition of the owner of the artifacts and was probably because they\n> hold \"copyright\" on the items.\"\n> \n> Oh my gosh, so the person buying these things who wants to let the public\n> view them therefore abrogates all right to any image taken by a visitor?\n> \n> Just because Chris is a self-centered, selfish man? Theft is your RIGHT?\n> \n> Gag me.\n> \n> OK, an apology to the forum. Chris is a pain in the butt in the photo\n> forum I moderate, shows little common sense nor most particularly a sense\n> of community, is selfish and resents law when it suggests he can't do each\n> and every thing he might want to do in life.\n> \n> I shouldn't bring this up but I'm pretty much tired of this discussion, and\n> he's tired me in the past in the photo forum I help moderate. I was nice\n> there, didn't spank him in public, and now feel like I'm suffering over\n> here for my lack of diligence.\n> \n> (paraphrases follow)\n> \n> \"I should get to photograph these artifacts even if they're\n> owned by someone else and even if they're being shown in a private forum\".\n> \n> \"You guys should make sure that the optimizer doesn't cause my BROKEN code\n> to not \"work\", even though it doesn't really work today\"\n> \n> \"Let's change how inheritance etc. works in a way that fits my personal\n> prejudice, regardless of how the rest of the world might view the issue\"\n> \n> And, yes, I'm being petty and vindicative but since you're so insistent\n> on being a total *bleeping* idiot, why not? Give it up! NO ONE\n> agrees with you.\n> \n> (I'm still being polite, want to push me?)\n> \n> If you don't want SQL to be SQL, write your own query language and\n> build it on PG. Convince the world that you're right, and you'll\n> be a very rich man.\n> \n> No one is stopping you. Distribute it as a rival copy. You can\n> even incorporate each and every enhancement and bug fix that comes\n> along.\n> \n> Since you own the one and only better-mouse-trap-ideal, you'll kick\n> our ass and we'll fade into oblivion.\n> \n> It's a given, right?\n> \n> Oh, and while you're at it, finance your own museum and let me in\n> to shoot and sell images resulting from my visit to my heart's desire,\n> all for free...I'm holding my breath, man.\n> \n> (for those of you who don't know it, I actually make part of my living\n> as a freelance photographer, with a wide range of national [US] credits.\n> Despite this, I would NEVER consider questioning a private museum's right\n> to control photograher access to its exhibits. Nor my home, for that\n> matter).\n> \n> Chris, you're an exceedingly selfish man.\n> \n> - Don Baccus, Portland OR <[email protected]>\n> Nature photos, on-line guides, Pacific Northwest\n> Rare Bird Alert Service and other goodies at\n> http://donb.photo.net.\n",
"msg_date": "Tue, 15 Feb 2000 16:52:22 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation"
},
{
"msg_contents": "Folks, this type of behavour is being taken care of in a private manner.\nIt is being addressed.\n\n---------------------------------------------------------------------------\n\n> Don, you are one fucking son of a bitch to bring up things I've said\n> on photo.net here on this forum. Sure I've said some pretty dumb things\n> in the past, who hasn't. But for you to bring this thing up from years\n> ago into a completely different forum... Well you're petulent child.\n> Don't bother commenting on anything I write, or communicating with me\n> again, because I won't read it.\n> \n> Don Baccus wrote:\n> > \n> > At 02:25 PM 2/15/00 +1100, Chris Bitmead wrote:\n> > \n> > >I've heard no-one say that offset is meaningful or in any sense\n> > >useful in the absense of order. If it means something please\n> > >enlighten us. If not, try reading before posting.\n> > \n> > Actually the \"limit 1 offset 1\" example for uniqueness DID actually\n> > give a meaningful hack on the usefulness of lack of order.\n> > \n> > The basic problem, Chris, is that you want to rape the optimizer\n> > in order to stroke ...\n> > \n> > well...I'll be nice for one more post.\n> > \n> > But...I'm losing patience.\n> > \n> > Hell...push me and I'll just start deleting your questions and answers\n> > on photo.net. After all, you don't understand that weight isn't the only\n> > parameter that contributes to the stability of tripod support...\"I'm\n> > leery of Gitzo carbon fiber tripods because I seek WEIGHT!\". If you\n> > seek weight, eat butter. If you seek stable tripods, seek carbon\n> > fiber and give up this bullshit weight fanatacism.\n> > \n> > You're pretty much a putz. I could go on and on, based only on photo.net\n> > postings. Display ignorance in one forum, and why should one be\n> > surprised to see ignorance in another? Sign me...glad to be a moderator\n> > of photo.net. Wish I were here, too. Why do you go to so much bother\n> > to demonstrate the fact that you don't know what the hell you're talking\n> > about?\n> > \n> > Here's a photo.net example:\n> > \n> > \"Do I have the right to photograph in non-public places?\"\n> > \n> > The very subject line displays your ignorance. OF COURSE YOU DON'T.\n> > Not by default. By definition, a private owner of an enclosed space\n> > like the museum in question owns that space. Your lack of respect for\n> > that authority displays selfishness. You're similarly selfish in regard\n> > to PG.\n> > \n> > As long as rules on photography, etc, are uniformly stated and enforced, in\n> > English-derived legal systems you don't have a limp d... to stand on.\n> > \n> > \"The other day I went to a special museum exhibit of ancient artifacts. I paid\n> > about $AUD 20 to get in.\n> > \n> > I pulled out my camera and started taking a few photos of stuff, whereupon\n> > one of the attendants chastised me and said photography wasn't allowed. I\n> > was not using flash\"\n> > \n> > Hmmm...not using flash. So what? The issue is whether or not you can\n> > photograph.\n> > \n> > \"because I know sometimes items can be damaged by excess light.\"\n> > \n> > Which, in the case of flash has been totally debunked, though some museums\n> > still use it as an excuse to avoid arguing over whether or not a private\n> > venue is subject to public property access laws. So not only are you\n> > sadly misinformed about law, but you appear to be sadly misinformed about\n> > the effect of electronic flash on art.\n> > \n> > \"On the way out, I enquired about why I couldn't photograph. 
They said it\n> > was a condition of the owner of the artifacts and was probably because they\n> > hold \"copyright\" on the items.\"\n> > \n> > Oh my gosh, so the person buying these things who wants to let the public\n> > view them therefore abrogates all right to any image taken by a visitor?\n> > \n> > Just because Chris is a self-centered, selfish man? Theft is your RIGHT?\n> > \n> > Gag me.\n> > \n> > OK, an apology to the forum. Chris is a pain in the butt in the photo\n> > forum I moderate, shows little common sense nor most particularly a sense\n> > of community, is selfish and resents law when it suggests he can't do each\n> > and every thing he might want to do in life.\n> > \n> > I shouldn't bring this up but I'm pretty much tired of this discussion, and\n> > he's tired me in the past in the photo forum I help moderate. I was nice\n> > there, didn't spank him in public, and now feel like I'm suffering over\n> > here for my lack of diligence.\n> > \n> > (paraphrases follow)\n> > \n> > \"I should get to photograph these artifacts even if they're\n> > owned by someone else and even if they're being shown in a private forum\".\n> > \n> > \"You guys should make sure that the optimizer doesn't cause my BROKEN code\n> > to not \"work\", even though it doesn't really work today\"\n> > \n> > \"Let's change how inheritance etc. works in a way that fits my personal\n> > prejudice, regardless of how the rest of the world might view the issue\"\n> > \n> > And, yes, I'm being petty and vindicative but since you're so insistent\n> > on being a total *bleeping* idiot, why not? Give it up! NO ONE\n> > agrees with you.\n> > \n> > (I'm still being polite, want to push me?)\n> > \n> > If you don't want SQL to be SQL, write your own query language and\n> > build it on PG. Convince the world that you're right, and you'll\n> > be a very rich man.\n> > \n> > No one is stopping you. Distribute it as a rival copy. You can\n> > even incorporate each and every enhancement and bug fix that comes\n> > along.\n> > \n> > Since you own the one and only better-mouse-trap-ideal, you'll kick\n> > our ass and we'll fade into oblivion.\n> > \n> > It's a given, right?\n> > \n> > Oh, and while you're at it, finance your own museum and let me in\n> > to shoot and sell images resulting from my visit to my heart's desire,\n> > all for free...I'm holding my breath, man.\n> > \n> > (for those of you who don't know it, I actually make part of my living\n> > as a freelance photographer, with a wide range of national [US] credits.\n> > Despite this, I would NEVER consider questioning a private museum's right\n> > to control photograher access to its exhibits. Nor my home, for that\n> > matter).\n> > \n> > Chris, you're an exceedingly selfish man.\n> > \n> > - Don Baccus, Portland OR <[email protected]>\n> > Nature photos, on-line guides, Pacific Northwest\n> > Rare Bird Alert Service and other goodies at\n> > http://donb.photo.net.\n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 15 Feb 2000 01:04:47 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Folks, this type of behavour is being taken care of in a private manner.\n> It is being addressed.\n\nApologies guys. I'm afraid I lost my cool. Sorry.\n",
"msg_date": "Tue, 15 Feb 2000 17:08:36 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation"
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n>> That's my feeling too. I'm leaning towards letting the optimizer do the\n>> best it can with the given query (which means using OFFSET+LIMIT as the\n>> estimated number of tuples to be fetched), \n\n> What about cursors ?\n> I heard from Jan that we could specify 'LIMIT ALL' to tell optimizer that\n> the response to get first rows is needed.\n\nHmm. Right now I have it coded to treat 'LIMIT ALL' the same as\nno LIMIT clause, which is the way it ought to work AFAICS.\n\nDECLARE CURSOR doesn't appear to support OFFSET/LIMIT at all (the\ngrammar will take the clause, but analyze.c throws it away...).\n\nI have the LIMIT support in the planner coded to build plans for\nDECLARE CURSOR queries on the assumption that 10% of the rows will\nbe fetched, which is the sort of compromise that will satisfy\nnobody ;-).\n\nA possible answer is to define OFFSET/LIMIT in DECLARE CURSOR as\nbeing simply a hint to the optimizer about how much of the query\nresult will actually get fetched. I think we could do that by\ntweaking analyze.c to pass through the clauses the same as it does\nfor regular select, and have the planner discard the clauses after\nit's done using them. (We don't want them to get to the executor\nand interfere with the actual behavior of FETCH commands, but I\ndon't see a reason why they can't live to reach the planner...)\n\nComments anyone?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 15 Feb 2000 01:30:54 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation "
},
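To make Tom's proposed semantics concrete, here is a sketch of how such a hint would be used (illustrative only: this is the behavior being proposed, not what the code did at the time, and the table name is arbitrary):

BEGIN;
-- Under the proposal, OFFSET/LIMIT here reach the planner as a hint --
-- "roughly 25 rows will actually be fetched" -- and are then discarded.
DECLARE c CURSOR FOR
    SELECT * FROM mytab ORDER BY a LIMIT 20 OFFSET 5;
-- FETCH and MOVE are unaffected; the executor never sees the clauses:
MOVE 5 IN c;
FETCH 50 FROM c;    -- may still return up to 50 rows
CLOSE c;
END;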
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n>\n> \"Hiroshi Inoue\" <[email protected]> writes:\n> >> That's my feeling too. I'm leaning towards letting the\n> optimizer do the\n> >> best it can with the given query (which means using OFFSET+LIMIT as the\n> >> estimated number of tuples to be fetched),\n>\n> > What about cursors ?\n> > I heard from Jan that we could specify 'LIMIT ALL' to tell\n> optimizer that\n> > the response to get first rows is needed.\n>\n> Hmm. Right now I have it coded to treat 'LIMIT ALL' the same as\n> no LIMIT clause, which is the way it ought to work AFAICS.\n>\n> DECLARE CURSOR doesn't appear to support OFFSET/LIMIT at all (the\n> grammar will take the clause, but analyze.c throws it away...).\n>\n> I have the LIMIT support in the planner coded to build plans for\n> DECLARE CURSOR queries on the assumption that 10% of the rows will\n> be fetched, which is the sort of compromise that will satisfy\n> nobody ;-).\n>\n\nProbably your change would work well in most cases.\nIt's nice.\nHowever it seems more preferable to be able to select first/all rows hint.\n\n> A possible answer is to define OFFSET/LIMIT in DECLARE CURSOR as\n> being simply a hint to the optimizer about how much of the query\n> result will actually get fetched. I think we could do that by\n> tweaking analyze.c to pass through the clauses the same as it does\n> for regular select, and have the planner discard the clauses after\n> it's done using them. (We don't want them to get to the executor\n> and interfere with the actual behavior of FETCH commands, but I\n> don't see a reason why they can't live to reach the planner...)\n>\n> Comments anyone?\n>\n\nThe following was the reply from Jan 16 months ago.\nUnfortunately PostgreSQL optimizer wasn't able to choose index scan\nfor queires with no qualification at that time.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\nRe: [HACKERS] What about LIMIT in SELECT ? [1998/10/19]\n\nHiroshi Inoue wrote:\n\n> When using cursors,in most cases the response to get first(next) rows\n> is necessary for me,not the throughput.\n> How can we tell PostgreSQL optimzer that the response is necessary ?\n\n With my LIMIT patch, the offset and the row count are part of\n the querytree. And if a LIMIT is given, the limitCount elemet\n of the querytree (a Node *) isn't NULL what it is by default.\n\n When a LIMIT is given, the optimizer could assume that first\n rows is wanted (even if the limit is ALL maybe - but I have\n to think about this some more). And this assumption might let\n it decide to use an index to resolve an ORDER BY even if no\n qualification was given.\n\n Telling the optimizer that first rows wanted in a cursor\n operation would read\n\n DECLARE CURSOR c FOR SELECT * FROM mytab ORDER BY a LIMIT ALL;\n\n\nJan\n\n",
"msg_date": "Tue, 15 Feb 2000 17:06:15 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Solution for LIMIT cost estimation "
},
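A minimal sketch of the test Jan's old mail implies, using only the querytree field he names (limitCount); the helper function itself is invented for illustration:

#include "nodes/parsenodes.h"	/* Query */

/*
 * Invented helper: under Jan's old LIMIT patch, any LIMIT clause
 * (including LIMIT ALL) leaves limitCount non-NULL in the querytree,
 * which the planner could read as "first rows wanted" and use to
 * prefer a fast-start plan, e.g. an index scan satisfying ORDER BY.
 */
static bool
want_first_rows(Query *parse)
{
	return parse->limitCount != NULL;
}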
{
"msg_contents": "At 01:30 15/02/00 -0500, Tom Lane wrote:\n>\n>> What about cursors ?\n>> I heard from Jan that we could specify 'LIMIT ALL' to tell optimizer that\n>> the response to get first rows is needed.\n>\n>Hmm. Right now I have it coded to treat 'LIMIT ALL' the same as\n>no LIMIT clause, which is the way it ought to work AFAICS.\n>\n>DECLARE CURSOR doesn't appear to support OFFSET/LIMIT at all (the\n>grammar will take the clause, but analyze.c throws it away...).\n>\n>I have the LIMIT support in the planner coded to build plans for\n>DECLARE CURSOR queries on the assumption that 10% of the rows will\n>be fetched, which is the sort of compromise that will satisfy\n>nobody ;-).\n>\n>A possible answer is to define OFFSET/LIMIT in DECLARE CURSOR as\n>being simply a hint to the optimizer about how much of the query\n>result will actually get fetched. \n\nThis seems a good approach until cursors are fixed. But is there a plan to\nmake cursors support LIMIT properly? Do you know why they ignore the LIMIT\nclause?\n\nOr am I missing something?\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Tue, 15 Feb 2000 19:08:09 +1100",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation "
},
{
"msg_contents": "Philip Warner <[email protected]> writes:\n>> A possible answer is to define OFFSET/LIMIT in DECLARE CURSOR as\n>> being simply a hint to the optimizer about how much of the query\n>> result will actually get fetched. \n\n> This seems a good approach until cursors are fixed. But is there a plan to\n> make cursors support LIMIT properly? Do you know why they ignore the LIMIT\n> clause?\n\nShould they obey LIMIT? MOVE/FETCH seems like a considerably more\nflexible interface, so I'm not quite sure why anyone would want to\nuse LIMIT in a cursor.\n\nStill, it seems kind of inconsistent that cursors ignore LIMIT.\nI don't know for sure why it was done that way.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 16 Feb 2000 10:43:35 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation "
},
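To spell out the flexibility argument (sketch; mytab is a stand-in table): both forms below skip five rows and read ten, but only the cursor can keep going afterwards.

-- One-shot query form:
SELECT * FROM mytab ORDER BY a LIMIT 10 OFFSET 5;

-- Cursor form: the same ten rows, but the scan position is kept,
-- so later FETCHes can continue where these left off:
DECLARE c CURSOR FOR SELECT * FROM mytab ORDER BY a;
MOVE 5 IN c;
FETCH 10 FROM c;
FETCH 10 FROM c;    -- rows 16-25; no LIMIT equivalent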
{
"msg_contents": "At 10:43 16/02/00 -0500, Tom Lane wrote:\n>Philip Warner <[email protected]> writes:\n>>> A possible answer is to define OFFSET/LIMIT in DECLARE CURSOR as\n>>> being simply a hint to the optimizer about how much of the query\n>>> result will actually get fetched. \n>\n>> This seems a good approach until cursors are fixed. But is there a plan to\n>> make cursors support LIMIT properly? Do you know why they ignore the LIMIT\n>> clause?\n>\n>Should they obey LIMIT? MOVE/FETCH seems like a considerably more\n>flexible interface, so I'm not quite sure why anyone would want to\n>use LIMIT in a cursor.\n\nI agree; but see below.\n\n\n>Still, it seems kind of inconsistent that cursors ignore LIMIT.\n>I don't know for sure why it was done that way.\n\nIt's the inconsistency that bothers me: if I run a SELECT statement, then\nput it in a cursor, I should get the same rows returned. Ths current\nbehaviour should probably be considered a bug.\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 17 Feb 2000 09:34:43 +1100",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Solution for LIMIT cost estimation "
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n>\n> Philip Warner <[email protected]> writes:\n> >> A possible answer is to define OFFSET/LIMIT in DECLARE CURSOR as\n> >> being simply a hint to the optimizer about how much of the query\n> >> result will actually get fetched.\n>\n> > This seems a good approach until cursors are fixed. But is\n> there a plan to\n> > make cursors support LIMIT properly? Do you know why they\n> ignore the LIMIT\n> > clause?\n>\n> Should they obey LIMIT? MOVE/FETCH seems like a considerably more\n> flexible interface, so I'm not quite sure why anyone would want to\n> use LIMIT in a cursor.\n>\n\nYou are right.\nWhat I want is to tell optimizer the hint whether all_rows(total throughput)\nis needed or first_rows(constant response time) is needed.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Thu, 17 Feb 2000 08:51:21 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Solution for LIMIT cost estimation "
}
] |
[
{
"msg_contents": "Hi \nI managed to build it using an empty config.h file as\nyou suggested. I had to comment out parts of some\nheader files where memmove is #defined to bcopy. \nNow I have to build my code with the library, and see\nif it works.\nThanks!\nRini\nps : I have not tried to get the lib files from the\nexisting dll (v6.5.1) since I preferred to 'make' it.\n\n--- Magnus Hagander <[email protected]> wrote:\n> You will need to copy \"config.h.win32\" to \"config.h\"\n> in the include\n> directory.\n> \n> I think this patch to the docs should be what is\n> needed.\n> \n> *** install-win32.sgml.orig Thu Feb 10 16:21:25\n> 2000\n> --- install-win32.sgml Thu Feb 10 16:22:49 2000\n> ***************\n> *** 20,27 ****\n> \n> <Para>\n> To build the libraries, change directory into the\n> <filename>src</filename>\n> ! directory, and type the command\n> <programlisting>\n> nmake /f win32.mak\n> </programlisting>\n> This assumes that you have <ProductName>Visual\n> C++</ProductName> in your\n> --- 20,28 ----\n> \n> <Para>\n> To build the libraries, change directory into the\n> <filename>src</filename>\n> ! directory, and type the commands\n> <programlisting>\n> + copy include\\config.h.win32 include\\config.h\n> nmake /f win32.mak\n> </programlisting>\n> This assumes that you have <ProductName>Visual\n> C++</ProductName> in your\n> \n> \n> \n> \n> Hmm. I just realised that that is for the current\n> version, not 6.5.3.\n> However, you will need something like it - I'm\n> afraid I don't remember\n> exactly what. Try either with the config.h.win32\n> from -current, or simply\n> try with an empty config.h.\n> \n> //Magnus\n> \n\n__________________________________________________\nDo You Yahoo!?\nTalk to your friends online with Yahoo! Messenger.\nhttp://im.yahoo.com\n",
"msg_date": "Thu, 10 Feb 2000 09:12:54 -0800 (PST)",
"msg_from": "Rini Dutta <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] how to make libpq on winnt using the 'win32.mak's"
}
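For reference, the construct being commented out is of this general shape (a sketch; the exact guard and macro in the 6.5-era headers may differ slightly):

/*
 * Ports without memmove() historically defined it in terms of BSD
 * bcopy(), which takes its arguments in the opposite order.  MSVC
 * ships a real memmove(), so the macro must not be defined there --
 * hence the hand-editing described above.
 */
#ifndef HAVE_MEMMOVE
#define memmove(dst, src, len)	bcopy((src), (dst), (len))
#endif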
] |
[
{
"msg_contents": "I have a question about the make_ctags script in the src/tools\ndirectory. What are the -d and -t flags to ctags supposed to do?\nMy version of ctags:\n\nwallace$ ctags --version\nExuberant Ctags 3.2.4, by Darren Hiebert <[email protected]>\n\nDoesn't recognize them.\n\nOh, and one little fix: The sym-link generator puts 'tags' symlinks\nin the CVS directories. Patch attached.\n\nRoss\n\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005",
"msg_date": "Thu, 10 Feb 2000 12:21:48 -0600",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "make_ctags script"
},
{
"msg_contents": "> I have a question about the make_ctags script in the src/tools\n> directory. What are the -d and -t flags to ctags supposed to do?\n> My version of ctags:\n> \n> wallace$ ctags --version\n> Exuberant Ctags 3.2.4, by Darren Hiebert <[email protected]>\n\nBSD ctags has:\n\n -d create tags for #defines that don't take arguments; #defines that\n take arguments are tagged automatically.\n\n -t create tags for typedefs, structs, unions, and enums.\n\n> \n> Doesn't recognize them.\n> \n> Oh, and one little fix: The sym-link generator puts 'tags' symlinks\n> in the CVS directories. Patch attached.\n\nApplied.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 10 Feb 2000 13:34:48 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make_ctags script"
},
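So the two toolsets line up roughly like this (illustrative invocations, not the exact make_ctags command line):

# BSD ctags: ask for #defines and typedefs/structs/unions/enums explicitly
ctags -d -t *.[ch]

# Exuberant Ctags 3.x: those kinds are tagged by default
ctags *.[ch]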
{
"msg_contents": "On Thu, Feb 10, 2000 at 01:34:48PM -0500, Bruce Momjian wrote:\n> > I have a question about the make_ctags script in the src/tools\n> > directory. What are the -d and -t flags to ctags supposed to do?\n> > My version of ctags:\n> > \n> > wallace$ ctags --version\n> > Exuberant Ctags 3.2.4, by Darren Hiebert <[email protected]>\n> \n> BSD ctags has:\n> \n> -d create tags for #defines that don't take arguments; #defines that\n> take arguments are tagged automatically.\n> \n> -t create tags for typedefs, structs, unions, and enums.\n\n\nAh, O.K. then: Exuberant Ctags does all these by default. Just wanted\nto make sure I wan't missing anything.\n\nSlightly off topic: While going through the current source, comparing\nto Mariposa, I've seen little traces of old code: some ideas seem to\nhave been reinvented and reimplemented a number of times. Going back\nand digging into Postgres95, and the last postquel based release,\npostgres-v4r2, to see how they were implemented, it's been fun seeing\n\"debate/design by comment block\" from some of the original university\ndevelopers. In particular, one block with a 12 line NOOOOOO! in\nbackend/commands/version.c had me rolling on the floor. Ah, I see it lives\non in _deadcode. In general, the functional comments in the current code\nare more informative, but not as much fun ;-)\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Thu, 10 Feb 2000 15:03:35 -0600",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] make_ctags script"
},
{
"msg_contents": "> Slightly off topic: While going through the current source, comparing\n> to Mariposa, I've seen little traces of old code: some ideas seem to\n> have been reinvented and reimplemented a number of times. Going back\n> and digging into Postgres95, and the last postquel based release,\n> postgres-v4r2, to see how they were implemented, it's been fun seeing\n> \"debate/design by comment block\" from some of the original university\n> developers. In particular, one block with a 12 line NOOOOOO! in\n> backend/commands/version.c had me rolling on the floor. Ah, I see it lives\n> on in _deadcode. In general, the functional comments in the current code\n> are more informative, but not as much fun ;-)\n\nWhat I have realized is how superior some of our code is to the old\nstuff.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 9 Jun 2000 08:19:44 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: make_ctags script"
}
] |
[
{
"msg_contents": "Having taken the discussion under consideration I will, barring protests\nand late good ideas, do the following:\n\n-e will echo queries sent to backend (as in 6.*)\n-a will echo lines from the script literally\n-n will be deprecated\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Thu, 10 Feb 2000 21:13:42 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "psql options"
}
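In use, the revised switches would behave like this (sketch; script.sql is a stand-in file name):

psql -e -f script.sql    # echoes each query as it is sent to the backend
psql -a -f script.sql    # echoes every line of the script literally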
] |
[
{
"msg_contents": "I've got most of the regression tests running, but one of the rules\ntests has uncovered a problem in my code, at least for a query\ninvolving a merge join.\n\nCould someone run a \"-d 99\" query using the following from the\nregression test (rules.sql):\n\nselect rtest_t2.a, rtest_t3.b\n from rtest_t2, rtest_t3\n where rtest_t2.a = rtest_t3.a;\n\nand send me the query, the rewritten query, and the plan emitted by\nthe backend (it should be a MERGEJOIN plan)? It might speed up my\nrummaging around for the reason for the failure :(\n\nAnother possibility is that I submit/commit my patches (there are\nquite a few files touched and I *really* want to get them off of my\nsystem and into the tree soon) but I was a bit hesitant to commit\nsomething with a known problem of this nature.\n\nTIA\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 11 Feb 2000 06:10:35 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Almost there on column aliases"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> Could someone run a \"-d 99\" query using the following from the\n> regression test (rules.sql):\n> select rtest_t2.a, rtest_t3.b\n> from rtest_t2, rtest_t3\n> where rtest_t2.a = rtest_t3.a;\n> and send me the query, the rewritten query, and the plan emitted by\n> the backend (it should be a MERGEJOIN plan)? It might speed up my\n> rummaging around for the reason for the failure :(\n\nThis doesn't look very detailed, is it really what you wanted?\n\nStartTransactionCommand\nquery: explain\nselect rtest_t2.a, rtest_t3.b\n from rtest_t2, rtest_t3\n where rtest_t2.a = rtest_t3.a\nparser outputs:\n\n{ QUERY :command 5 :utility ? :resultRelation 0 :into <> :isPortal false :isBinary false :isTemp false :unionall false :distinctClause <> :sortClause <> :rtable <> :targetlist <> :qual <> :groupClause <> :havingQual <> :hasAggs false :hasSubLinks false :unionClause <> :intersectClause <> :limitOffset <> :limitCount <> :rowMark <>}\n\nafter rewriting:\n{ QUERY \n :command 5 \n :utility ? \n :resultRelation 0 \n :into <> \n :isPortal false \n :isBinary false \n :isTemp false \n :unionall false \n :distinctClause <> \n :sortClause <> \n :rtable <> \n :targetlist <> \n :qual <> \n :groupClause <> \n :havingQual <> \n :hasAggs false \n :hasSubLinks false \n :unionClause <> \n :intersectClause <> \n :limitOffset <> \n :limitCount <> \n :rowMark <>\n }\n\nProcessUtility: explain\nselect rtest_t2.a, rtest_t3.b\n from rtest_t2, rtest_t3\n where rtest_t2.a = rtest_t3.a\nNOTICE: QUERY PLAN:\n\nMerge Join (cost=164.66 rows=10000 width=12)\n -> Sort (cost=69.83 rows=1000 width=8)\n -> Seq Scan on rtest_t3 (cost=20.00 rows=1000 width=8)\n -> Sort (cost=69.83 rows=1000 width=4)\n -> Seq Scan on rtest_t2 (cost=20.00 rows=1000 width=4)\n\nCommitTransactionCommand\n\n> Another possibility is that I submit/commit my patches (there are\n> quite a few files touched and I *really* want to get them off of my\n> system and into the tree soon) but I was a bit hesitant to commit\n> something with a known problem of this nature.\n\nAny changes in backend/optimizer/ ? I've got a bunch of uncommitted\nchanges there myself.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 11 Feb 2000 01:41:07 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Almost there on column aliases "
},
{
"msg_contents": "> > Could someone run a \"-d 99\" query using the following from the\n> > regression test (rules.sql):\n> This doesn't look very detailed, is it really what you wanted?\n\nHmm. I expected to get a full plan (labeled \"plan:\"). Did you do the\nquery or just an \"explain\"?\n\nI'm compiling this way, though I don't think that it matters for this:\n\n$ gcc -I../../include -I../../backend -O2 -m486 -O2 -g -O0\n-DUSE_ASSERT_CHECKING -DENABLE_OUTER_JOINS -DEXEC_MERGEJOINDEBUG -Wall\n-Wmissing-prototypes -I.. -c copyfuncs.c -o copyfuncs.o\n\n> Any changes in backend/optimizer/ ? I've got a bunch of uncommitted\n> changes there myself.\n\nNot too much. Though I've got a null pointer problem in executor for\nmergejoins and I'm not certain where it is coming from. Here are the\nfiles which have changed in the optimizer/ tree:\n\n[postgres@golem optimizer]$ cvs -q update .\nM prep/prepunion.c\nM util/clauses.c\n\nThe changes are minor; I'm pretty sure I can remerge if you want to\ncommit your stuff (at least if your stuff is isolated to the\nbackend/optimizer/ part of the tree).\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 11 Feb 2000 07:16:21 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Almost there on column aliases"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> This doesn't look very detailed, is it really what you wanted?\n\n> Hmm. I expected to get a full plan (labeled \"plan:\"). Did you do the\n> query or just an \"explain\"?\n\nI'm sorry, just did the \"explain\". Is this any better?\n\nStartTransactionCommand\nquery: select rtest_t2.a, rtest_t3.b\n from rtest_t2, rtest_t3\n where rtest_t2.a = rtest_t3.a\nparser outputs:\n\n{ QUERY :command 1 :utility <> :resultRelation 0 :into <> :isPortal false :isBinary false :isTemp false :unionall false :distinctClause <> :sortClause <> :rtable ({ RTE :relname rtest_t2 :refname rtest_t2 :relid 404330 :inh false :inFromCl true :inJoinSet true :skipAcl false} { RTE :relname rtest_t3 :refname rtest_t3 :relid 404340 :inh false :inFromCl true :inJoinSet true :skipAcl false}) :targetlist ({ TARGETENTRY :resdom { RESDOM :resno 1 :restype 23 :restypmod -1 :resname a :reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false } :expr { VAR :varno 1 :varattno 1 :vartype 23 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 1}} { TARGETENTRY :resdom { RESDOM :resno 2 :restype 23 :restypmod -1 :resname b :reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false } :expr { VAR :varno 2 :varattno 2 :vartype 23 :vartypmod -1 :varlevelsup 0 :varnoold 2 :varoattno 2}}) :qual { EXPR :typeOid 16 :opType op :oper { OPER :opno 96 :opid 0 :opresulttype 16 } :args ({ VAR :varno 1 :v!\n!\narattno 1 :vartype 23 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 1} { VAR :varno 2 :varattno 1 :vartype 23 :vartypmod -1 :varlevelsup 0 :varnoold 2 :varoattno 1})} :groupClause <> :havingQual <> :hasAggs false :hasSubLinks false :unionClause <> :intersectClause <> :limitOffset <> :limitCount <> :rowMark <>}\n\nafter rewriting:\n{ QUERY \n :command 1 \n :utility <> \n :resultRelation 0 \n :into <> \n :isPortal false \n :isBinary false \n :isTemp false \n :unionall false \n :distinctClause <> \n :sortClause <> \n :rtable (\n { RTE \n :relname rtest_t2 \n :refname rtest_t2 \n :relid 404330 \n :inh false \n :inFromCl true \n :inJoinSet true \n :skipAcl false\n }\n \n { RTE \n :relname rtest_t3 \n :refname rtest_t3 \n :relid 404340 \n :inh false \n :inFromCl true \n :inJoinSet true \n :skipAcl false\n }\n )\n \n :targetlist (\n { TARGETENTRY \n :resdom \n { RESDOM \n :resno 1 \n :restype 23 \n :restypmod -1 \n :resname a \n :reskey 0 \n :reskeyop 0 \n :ressortgroupref 0 \n :resjunk false \n }\n \n :expr \n { VAR \n :varno 1 \n :varattno 1 \n :vartype 23 \n :vartypmod -1 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 1\n }\n }\n \n { TARGETENTRY \n :resdom \n { RESDOM \n :resno 2 \n :restype 23 \n :restypmod -1 \n :resname b \n :reskey 0 \n :reskeyop 0 \n :ressortgroupref 0 \n :resjunk false \n }\n \n :expr \n { VAR \n :varno 2 \n :varattno 2 \n :vartype 23 \n :vartypmod -1 \n :varlevelsup 0 \n :varnoold 2 \n :varoattno 2\n }\n }\n )\n \n :qual \n { EXPR \n :typeOid 16 \n :opType op \n :oper \n { OPER \n :opno 96 \n :opid 0 \n :opresulttype 16 \n }\n \n :args (\n { VAR \n :varno 1 \n :varattno 1 \n :vartype 23 \n :vartypmod -1 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 1\n }\n \n { VAR \n :varno 2 \n :varattno 1 \n :vartype 23 \n :vartypmod -1 \n :varlevelsup 0 \n :varnoold 2 \n :varoattno 1\n }\n )\n }\n \n :groupClause <> \n :havingQual <> \n :hasAggs false \n :hasSubLinks false \n :unionClause <> \n :intersectClause <> \n :limitOffset <> \n :limitCount <> \n :rowMark <>\n }\n\nplan:\n\n{ MERGEJOIN :cost 164.658 :rows 10000 :width 12 :state <> :qptargetlist ({ TARGETENTRY 
:resdom { RESDOM :resno 1 :restype 23 :restypmod -1 :resname a :reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false } :expr { VAR :varno 65000 :varattno 1 :vartype 23 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 1}} { TARGETENTRY :resdom { RESDOM :resno 2 :restype 23 :restypmod -1 :resname b :reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false } :expr { VAR :varno 65001 :varattno 1 :vartype 23 :vartypmod -1 :varlevelsup 0 :varnoold 2 :varoattno 2}}) :qpqual <> :lefttree { SORT :cost 69.8289 :rows 1000 :width 8 :state <> :qptargetlist ({ TARGETENTRY :resdom { RESDOM :resno 1 :restype 23 :restypmod -1 :resname <> :reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false } :expr { VAR :varno 2 :varattno 2 :vartype 23 :vartypmod -1 :varlevelsup 0 :varnoold 2 :varoattno 2}} { TARGETENTRY :resdom { RESDOM :resno 2 :restype 23 :restypmod -1 :resname <> :reskey 1 :reskeyop 66 :ressortg!\n!\nroupref 0 :resjunk false } :expr { VAR :varno 2 :varattno 1 :vartype 23 :vartypmod -1 :varlevelsup 0 :varnoold 2 :varoattno 1}}) :qpqual <> :lefttree { SEQSCAN :cost 20 :rows 1000 :width 8 :state <> :qptargetlist ({ TARGETENTRY :resdom { RESDOM :resno 1 :restype 23 :restypmod -1 :resname <> :reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false } :expr { VAR :varno 2 :varattno 2 :vartype 23 :vartypmod -1 :varlevelsup 0 :varnoold 2 :varoattno 2}} { TARGETENTRY :resdom { RESDOM :resno 2 :restype 23 :restypmod -1 :resname <> :reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false } :expr { VAR :varno 2 :varattno 1 :vartype 23 :vartypmod -1 :varlevelsup 0 :varnoold 2 :varoattno 1}}) :qpqual <> :lefttree <> :righttree <> :extprm () :locprm () :initplan <> :nprm 0 :scanrelid 2 } :righttree <> :extprm () :locprm () :initplan <> :nprm 0 :nonameid 0 :keycount 1 } :righttree { SORT :cost 69.8289 :rows 1000 :width 4 :state <> :qptargetlist ({ TARGETENTRY :resdom { RESDOM :resno 1 :!\n!\nrestype 23 :restypmod -1 :resname <> :reskey 1 :reskeyop 66 :ressortgroupref 0 :resjunk false } :expr { VAR :varno 1 :varattno 1 :vartype 23 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 1}}) :qpqual <> :lefttree { SEQSCAN :cost 20 :rows 1000 :width 4 :state <> :qptargetlist ({ TARGETENTRY :resdom { RESDOM :resno 1 :restype 23 :restypmod -1 :resname <> :reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false } :expr { VAR :varno 1 :varattno 1 :vartype 23 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 1}}) :qpqual <> :lefttree <> :righttree <> :extprm () :locprm () :initplan <> :nprm 0 :scanrelid 1 } :righttree <> :extprm () :locprm () :initplan <> :nprm 0 :nonameid 0 :keycount 1 } :extprm () :locprm () :initplan <> :nprm 0 :mergeclauses ({ EXPR :typeOid 16 :opType op :oper { OPER :opno 96 :opid 65 :opresulttype 16 } :args ({ VAR :varno 65001 :varattno 2 :vartype 23 :vartypmod -1 :varlevelsup 0 :varnoold 2 :varoattno 1} { VAR :varno 65000 :varattno 1 :vartype 2!\n!\n3 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 1})})}\n\nProcessQuery\nCommitTransactionCommand\n\n>> Any changes in backend/optimizer/ ? I've got a bunch of uncommitted\n>> changes there myself.\n\n> Not too much. Though I've got a null pointer problem in executor for\n> mergejoins and I'm not certain where it is coming from.\n\nCould easy be a planner shortcoming. 
Maybe you should commit so we\ncan get more eyeballs on the problem.\n\n> Here are the files which have changed in the optimizer/ tree:\n\n> [postgres@golem optimizer]$ cvs -q update .\n> M prep/prepunion.c\n> M util/clauses.c\n\n> The changes are minor; I'm pretty sure I can remerge if you want to\n> commit your stuff (at least if your stuff is isolated to the\n> backend/optimizer/ part of the tree).\n\nI know I've tromped on your toes in the past weeks, so I'll wait for\nyou to commit and then merge. I have no changes in those two files,\nbut I do have some in the usual-suspect places like nodes/copyfuncs.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 11 Feb 2000 02:20:38 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Almost there on column aliases "
},
{
"msg_contents": "> > Right. I'm looking forward to advice on the right way to do this. The\n> > problem is that the introductory character for list structures is\n> > *also* the introductory character for plans, so everything blows\n> > chunks if I just call nodeRead() from _readAttr().\n> Huh? '{' introduces a node, '(' introduces a list. See the comments\n> I added (not very long ago :-() in read.c. My guess is that you are\n> either emitting the wrong character or have some sort of error in the\n> way you call nodeRead. Nothing obviously wrong in the patch diffs\n> though.\n\nThe problem I recall is that paren also introduces a \"plan\", and if\nyou call nodeRead() it sees the paren and then complains later because\nit expects a node label following the paren.\n\nI probably misdiagnosed the behavior, but in any case I'd be *really*\nhappy if someone wants to put me out of my misery on this one ;)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 15 Feb 2000 03:21:01 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] Re: [HACKERS] Almost there on column aliases"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> I probably misdiagnosed the behavior, but in any case I'd be *really*\n> happy if someone wants to put me out of my misery on this one ;)\n\nAt least some of your problems are due to confusing list nodes with the\nnodes they point to, as in this example from parse_clause.c:\n\nList *\nListTableAsAttrs(ParseState *pstate, char *table);\nList *\nListTableAsAttrs(ParseState *pstate, char *table)\n{\n\tList *rlist = NULL;\n\tList *col;\n\n\tAttr *attr = expandTable(pstate, table, TRUE);\n\tforeach(col, attr->attrs)\n\t{\n\t\tAttr *a;\n\t\ta = makeAttr(table, strVal((Value *) col));\n\t\trlist = lappend(rlist, a);\n\t}\n\n\treturn rlist;\n}\n\nI tried, but failed, to refrain from remarking about the horrible\nstyle of the function declaration --- either it's static (which\nlooks like the right answer here) or it should be declared in\na header file. The above method of preventing gcc from telling\nyou how horrible your style is is just, well, never mind.\n\nThe more immediate problem is that you want\n\n\t\ta = makeAttr(table, strVal((Value *) lfirst(col)));\n\nI cleaned up a similar error in ruleutils.c, but am too tired to\nfix this one or go digging for more.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 15 Feb 2000 03:33:07 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Almost there on column aliases "
},
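For clarity, here is the function with both of Tom's fixes folded in (a sketch; it keeps the names from the original patch):

static List *
ListTableAsAttrs(ParseState *pstate, char *table)
{
	List *rlist = NIL;
	List *col;
	Attr *attr = expandTable(pstate, table, TRUE);

	foreach(col, attr->attrs)
	{
		/* each list cell holds a Value node: take lfirst(col)
		 * before applying strVal() */
		Attr *a = makeAttr(table, strVal((Value *) lfirst(col)));

		rlist = lappend(rlist, a);
	}
	return rlist;
}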
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> The problem I recall is that paren also introduces a \"plan\", and if\n> you call nodeRead() it sees the paren and then complains later because\n> it expects a node label following the paren.\n\n> I probably misdiagnosed the behavior, but in any case I'd be *really*\n> happy if someone wants to put me out of my misery on this one ;)\n\nAh-hah, I see it. nodeRead() expects that simple Value objects\n(T_Integer, T_Float, T_String) will be output without any '{' ... '}'\nwrapping. _outNode() was putting them out with wrapping. Apparently,\nyou're the first person in a long time (maybe forever) to try to dump\nand reload node structures in which these node types appear outside\nthe context of a Const node. (outConst calls outValue directly, without\ngoing through outNode, so the bug didn't appear in that case.)\n\nI've fixed _outNode() to suppress the unwanted wrapper for a Value\nand removed the now-unnecessary special-case code for Attr lists.\n\nBTW, the rule regress test is presently failing because I modified\nruleutils.c to dump the Attr list if it is not null, rather than\nonly if the refname is different from the relname:\n\n*** 992,1008 ****\n quote_identifier(rte->relname),\n inherit_marker(rte));\n if (strcmp(rte->relname, rte->ref->relname) != 0)\n- {\n- List *col;\n appendStringInfo(buf, \" %s\",\n quote_identifier(rte->ref->relname));\n appendStringInfo(buf, \" (\");\n! foreach (col, rte->ref->attrs)\n {\n! if (col != lfirst(rte->ref->attrs))\n appendStringInfo(buf, \", \");\n! appendStringInfo(buf, \"%s\", strVal(col));\n }\n }\n }\n }\n--- 992,1012 ----\n quote_identifier(rte->relname),\n inherit_marker(rte));\n if (strcmp(rte->relname, rte->ref->relname) != 0)\n appendStringInfo(buf, \" %s\",\n quote_identifier(rte->ref->relname));\n+ if (rte->ref->attrs != NIL)\n+ {\n+ List *col;\n+ \n appendStringInfo(buf, \" (\");\n! foreach(col, rte->ref->attrs)\n {\n! if (col != rte->ref->attrs)\n appendStringInfo(buf, \", \");\n! appendStringInfo(buf, \"%s\",\n! quote_identifier(strVal(lfirst(col))));\n }\n+ appendStringInfo(buf, \")\");\n }\n }\n }\n\nWhile this seems like appropriate logic, a bunch of the rule tests are\nnow showing long and perfectly content-free lists of attribute names in\nreverse-listed FROM clauses. This bothers me because it implies that\nthose names are being stored in the querytree that's dumped out to\npg_rewrite, which will be a further crimp in people's ability to write\ncomplex rules. I think we really don't want to store those nodes if we\ndon't have to. Why are we building Attr lists when there's no actual\ncolumn aliasing being done?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 15 Feb 2000 16:02:10 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Almost there on column aliases "
},
{
"msg_contents": "> Ah-hah, I see it. nodeRead() expects that simple Value objects\n> (T_Integer, T_Float, T_String) will be output without any '{' ... '}'\n> wrapping. _outNode() was putting them out with wrapping. Apparently,\n> you're the first person in a long time (maybe forever) to try to dump\n> and reload node structures in which these node types appear outside\n> the context of a Const node. (outConst calls outValue directly, without\n> going through outNode, so the bug didn't appear in that case.)\n> I've fixed _outNode() to suppress the unwanted wrapper for a Value\n> and removed the now-unnecessary special-case code for Attr lists.\n\nGreat. Thanks. And I should have committed my garbage earlier rather\nthan trying to make it work poorly ;)\n\n> BTW, the rule regress test is presently failing because I modified\n> ruleutils.c to dump the Attr list if it is not null, rather than\n> only if the refname is different from the relname:\n> \n> *** 992,1008 ****\n> quote_identifier(rte->relname),\n> inherit_marker(rte));\n> if (strcmp(rte->relname, rte->ref->relname) != 0)\n> - {\n> - List *col;\n> appendStringInfo(buf, \" %s\",\n> quote_identifier(rte->ref->relname));\n> appendStringInfo(buf, \" (\");\n> ! foreach (col, rte->ref->attrs)\n> {\n> ! if (col != lfirst(rte->ref->attrs))\n> appendStringInfo(buf, \", \");\n> ! appendStringInfo(buf, \"%s\", strVal(col));\n> }\n> }\n> }\n> }\n> --- 992,1012 ----\n> quote_identifier(rte->relname),\n> inherit_marker(rte));\n> if (strcmp(rte->relname, rte->ref->relname) != 0)\n> appendStringInfo(buf, \" %s\",\n> quote_identifier(rte->ref->relname));\n> + if (rte->ref->attrs != NIL)\n> + {\n> + List *col;\n> +\n> appendStringInfo(buf, \" (\");\n> ! foreach(col, rte->ref->attrs)\n> {\n> ! if (col != rte->ref->attrs)\n> appendStringInfo(buf, \", \");\n> ! appendStringInfo(buf, \"%s\",\n> ! quote_identifier(strVal(lfirst(col))));\n> }\n> + appendStringInfo(buf, \")\");\n> }\n> }\n> }\n\nistm that the column aliases (rte->ref->attrs) should not be written out\nif the table alias (rte->ref->relname) is not written. And the rules\nregression test should be failing anyway, because I didn't update it\nsince I knew that there was something wrong with those plan strings and\nI didn't want to hide that.\n\n> While this seems like appropriate logic, a bunch of the rule tests are\n> now showing long and perfectly content-free lists of attribute names in\n> reverse-listed FROM clauses. This bothers me because it implies that\n> those names are being stored in the querytree that's dumped out to\n> pg_rewrite, which will be a further crimp in people's ability to write\n> complex rules. I think we really don't want to store those nodes if we\n> don't have to. Why are we building Attr lists when there's no actual\n> column aliasing being done?\n\nHmm. Because there are multiple places in the parser which needs to get\nat a column name. When handling column aliases, I was having to look up\nthe actual column names anyway to verify that there were the correct\nnumber of aliases specified (actually, I decided to allow any number of\naliases <= the number of actual columns, filling in with the underlying\ncolumn names if an alias was not specified) and so while I had the info\nI cached it into the RTE structure for later use.\n\nIf I make the ref structure optional, then I have to start returning\nlists of columns when working out the new join syntax, and I hated to\nkeep generating a bunch of temporary lists of things. 
Also, by making\nthe ref->refname non-optional in the structure, I could stop checking\nfor its existence before using either it *or* the true table name; this\ncleaned up a bit of the code.\n\n - Thomas\n\n-- \nThomas Lockhart\nCaltech/JPL\nInterferometry Systems and Technology\n",
"msg_date": "Tue, 15 Feb 2000 21:38:25 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Almost there on column aliases"
},
{
"msg_contents": ">> BTW, the rule regress test is presently failing because I modified\n>> ruleutils.c to dump the Attr list if it is not null, rather than\n>> only if the refname is different from the relname:\n\n> istm that the column aliases (rte->ref->attrs) should not be written out\n> if the table alias (rte->ref->relname) is not written.\n\nHmm. If it's not possible to specify column aliases without specifying\na table-name alias, then that's OK ... but I thought table aliases were\noptional.\n\n> And the rules\n> regression test should be failing anyway, because I didn't update it\n> since I knew that there was something wrong with those plan strings and\n> I didn't want to hide that.\n\nThe weird thing is that I'm pretty sure the rules test was *passing*\n(against the present expected file) last night after I made the change\nI just quoted. It wasn't till after I changed the readfuncs/outfuncs\nstuff this morning that I started seeing the long lists of column names\nin the rules output.\n\nOTOH, \"last night\" was about 3AM and I was tired. Maybe I remember it\nwrong.\n\n>> While this seems like appropriate logic, a bunch of the rule tests are\n>> now showing long and perfectly content-free lists of attribute names in\n>> reverse-listed FROM clauses. This bothers me because it implies that\n>> those names are being stored in the querytree that's dumped out to\n>> pg_rewrite, which will be a further crimp in people's ability to write\n>> complex rules. I think we really don't want to store those nodes if we\n>> don't have to. Why are we building Attr lists when there's no actual\n>> column aliasing being done?\n\n> Hmm. Because there are multiple places in the parser which needs to get\n> at a column name. When handling column aliases, I was having to look up\n> the actual column names anyway to verify that there were the correct\n> number of aliases specified (actually, I decided to allow any number of\n> aliases <= the number of actual columns, filling in with the underlying\n> column names if an alias was not specified) and so while I had the info\n> I cached it into the RTE structure for later use.\n\nFair enough, but we don't need those column names any more after the\nparse/analyze phase completes, right? Maybe we could remove the lists\nat that time, or at least do so before writing out rule querytrees.\n\nSince we aren't going to have TOAST in 7.0, I'm concerned that the\nrule representation not get any more verbose than it is already...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 15 Feb 2000 17:13:54 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Almost there on column aliases "
},
{
"msg_contents": "> > istm that the column aliases (rte->ref->attrs) should not be written out\n> > if the table alias (rte->ref->relname) is not written.\n> Hmm. If it's not possible to specify column aliases without specifying\n> a table-name alias, then that's OK ... but I thought table aliases were\n> optional.\n\nI don't think so (ie a table alias is required if a column alias is\nspecified), but my SQL books are at home so I can't verify my\nrecollection.\n\n> Fair enough, but we don't need those column names any more after the\n> parse/analyze phase completes, right? Maybe we could remove the lists\n> at that time, or at least do so before writing out rule querytrees.\n\nPossibly. I'm transforming the qualifications on the join clause as the\njoin clause is transformed (rather than later during the WHERE\ntransformation) in the hope that the column (and table) names will have\nbeen replaced by attribute numbers and RTE indices. If that is the case,\nand if the \"correlation names\" or aliases are never needed after that,\nthen we can drop 'em.\n\nExcept that we'll possibly need them to get a valid pg_dump of the\nrules? Or is an untransformed copy of the original definition kept\naround someplace??\n\n> Since we aren't going to have TOAST in 7.0, I'm concerned that the\n> rule representation not get any more verbose than it is already...\n\nRight.\n\n - Thomas\n\n-- \nThomas Lockhart\nCaltech/JPL\nInterferometry Systems and Technology\n",
"msg_date": "Tue, 15 Feb 2000 23:15:01 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Almost there on column aliases"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> Fair enough, but we don't need those column names any more after the\n>> parse/analyze phase completes, right? Maybe we could remove the lists\n>> at that time, or at least do so before writing out rule querytrees.\n\n> Except that we'll possibly need them to get a valid pg_dump of the\n> rules? Or is an untransformed copy of the original definition kept\n> around someplace??\n\nAs far as I can tell without having tried it, you'd still get a correct\ndump, although it might look different from the original query because\ncolumns would be referred to by their untransformed names (but that'll\nhappen anyway, unless you go back and change ruleutil.c's way of looking\nup column names). For example, with current sources:\n\nregression=# create view qq as select a from tenk1 t1 (a);\nCREATE 276745 1\nregression=# \\d qq\n View \"qq\"\n Attribute | Type | Modifier\n-----------+---------+----------\n a | integer |\nView definition: SELECT t1.unique1 AS a FROM tenk1 t1 (a, unique2, two, four, ten, twenty, hundred, thousand, twothousand, fivethous, tenthous, odd, even, stringu1, stringu2, string4);\n\nThe only \"external\" view of the alias is as the column title, and notice\nthat that's getting enforced by an AS clause independently of any\naliases. (In the querytree, that title is coming from a refname in the\ntargetlist entry --- we don't need another copy in the RTE to make it\nwork.)\n\nBTW, I'm practically certain that I tried this same example last night\nand got a rule dump of just\n\nSELECT t1.unique1 AS a FROM tenk1 t1 (a);\n\nwhich is more like what I would expect. Did you change the behavior\nw.r.t. adding additional columns to the alias list just recently, like\nsince 11PM EST yesterday?\n\n\t\t\tregards, tom lane\n\nPS: Am I the only one who thinks that column aliases done this way are\nextremely brain-dead? If you write \"FROM table alias (a b c)\" then\nyou've just written a query that depends critically and non-obviously\non which columns are first, second, third in the physical table.\nOne of the few things I know about good SQL style is that you don't\nwrite INSERT without an explicit column list, because such code will\nbreak (possibly without warning) if you insert/delete/rearrange columns\nin the table definition. This alias facility seems to be just another\nmethod of shooting yourself in the foot with that same bullet...\n",
"msg_date": "Tue, 15 Feb 2000 18:58:05 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Almost there on column aliases "
},
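The PS's hazard in miniature (hypothetical table): a positional alias list silently rebinds when the physical column order changes.

CREATE TABLE t (a int, b int);
SELECT x FROM t tt (x, y);    -- today, x names t.a

-- If t is later redefined with its columns reordered as (b, a),
-- the query still parses, but x now silently names t.b -- the same
-- trap as writing INSERT without an explicit column list.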
{
"msg_contents": "> > Except that we'll possibly need them to get a valid pg_dump of the\n> > rules? Or is an untransformed copy of the original definition kept\n> > around someplace??\n> As far as I can tell without having tried it, you'd still get a correct\n> dump, although it might look different from the original query because\n> columns would be referred to by their untransformed names (but that'll\n> happen anyway, unless you go back and change ruleutil.c's way of looking\n> up column names). For example, with current sources:\n> View definition: SELECT t1.unique1 AS a\n> FROM tenk1 t1 (a, unique2, two, four, ten, twenty, hundred,\n> thousand, twothousand, fivethous, tenthous, odd, even, stringu1, stringu2, string4);\n> The only \"external\" view of the alias is as the column title, and notice\n> that that's getting enforced by an AS clause independently of any\n> aliases. (In the querytree, that title is coming from a refname in the\n> targetlist entry --- we don't need another copy in the RTE to make it\n> work.)\n\nWell, there are other queries which *do* rely on the column aliases:\n\n select a, b from t1 ta (a, b, c) natural join t2 tb (a, d);\n\nwhere the column in the target list called \"a\" is not allowed to have an\nexplicit reference to a table name. That is, neither\n\n select t1.a, b from t1 ta (a, b, c) natural join t2 tb (a, d);\n\nnor\n\n select t2.a, b from t1 ta (a, b, c) natural join t2 tb (a, d);\n\nare legal SQL, but, for example,\n\n select a, ta.b from t1 ta (a, b, c) natural join t2 tb (a, d);\n\nis. Not sure how this impacts the rule representation or dump/reload of\nviews.\n\n> BTW, I'm practically certain that I tried this same example last night\n> which is more like what I would expect. Did you change the behavior\n> w.r.t. adding additional columns to the alias list just recently, like\n> since 11PM EST yesterday?\n\nYeah right ;)\n\nI've only committed one set of patches; don't remember what time that\nwas...\n\n> PS: Am I the only one who thinks that column aliases done this way are\n> extremely brain-dead? If you write \"FROM table alias (a b c)\" then\n> you've just written a query that depends critically and non-obviously\n> on which columns are first, second, third in the physical table.\n> One of the few things I know about good SQL style is that you don't\n> write INSERT without an explicit column list, because such code will\n> break (possibly without warning) if you insert/delete/rearrange columns\n> in the table definition. This alias facility seems to be just another\n> method of shooting yourself in the foot with that same bullet...\n\nIt's required for doing complex join syntax, and is allowed for other\njoins as well. But we certainly have got along just fine without it, eh?\n\n - Thomas\n\n-- \nThomas Lockhart\nCaltech/JPL\nInterferometry Systems and Technology\n",
"msg_date": "Wed, 16 Feb 2000 00:46:13 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Almost there on column aliases"
},
{
"msg_contents": "> I tried, but failed, to refrain from remarking about the horrible\n> style of the function declaration --- either it's static (which\n> looks like the right answer here) or it should be declared in\n> a header file. The above method of preventing gcc from telling\n> you how horrible your style is is just, well, never mind.\n\nUh, Tom, it is unused code, and I use this kind of thing while doing\ndevelopment. I did warn about some crufty stuff, and glad you agree :/\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 16 Feb 2000 06:10:29 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Almost there on column aliases"
},
{
"msg_contents": "> >> BTW, the rule regress test is presently failing because I modified\n> >> ruleutils.c to dump the Attr list if it is not null, rather than\n> >> only if the refname is different from the relname:\n\nI'm currently (2000-02-16 15:40 GMT) seeing the rules test\nblank-filling the \"bpchar\" fields. Do you see that?\n\n> > istm that the column aliases (rte->ref->attrs) should not be written out\n> > if the table alias (rte->ref->relname) is not written.\n> Hmm. If it's not possible to specify column aliases without specifying\n> a table-name alias, then that's OK ... but I thought table aliases were\n> optional.\n\nI've just looked it up in the Date book: table aliases are optional in\ngeneral, but column aliases require a table alias. The bnf looks like\n\n table [ [ AS ] range-variable [ ( column-commalist ) ] ]\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 16 Feb 2000 15:48:32 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Almost there on column aliases"
},
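Per that BNF, then (illustrative):

SELECT x FROM t1 ta (x);      -- legal: alias list after a range variable
SELECT x FROM t1 AS ta (x);   -- legal: AS is optional
SELECT x FROM t1 (x);         -- illegal: column aliases without a table alias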
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> I'm currently (2000-02-16 15:40 GMT) seeing the rules test\n> blank-filling the \"bpchar\" fields. Do you see that?\n\nNo ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 17 Feb 2000 02:28:04 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Almost there on column aliases "
},
{
"msg_contents": "> > I'm currently (2000-02-16 15:40 GMT) seeing the rules test\n> > blank-filling the \"bpchar\" fields. Do you see that?\n\nHmm. Still seeing it; here is a snippet from a diff of\nresults/rules.out and expected/rules.out:\n\n...\n< rtest_emp | rtest_emp_ins | CREATE RULE rtest_emp_ins\n AS ON INSERT TO rtest_emp DO\n INSERT INTO rtest_emplog (ename, who, \"action\", newsal, oldsal)\n VALUES (new.ename, getpgusername(),\n 'hired '::bpchar, new.salary, '$0.00'::money);\n...\n> rtest_emp | rtest_emp_ins | CREATE RULE rtest_emp_ins\n AS ON INSERT TO rtest_emp DO\n INSERT INTO rtest_emplog (ename, who, \"action\", newsal, oldsal)\n VALUES (new.ename, getpgusername(),\n 'hired'::bpchar, new.salary, '$0.00'::money);\n...\n\nBut if you are not seeing it, then perhaps my \"make clean install\"\nisn't sufficient; I'll try a clean checkout sometime...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 17 Feb 2000 15:17:34 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Almost there on column aliases"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> I've just looked it up in the Date book: table aliases are optional in\n> general, but column aliases require a table alias. The bnf looks like\n\n> table [ [ AS ] range-variable [ ( column-commalist ) ] ]\n\nOK, but that doesn't really solve my concern about rule bloat, because\nif you write \"FROM table alias\", you'll still get a list of column names\nappended to that by the system.\n\nHere is a possible answer that I think would address both our concerns:\nkeep track of how many column aliases the user actually wrote (without\ncounting the Attr-list entries added by the system for its internal\nconvenience), and dump only the user-supplied aliases in rule strings.\nThis'd be easy enough to do with an extra field in Attr nodes.\nIt'd not only preserve compactness in the cases we previously handled,\nbut it'd make the reverse-listed display of rules more like the original\nquery in cases where the user did write some aliases (but not a full\nset).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 17 Feb 2000 10:57:37 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Almost there on column aliases "
},
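One way to sketch the extra field (the field name is invented, and the surrounding layout is schematic, not the actual parsenodes.h definition):

typedef struct Attr
{
	NodeTag		type;
	char	   *relname;		/* the table alias (range variable) */
	List	   *attrs;			/* column aliases, padded out by the system */
	int			numUserAttrs;	/* invented: how many of attrs the user
								 * actually wrote; ruleutils.c would dump
								 * only that many */
} Attr;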
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n>>>> I'm currently (2000-02-16 15:40 GMT) seeing the rules test\n>>>> blank-filling the \"bpchar\" fields. Do you see that?\n\n> Hmm. Still seeing it; here is a snippet from a diff of\n> results/rules.out and expected/rules.out:\n\n> ...\n> < rtest_emp | rtest_emp_ins | CREATE RULE rtest_emp_ins\n> AS ON INSERT TO rtest_emp DO\n> INSERT INTO rtest_emplog (ename, who, \"action\", newsal, oldsal)\n> VALUES (new.ename, getpgusername(),\n> 'hired '::bpchar, new.salary, '$0.00'::money);\n> ...\n>> rtest_emp | rtest_emp_ins | CREATE RULE rtest_emp_ins\n> AS ON INSERT TO rtest_emp DO\n> INSERT INTO rtest_emplog (ename, who, \"action\", newsal, oldsal)\n> VALUES (new.ename, getpgusername(),\n> 'hired'::bpchar, new.salary, '$0.00'::money);\n> ...\n\nOh, I'm sorry, I *am* seeing that. I don't think this has anything\nto do with your changes; the system's been producing pre-padded\nstrings in those tests for a while now, at least on good days ;-).\nIf you look closely you'll see that the padded string has just been\npre-coerced to the length of the char() target field. I don't think\nthat's wrong.\n\nThe difference is normally masked from causing a comparison failure\nin the regress tests because we use diff -w to look for differences.\nProbably the expected file was last updated at a time when it wasn't\ndoing that...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 17 Feb 2000 11:24:28 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Almost there on column aliases "
},
{
"msg_contents": "> OK, but that doesn't really solve my concern about rule bloat, because\n> if you write \"FROM table alias\", you'll still get a list of column names\n> appended to that by the system.\n> Here is a possible answer that I think would address both our concerns:\n> keep track of how many column aliases the user actually wrote (without\n> counting the Attr-list entries added by the system for its internal\n> convenience), and dump only the user-supplied aliases in rule strings.\n> This'd be easy enough to do with an extra field in Attr nodes.\n> It'd not only preserve compactness in the cases we previously handled,\n> but it'd make the reverse-listed display of rules more like the original\n> query in cases where the user did write some aliases (but not a full\n> set).\n\nI put the Attr node into the rte because column aliases need to travel\nwith the table alias. But I'm not sure how the table alias is actually\nused after being transformed back and forth for rules. And I'm not\nsure how we would use the column aliases beyond the parser in the\nfuture.\n\nHow about if I have the rte->ref Attr node hold *only* the column\naliases specified by the user (which addresses your concern), and then\nmake a \"hidden\" Attr node (or list of nodes; see below) which is build\nand used in the parser but which is never read or written by the\ndump/transformation stuff used for rules. So I'll define a new Attr *\nfield, say \"p_ref\" which is used internally but ignored after I'm done\nwith it. I'm not *certain* this will work: I still have issues\nregarding outer join syntax which I'm pretty sure are not addressed by\neither the status quo or this new suggestion, but at least with a\n\"hidden field\" I'd have some flexibility to muck around with how it is\ndefined and used.\n\nAlso, the \"layered aliases\" you can get with outer joins are not\nhandled yet, and I'm pretty sure that more work needs to be done on\nthe structures to get it to fly at all. e.g.\n\n SELECT n, m FROM (t1 ta (a, b) OUTER JOIN t2 tb (a, c)) AS tj (n,\nm);\n\ncannot currently be transformed properly for rules given the info\navailable in the existing structures. This is true because there is no\nequivalent query which allows you to specify anything like t1, ta, t2,\nor tb in the target list, and there is no way currently to carry along\nthe \"tj (n, m)\" info.\n\nComments on any or all?\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 17 Feb 2000 19:02:57 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Almost there on column aliases"
},
{
"msg_contents": "> >>>> I'm currently (2000-02-16 15:40 GMT) seeing the rules test\n> >>>> blank-filling the \"bpchar\" fields. Do you see that?\n> > Hmm. Still seeing it; here is a snippet from a diff of\n> > results/rules.out and expected/rules.out:\n> Oh, I'm sorry, I *am* seeing that. I don't think this has anything\n> to do with your changes; the system's been producing pre-padded\n> strings in those tests for a while now, at least on good days ;-).\n> If you look closely you'll see that the padded string has just been\n> pre-coerced to the length of the char() target field. I don't think\n> that's wrong.\n\nAh, right; \"bpchar\" is \"blank padded char\". But would there be any\ndownside to removing those blank pads when doing the transformation\nback to a printed query? i.e. if the outnode() functions stripped the\npadding? Or maybe at that point there is not enough info to do it?\n\nSeems like an ill-advised char(2000) or two in a table might bollux up\na lot of potential rules (even more than my extraneous column aliases\nmight ;)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 17 Feb 2000 19:19:36 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Almost there on column aliases"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> Ah, right; \"bpchar\" is \"blank padded char\". But would there be any\n> downside to removing those blank pads when doing the transformation\n> back to a printed query? i.e. if the outnode() functions stripped the\n> padding? Or maybe at that point there is not enough info to do it?\n\nThere's not enough info to know whether trailing spaces were inserted\nby the system or given by the user. I'd be pretty uncomfortable with\ntrying to make outfuncs.c apply a potentially semantics-changing\ntransformation like that. It isn't nearly smart enough to do the right\nthing at present, and trying to make it smart enough seems like the\nwrong direction to go in.\n\n> Seems like an ill-advised char(2000) or two in a table might bollux up\n> a lot of potential rules (even more than my extraneous column aliases\n> might ;)\n\nGood point... of course people will be hitting other problems besides\nrule length with such things, but...\n\nPerhaps the right answer here is that addition of length-coercion\nfunctions to an INSERT or UPDATE's targetlist entries doesn't belong\nin the parser, but should be handled downstream of the rule stuff\n--- right before the planner's constant-folding step seems like a\ngood spot. Then we wouldn't be paying for either the padding (if\nthe function got constant-folded out) or the function call (if not)\nin the stored rule's querytree.\n\nThis would also allow ruleutils.c to get rid of some grotty code it has\nto try to hide said functions in the reverse-listed query. Might make\nlife a little easier for the rule rewriter too, I dunno.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 17 Feb 2000 18:40:41 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Almost there on column aliases "
},
{
"msg_contents": "> cannot currently be transformed properly for rules given the info\n> available in the existing structures. This is true because there is no\n> equivalent query which allows you to specify anything like t1, ta, t2,\n> or tb in the target list, and there is no way currently to carry along\n> the \"tj (n, m)\" info.\n> \n> Comments on any or all?\n\nIt makes my head hurt. Is that a comment?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 17 Feb 2000 23:53:23 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Almost there on column aliases"
},
{
"msg_contents": "> It makes my head hurt. Is that a comment?\n\n:)\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 18 Feb 2000 05:16:48 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Almost there on column aliases"
}
] |
[
{
"msg_contents": "I'm having trouble getting updates from the cvsup server. My client\nconnects, but then just hangs. Does anyone else see this? Could\nsomeone do a preventative restart on the cvsupd server? TIA\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 11 Feb 2000 15:51:12 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "cvsupd OK?"
},
{
"msg_contents": "> I'm having trouble getting updates from the cvsup server. My client\n> connects, but then just hangs. Does anyone else see this? Could\n> someone do a preventative restart on the cvsupd server? TIA\n> \n> - Thomas\n\nLooks like it is working here, though a little slower than usual.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 11 Feb 2000 12:52:29 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cvsupd OK?"
},
{
"msg_contents": "* Thomas Lockhart <[email protected]> [000211 08:19] wrote:\n> I'm having trouble getting updates from the cvsup server. My client\n> connects, but then just hangs. Does anyone else see this? Could\n> someone do a preventative restart on the cvsupd server? TIA\n\nSee if adding either of these flags to your cvsup command help:\n\n-P m\n-P -\n\n-Alfred\n",
"msg_date": "Fri, 11 Feb 2000 11:23:37 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cvsupd OK?"
}
] |
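A sketch of how Alfred's suggestion might look on the command line, assuming the stock CVSup client and a supfile named postgres-supfile (per the cvsup documentation, -P m multiplexes all traffic over the single server connection, which can help when a stalled data channel leaves the client hanging; -g and -L 2 just suppress the GUI and raise verbosity):

    cvsup -g -L 2 -P m postgres-supfile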
[
{
"msg_contents": "============================================================================\n POSTGRESQL BUG REPORT TEMPLATE\n============================================================================\n\n\nYour name\t\t:\tBilly G. Allie\nYour email address\t:\[email protected]\n\n\nSystem Configuration\n---------------------\n Architecture (example: Intel Pentium) \t: Intel Pentium\n\n Operating System (example: Linux 2.0.26 ELF) \t: UnixWare 7.0.1\n\n PostgreSQL version (example: PostgreSQL-6.5.1): PostgreSQL-6.6\n\n Compiler used (example: gcc 2.8.0)\t\t: SCO UDK\n\n\nPlease enter a FULL description of your problem:\n------------------------------------------------\n1. ecpg fails to link. The error message is:\n\n Undefined first referenced\n symbol in file\n nocachegetattr pgc.o\n\n2. trigger.c fails to compile due to a syntax error. It contains\n a switch statement that has an empty default label. A label of a\n switch statement must be followed by a statement (or a label which\n is followed by a statement (or a label which ...)).\n\n3. Files include stringinfo.h failed to compile. The macro,\n 'appendStringInfoCharMacro' is implemented with a '?:' operation\n that returns a void expression for the true part and a char expresion\n for the false part. Both the true and false parts of the '?:' oper-\n ator must return the same type.\n\nPlease describe a way to repeat the problem. Please try to provide a\nconcise reproducible example, if at all possible: \n----------------------------------------------------------------------\n1. Compile ecpg.\n\n2. Compile with a ANSI C compiler that enforces the standard :->\n\n3. Compile with an ANSI C compiler that enforces the standard :->\n\nIf you know how this problem might be fixed, list the solution below:\n---------------------------------------------------------------------\n1. Don't know.\n\n2. Apply the attached patch.\n\n3. Apply the attached patch.\n\n\n\n\n____ | Billy G. Allie | Domain....: [email protected]\n| /| | 7436 Hartwell | Compuserve: 76337,2061\n|-/-|----- | Dearborn, MI 48126| MSN.......: [email protected]\n|/ |LLIE | (313) 582-1540 |",
"msg_date": "Sun, 13 Feb 2000 00:52:36 -0500",
"msg_from": "\"Billy G. Allie\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Problems compiling latest CVS sources."
},
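A small self-contained sketch of the two ISO C complaints in points 2 and 3 above, with the kind of fix the attached patch presumably makes; the names here are illustrative, not the actual trigger.c or stringinfo.h code:

    #include <stdio.h>

    int
    main(void)
    {
        char    buf[8];
        int     len = 0;
        int     n = 2;

        switch (n)
        {
            case 1:
                printf("one\n");
                break;
            default:
                /* point 2: a label must be followed by a statement;
                 * an empty statement (or "break;") satisfies strict
                 * compilers such as the SCO UDK */
                ;
        }

        /* point 3: both arms of '?:' must have compatible types, so a
         * void-valued true arm cannot be paired with a char-valued
         * false arm; an if/else sidesteps the rule entirely */
        if (len >= (int) (sizeof(buf) - 1))
            printf("would enlarge the buffer here\n");  /* void-ish arm */
        else
            buf[len++] = 'x';                           /* char arm */

        printf("len = %d, buf[0] = %c\n", len, buf[0]);
        return 0;
    }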
{
"msg_contents": "Applied. \n\n-- Start of PGP signed section.\n> ============================================================================\n> POSTGRESQL BUG REPORT TEMPLATE\n> ============================================================================\n> \n> \n> Your name\t\t:\tBilly G. Allie\n> Your email address\t:\[email protected]\n> \n> \n> System Configuration\n> ---------------------\n> Architecture (example: Intel Pentium) \t: Intel Pentium\n> \n> Operating System (example: Linux 2.0.26 ELF) \t: UnixWare 7.0.1\n> \n> PostgreSQL version (example: PostgreSQL-6.5.1): PostgreSQL-6.6\n> \n> Compiler used (example: gcc 2.8.0)\t\t: SCO UDK\n> \n> \n> Please enter a FULL description of your problem:\n> ------------------------------------------------\n> 1. ecpg fails to link. The error message is:\n> \n> Undefined first referenced\n> symbol in file\n> nocachegetattr pgc.o\n> \n> 2. trigger.c fails to compile due to a syntax error. It contains\n> a switch statement that has an empty default label. A label of a\n> switch statement must be followed by a statement (or a label which\n> is followed by a statement (or a label which ...)).\n> \n> 3. Files include stringinfo.h failed to compile. The macro,\n> 'appendStringInfoCharMacro' is implemented with a '?:' operation\n> that returns a void expression for the true part and a char expresion\n> for the false part. Both the true and false parts of the '?:' oper-\n> ator must return the same type.\n> \n> Please describe a way to repeat the problem. Please try to provide a\n> concise reproducible example, if at all possible: \n> ----------------------------------------------------------------------\n> 1. Compile ecpg.\n> \n> 2. Compile with a ANSI C compiler that enforces the standard :->\n> \n> 3. Compile with an ANSI C compiler that enforces the standard :->\n> \n> If you know how this problem might be fixed, list the solution below:\n> ---------------------------------------------------------------------\n> 1. Don't know.\n> \n> 2. Apply the attached patch.\n> \n> 3. Apply the attached patch.\n> \n> \nContent-Description: uw720000213.patch\n\n[Attachment, skipping...]\n\n> ____ | Billy G. Allie | Domain....: [email protected]\n> | /| | 7436 Hartwell | Compuserve: 76337,2061\n> |-/-|----- | Dearborn, MI 48126| MSN.......: [email protected]\n> |/ |LLIE | (313) 582-1540 | \n-- End of PGP section, PGP failed!\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 13 Feb 2000 08:20:33 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problems compiling latest CVS sources."
}
] |
[
{
"msg_contents": "\nMorning all ...\n\n\tI tried to jump into this a little while back, and fell flat on my\nface (tried to do too much simultaneously), so tonight I started again\njust trying to address one 'piece' at a time, and the end results look\npromising ...\n\n\tAvailable at ftp://ftp.postgresql.org/pub is a tar file called\npgsql-support.tar.gz that, right now, just contains a\n\"self-contained\" libpq distribution (I'm going to be converting things\nover one at a time)...\n\n\tThe idea is that there are alot of sites out there that don't need\nthe whole backend sources, they only need the bin/interfaces stuff, so why\ndownload a 7+meg file. Its also giving me a chance to play with\nlibtool/automake :)\n\n\tRight now, if you download the above file, untar it and type:\n\n\t./configure\n\tcd interfaces/libpq\n\tgmake\n\n\tIt will build both static and shared libraries for libpq ... or,\nrather, it does on both a FreeBSD 4.0-CURRENT and Solaris 2.6 machine,\nboth running gcc ...\n\n\tThe Solaris 2.6 machine I tried it on doesn't have libtool\ninstalled on it, so everything *appears* to be packaged properly ... \n\n\tThere are no 'template' files, or 'Makefile.port' files or\nanything like that ... \n\n\tNote that the source code included in the above is not up to date\nwith the regular source tree, this is as much a test of a concept as\nanything else. \n\n\tWhat I'm curious about right now is how well the above works on\nthe various architectures ... and whether it even works on non-gcc\nenvironments ...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 14 Feb 2000 00:22:13 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgsql-support 'distribution' ..."
},
{
"msg_contents": "On Mon, 14 Feb 2000, The Hermit Hacker wrote:\n\n> \tThe idea is that there are alot of sites out there that don't need\n> the whole backend sources, they only need the bin/interfaces stuff, so why\n> download a 7+meg file. Its also giving me a chance to play with\n> libtool/automake :)\n\n> \tThere are no 'template' files, or 'Makefile.port' files or\n> anything like that ... \n\nHey Marc,\n\nI've had automake/libtool on my list of things to bring up for 7.1. If\nyou're already getting a head start, I better take a look at it. If you\ncan report good things from it, and no one else around here has any\nfundamental opposition to the concept, what do you think about us making a\npush for this once the new devel cycle starts?\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Tue, 15 Feb 2000 13:11:28 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pgsql-support 'distribution' ..."
}
] |
[
{
"msg_contents": "subscribe hackers\n\nDavid Anthony\nmailto:[email protected]\t\n312-1800\n\n",
"msg_date": "Mon, 14 Feb 2000 11:22:23 +0200",
"msg_from": "\"davida\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "subscribe hackers"
}
] |
[
{
"msg_contents": "Could anyone please give me an example on how I can define a function using\nlibpq if that is at all possible? \n\nThanks.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Mon, 14 Feb 2000 10:54:16 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "function defined in libpq?"
},
{
"msg_contents": "Peter Eisentraut wrote:\n\n> On Mon, 14 Feb 2000, Michael Meskes wrote:\n>\n> > Could anyone please give me an example on how I can define a function using\n> > libpq if that is at all possible?\n>\n> Huh? I'm sure you don't mean adding a declaration to a header file and\n> putting the definition in a .c file of your choice, as well as ensuring\n> that it compiles?\n>\n> --\n> Peter Eisentraut Sernanders vaeg 10:115\n> [email protected] 75262 Uppsala\n> http://yi.org/peter-e/ Sweden\n>\n\nI think what he wants to do is dynamically create a function from the client\nside by issuing a CREATE FUNCTION statement (presumably one of the built-ins?\nSQL, pl/pgSQL, pl/pgTcl.) For what its worth, I've issued a host of unlreated\nDDL statements through PQexec and all of them have worked as expected, although\nCREATE FUNCTION was never one of them...\n\nMike Mascari\n\n\n\n",
"msg_date": "Tue, 15 Feb 2000 03:36:41 -0500",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] function defined in libpq?"
},
{
"msg_contents": "Peter Eisentraut wrote:\n\n> On Tue, 15 Feb 2000, Mike Mascari wrote:\n>\n> > > On Mon, 14 Feb 2000, Michael Meskes wrote:\n> > >\n> > > > Could anyone please give me an example on how I can define a function using\n> > > > libpq if that is at all possible?\n>\n> > I think what he wants to do is dynamically create a function from the client\n> > side by issuing a CREATE FUNCTION statement (presumably one of the built-ins?\n> > SQL, pl/pgSQL, pl/pgTcl.) For what its worth, I've issued a host of unlreated\n> > DDL statements through PQexec and all of them have worked as expected, although\n> > CREATE FUNCTION was never one of them...\n>\n> Why wouldn't it work? Or, much more interesting, how else would you do it?\n>\n>\n\nYes. Sorry. Stupid answer on my part. psql is so nice, particularly the new one :-),\nthat I keep forgetting it is dependent entirely upon libpq....\n\nMike Mascari\n\n>\n> Peter Eisentraut Sernanders vaeg 10:115\n> [email protected] 75262 Uppsala\n> http://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Tue, 15 Feb 2000 03:45:44 -0500",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] function defined in libpq?"
},
{
"msg_contents": "On Mon, 14 Feb 2000, Michael Meskes wrote:\n\n> Could anyone please give me an example on how I can define a function using\n> libpq if that is at all possible? \n\nHuh? I'm sure you don't mean adding a declaration to a header file and\nputting the definition in a .c file of your choice, as well as ensuring\nthat it compiles?\n\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Tue, 15 Feb 2000 13:13:15 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] function defined in libpq?"
},
{
"msg_contents": "On Tue, Feb 15, 2000 at 01:13:15PM +0100, Peter Eisentraut wrote:\n> Huh? I'm sure you don't mean adding a declaration to a header file and\n> putting the definition in a .c file of your choice, as well as ensuring\n> that it compiles?\n\nOops. No, of course not. I meant writing a C function using libpq, making it\na shared library and then adding it to the backend via CREATE FUNCTION.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Tue, 15 Feb 2000 13:24:06 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] function defined in libpq?"
},
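A hedged sketch of the arrangement Michael describes; every name and path below is invented for illustration. The function is compiled into a shared object that records its libpq dependency at link time, and is then registered with this era's CREATE FUNCTION syntax:

    /* remote_count.c -- backend-loadable function that itself uses libpq */
    #include "libpq-fe.h"

    int
    remote_count(void)
    {
        PGconn     *conn = PQconnectdb("dbname=template1");
        PGresult   *res;
        int         n = -1;

        if (PQstatus(conn) == CONNECTION_OK)
        {
            res = PQexec(conn, "SELECT usename FROM pg_user");
            if (PQresultStatus(res) == PGRES_TUPLES_OK)
                n = PQntuples(res);
            PQclear(res);
        }
        PQfinish(conn);
        return n;
    }

Build it so the module carries its libpq dependency, then register it:

    gcc -fPIC -I/usr/local/pgsql/include -c remote_count.c
    gcc -shared -o remote_count.so remote_count.o -L/usr/local/pgsql/lib -lpq

    CREATE FUNCTION remote_count() RETURNS int4
        AS '/usr/local/pgsql/lib/remote_count.so' LANGUAGE 'c';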
{
"msg_contents": "On Tue, 15 Feb 2000, Mike Mascari wrote:\n\n> > On Mon, 14 Feb 2000, Michael Meskes wrote:\n> >\n> > > Could anyone please give me an example on how I can define a function using\n> > > libpq if that is at all possible?\n\n> I think what he wants to do is dynamically create a function from the client\n> side by issuing a CREATE FUNCTION statement (presumably one of the built-ins?\n> SQL, pl/pgSQL, pl/pgTcl.) For what its worth, I've issued a host of unlreated\n> DDL statements through PQexec and all of them have worked as expected, although\n> CREATE FUNCTION was never one of them...\n\nWhy wouldn't it work? Or, much more interesting, how else would you do it?\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Tue, 15 Feb 2000 14:43:56 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] function defined in libpq?"
},
{
"msg_contents": "On Tue, Feb 15, 2000 at 03:36:41AM -0500, Mike Mascari wrote:\n> I think what he wants to do is dynamically create a function from the client\n> side by issuing a CREATE FUNCTION statement (presumably one of the built-ins?\n> SQL, pl/pgSQL, pl/pgTcl.) For what its worth, I've issued a host of unlreated\n\nThe function itself is defined using libpq.\n\n> CREATE FUNCTION was never one of them...\n\nCreating the function through libpq works fine.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Tue, 15 Feb 2000 15:13:26 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] function defined in libpq?"
}
] |
[
{
"msg_contents": "Patch applied. Thanks.\n\n> Hi,\n> \n> I suspect that you are not the person to send this to, but I wasn't sure\n> where else to mail it. I am the maintainer of unixODBC, and we have a\n> set of code in our project that started life as the Postgres windows\n> ODBC driver, which has been ported back to unix. Anyway I have just\n> fixed a memory leak in the driver, and I cant see any mention of the fix\n> being done in the main Postgres code, so I thougth I would let you know.\n> \n> Its in the statement.c module, after the COMMIT statement has been\n> executed in SC_Execute, the code was\n> \n> /*// If we are in autocommit, we must send the commit. */\n> if ( ! self->internal && CC_is_in_autocommit(conn) &&\n> STMT_UPDATE(self)) {\n> CC_send_query(conn, \"COMMIT\", NULL);\n> CC_set_no_trans(conn);\n> }\n> \n> I have changed it to\n> \n> \n> /*// If we are in autocommit, we must send the commit. */\n> if ( ! self->internal && CC_is_in_autocommit(conn) &&\n> STMT_UPDATE(self)) {\n> res = CC_send_query(conn, \"COMMIT\", NULL);\n> QR_Destructor(res);\n> CC_set_no_trans(conn);\n> }\n> \n> Nick Gorham\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 14 Feb 2000 07:32:40 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres ODBC"
}
] |
[
{
"msg_contents": "Having been used to being able to bind tuple data from a result set to\nuser allocated memory (rather than having to copy the data out of the\n\"libpq\" like API's buffers to my memory locations) in my days working\nwith ODBC, Oracle & SQLServer's C-API, i created the following patch\nwhich i thought i'd submit. I've included a README as well. \n\nif anyone has a second it would be great to know if i'm doing something\nstupid or if there's anything i could do to get this patch to libpq in\nthe mainline releases. please CC: me as i'm not on this list. thanks.\n\njr\n\nFROM THE README:\n\nPOSTGRESQL BIND PATCH FOR POSTGRESQL VERSION 6.5.3\n\n1). INTRODUCTION\n\nanyone interested in a very simple binding API for PGSQL-libpq that has very little \nimpact on the libpq source code should read on.\n\nthe API is accessed through the following two functions:\n\n extern void PQsetResultDest(PGconn *conn, PGresAttValue* dest);\n extern void PQclearResultDest(PGconn *conn);\n\nwhich are found in libpq-fe.h once the patch is applied.\n\n2). HOW DO I USE THESE FUNCTIONS?\n\nuse libpq as normal, however when you want to bind the columns of a result set to\nspecific memory locations...\n\n a). construct an array of PGresAttValue's whose size is equal to the number of\n columns in the result set. if you don't you'll core dump!\n\n\ti.e. for \"select id, name from people\"\n\t >> PGresAttValue bind_vec[2];\n\n b). fill out the vector with the binding info. specifically each PGresAttValue\n must have a valid \"value\" ptr of the desired destination address, and\n and \"len\" that is equal to or bigger than the length of the column that will \n be returned.\t\n\n c). immediately before calling PQexec on a \"FETCH FORWARD 1\" sql statement call\n\t >> PQsetResultDest(conn, bind_vec); \t \t \n\n d). immediately after PQexec returns, call\n\t >> PQclearResultDest(conn);\n\n e). that's it. now the results of the fetch are in the memory locations \n you set up in your PGresAttValue array.\n\n3). EXTRA INFO\n\n a). if (PGresAttValue[i].len > column[i].len) then this patch will append a null\n terminator to the value. this happens to be very convenient when using \n strings.\n\n b). if (0==PGresAttValue[i].len) for any column i, then that column will\n not be bound but will be accessible through standard libpq API.\n\n4). 
BIGGER CODE SAMPLE\n \nhere's a more in depth code sample for a interface layer we have that sits on top\nof libpq (it uses the stdc++ library vector for the bind_vec shown above, and doesn't show step two which happens elsewhere) ...\n\nbool CCursorPGSql::fetch() throw (CDbError)\n{\n // build sql\n tchar cmd[128];\n if (-1==snprintf(cmd, sizeof(cmd), \"FETCH FORWARD 1 IN %s\", cursor_name))\n { \n CDbError err; \n snprintf(err.message, sizeof(err.message), \n\t_text(\"fetch: cmd buffer too short\")); \n throw err; \n }\n\n // setup bind locations\n if (bind_vector.size()>0)\n PQsetResultDest(db->postgres_handle(), bind_vector.begin());\n\n // execute it \n pg::result res(PQexec(db->postgres_handle(), cmd));\n const int rc = PQresultStatus(res);\n if (PGRES_BAD_RESPONSE==rc || PGRES_NONFATAL_ERROR==rc || PGRES_FATAL_ERROR==rc) THROW_DBERROR(db->postgres_handle());\n\n // clear bindings on connection\n if (bind_vector.size()>0) \n PQclearResultDest(db->postgres_handle());\n\n // return code means any data returned?\n return (PQntuples(res));\n}\n\nand the patch itself (against 6.5.2 but cleanly applies to 6.5.3)\n\ndiff -u postgresql-6.5.2/src/interfaces/libpq/fe-exec.c postgresql-6.5.2-with-bind/src/interfaces/libpq/fe-exec.c\n--- postgresql-6.5.2/src/interfaces/libpq/fe-exec.c\tThu May 27 21:54:53 1999\n+++ postgresql-6.5.2-with-bind/src/interfaces/libpq/fe-exec.c\tThu Oct 21 15:32:28 1999\n@@ -869,19 +869,53 @@\n \t\t\t\tvlen = vlen - 4;\n \t\t\tif (vlen < 0)\n \t\t\t\tvlen = 0;\n-\t\t\tif (tup[i].value == NULL)\n-\t\t\t{\n+\t\t\tif ((conn->tuple_destinations == NULL) || \n+\t\t\t (0==conn->tuple_destinations[i].len_max))\n+\t\t\t {\n+\n+\t\t\t if (tup[i].value == NULL)\n+\t\t\t {\n \t\t\t\ttup[i].value = (char *) pqResultAlloc(result, vlen + 1, binary);\n \t\t\t\tif (tup[i].value == NULL)\n-\t\t\t\t\tgoto outOfMemory;\n-\t\t\t}\n-\t\t\ttup[i].len = vlen;\n-\t\t\t/* read in the value */\n-\t\t\tif (vlen > 0)\n-\t\t\t\tif (pqGetnchar((char *) (tup[i].value), vlen, conn))\n+\t\t\t\t goto outOfMemory;\n+\t\t\t }\n+\n+\t\t\t /* read in the value */\n+\t\t\t if (vlen > 0)\n+\t\t\t if (pqGetnchar((char *) (tup[i].value), vlen, conn))\n+\t\t\t\treturn EOF;\n+\t\t\t /* we have to terminate this ourselves */\n+\t\t\t tup[i].value[vlen] = '\\0';\n+\t\t\t }\n+\t\t\telse\n+\t\t\t {\n+\t\t\t\tif (conn->tuple_destinations[i].len_max < vlen)\n+\t\t\t\t{\n+\t\t\t\t\tpqClearAsyncResult(conn);\n+\t\t\t\t\tsprintf(conn->errorMessage,\n+\t\t\t\t\t\t\t\"getAnotherTuple() -- column %d is %d bytes larger than bound destination\\n\", i, vlen-conn->tuple_destinations[i].len_max);\n+\t\t\t\t\tconn->result = PQmakeEmptyPGresult(conn, PGRES_FATAL_ERROR);\n+\t\t\t\t\tconn->asyncStatus = PGASYNC_READY;\n+\t\t\t\t\t/* Discard the broken message */\n+\t\t\t\t\tconn->inStart = conn->inEnd;\n \t\t\t\t\treturn EOF;\n-\t\t\t/* we have to terminate this ourselves */\n-\t\t\ttup[i].value[vlen] = '\\0';\n+\t\t\t\t}\n+\n+\t\t\t\t/* we set length returned no matter what */\n+\t\t\t *(conn->tuple_destinations[i].len_returned) = vlen;\n+\n+\t\t\t\t/* read in the value */\n+\t\t\t\tif (vlen > 0)\n+\t\t\t\t{\n+\t\t\t\t if (pqGetnchar((char *) (conn->tuple_destinations[i].value), vlen, conn))\n+\t\t\t\t return EOF;\n+\n+\t\t\t\t /* we only null terminate when there's space */\n+\t\t\t\t if (conn->tuple_destinations[i].len_max > vlen)\n+\t\t\t\t conn->tuple_destinations[i].value[vlen] = '\\0';\n+\t\t\t\t}\n+\t\t\t }\n+\t\t\ttup[i].len = vlen;\n \t\t}\n \t\t/* advance the bitmap stuff */\n \t\tbitcnt++;\n@@ -1921,4 +1955,18 @@\n 
\t\treturn 1;\n \telse\n \t\treturn 0;\n+}\n+\n+void\n+PQsetResultDest(PGconn* conn, PGbinding* _dest)\n+{\n+ if (0==conn) return;\n+ conn->tuple_destinations = _dest;\n+}\n+\n+void\n+PQclearResultDest(PGconn* conn)\n+{\n+ if (0==conn) return;\n+ conn->tuple_destinations = 0;\n }\ndiff -u postgresql-6.5.2/src/interfaces/libpq/libpq-fe.h postgresql-6.5.2-with-bind/src/interfaces/libpq/libpq-fe.h\n--- postgresql-6.5.2/src/interfaces/libpq/libpq-fe.h\tTue May 25 12:15:13 1999\n+++ postgresql-6.5.2-with-bind/src/interfaces/libpq/libpq-fe.h\tThu Oct 21 15:37:04 1999\n@@ -27,6 +27,13 @@\n \n /* Application-visible enum types */\n \n+\ttypedef struct pgbinding\n+\t{\n+\t int len_max;\t\t/* [IN] length in bytes of the value buffer */\n+\t int* len_returned; /* [OUT] pointer to int that receives bytes returned */ \n+\t char* value;\t/* [OUT] actual value returned */\n+\t} PGbinding;\n+\n \ttypedef enum\n \t{\n \t\tCONNECTION_OK,\n@@ -198,6 +205,10 @@\n \t\t\t\t\t\t\t\t\t\t\t\t void *arg);\n \n /* === in fe-exec.c === */\n+\n+\t/* result destinationn functions (for column binding and other things...)*/\n+\textern void PQsetResultDest(PGconn *conn, PGbinding* dest);\n+\textern void PQclearResultDest(PGconn *conn);\n \n \t/* Simple synchronous query */\n \textern PGresult *PQexec(PGconn *conn, const char *query);\nOnly in postgresql-6.5.2-with-bind/src/interfaces/libpq: libpq-fe.h~\ndiff -u postgresql-6.5.2/src/interfaces/libpq/libpq-int.h postgresql-6.5.2-with-bind/src/interfaces/libpq/libpq-int.h\n--- postgresql-6.5.2/src/interfaces/libpq/libpq-int.h\tTue May 25 18:43:49 1999\n+++ postgresql-6.5.2-with-bind/src/interfaces/libpq/libpq-int.h\tThu Oct 21 15:34:36 1999\n@@ -217,6 +217,9 @@\n \tPGresult *result;\t\t\t/* result being constructed */\n \tPGresAttValue *curTuple;\t/* tuple currently being read */\n \n+ /* optional column bind location */\n+ PGbinding *tuple_destinations; \n+ \n \t/* Message space. Placed last for code-size reasons. */\n \tchar\t\terrorMessage[ERROR_MSG_LENGTH];\n };\n\n------------------------------------------------------------------------\nJoel W. Reed http://ruby.ddiworld.com/jreed\n----------------We're lost, but we're making good time.----------------",
"msg_date": "Mon, 14 Feb 2000 09:03:34 -0500",
"msg_from": "Joel Reed <[email protected]>",
"msg_from_op": true,
"msg_subject": "patch for binding tuples from result set to user allocated memory"
}
] |
[
{
"msg_contents": "\nActually, even currently, limit and order a non unique\norder by can skip results if the table is being modified.\nEven if no new rows are entered, as long as a row\non the border of the limit has been modified, you can\nget indeterminate results.\n\nacroyear=> create table test1 (a int, b varchar(10), c int);\nCREATE\nacroyear=> insert into test1 values (1, 'a', 1);\nINSERT 748222 1\nacroyear=> insert into test1 values (2, 'a', 1);\nINSERT 748223 1\nacroyear=> insert into test1 values (3, 'a', 1);\nINSERT 748224 1\nacroyear=> insert into test1 values (4, 'a', 1);\nINSERT 748225 1\nacroyear=> insert into test1 values (4, 'b', 2);\nINSERT 748226 1\nacroyear=> insert into test1 values (5, 'a', 1);\nINSERT 748227 1\nacroyear=> insert into test1 values (6, 'a', 1);\nINSERT 748228 1\nacroyear=> insert into test1 values (7, 'a', 1);\nINSERT 748229 1\nacroyear=> select a,b from test1 order by a limit 4;\na|b\n-+-\n1|a\n2|a\n3|a\n4|a\n(4 rows)\n\nacroyear=> update test1 set c=3 where a=4 and b='a';\nUPDATE 1\nacroyear=> select a,b from test1 order by a offset 4 limit 4;\na|b\n-+-\n4|a\n5|a\n6|a\n7|a\n(4 rows)\n\n",
"msg_date": "Mon, 14 Feb 2000 10:55:14 -0500",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Limit and Order by stuff"
}
] |
[
{
"msg_contents": "\nPostgreSQL, Inc has been approached by a third party concerning getting\nReplication implemented. Currently, we are seeking resume's and rates for\nany developer that would be interested in, and has the time to, work on\nsuch.\n\nInterested ppl should have a strong C background, knowledge of PostgreSQL\ninternals and the ability to work with others ... basically, the end\nresult has to be approved by the PostgreSQL Core Team to be acceptable.\n\nResume/CVs should be forwarded to [email protected]\n\nAlso, if any other companies and/or individuals would like to help fund\nthis, please contact [email protected], who will act as liason ...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n\n",
"msg_date": "Mon, 14 Feb 2000 12:50:36 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Programmers needed to implement Replication ..."
}
] |
[
{
"msg_contents": "> Looks good. Very clear. C structure changes appear minimal.\n\nBruce (aka King of patchers :)\n\nI've got a slightly modified patchset to apply this evening (though if\nyou have already applied this one it won't be a big problem). Will do\nit in ~8 hours, barring network troubles at hub.org ;)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Mon, 14 Feb 2000 17:16:10 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] Re: [HACKERS] Almost there on column aliases"
},
{
"msg_contents": "> > Looks good. Very clear. C structure changes appear minimal.\n> \n> Bruce (aka King of patchers :)\n> \n> I've got a slightly modified patchset to apply this evening (though if\n> you have already applied this one it won't be a big problem). Will do\n> it in ~8 hours, barring network troubles at hub.org ;)\n\nNo, I don't apply for committers. I know they prefer to do their own.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 14 Feb 2000 12:28:00 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Re: [HACKERS] Almost there on column aliases"
}
] |
[
{
"msg_contents": "Are we still 'go' for a beta release the 15th?\n\nBefore I pull a latenighter RPM building.....it would be nice to have an\nidea. TIA!\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Mon, 14 Feb 2000 16:19:16 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Release on the 15th?"
},
{
"msg_contents": "Lamar Owen <[email protected]> writes:\n> Are we still 'go' for a beta release the 15th?\n\nUm ... I'm not ready ...\n\nCouple more days, Marc?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 14 Feb 2000 17:46:36 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Release on the 15th? "
},
{
"msg_contents": "On Mon, 14 Feb 2000, Tom Lane wrote:\n\n> Lamar Owen <[email protected]> writes:\n> > Are we still 'go' for a beta release the 15th?\n> \n> Um ... I'm not ready ...\n> \n> Couple more days, Marc?\n\nSay Monday?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 14 Feb 2000 21:19:48 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Release on the 15th? "
},
{
"msg_contents": "> Are we still 'go' for a beta release the 15th?\n> Before I pull a latenighter RPM building.....it would be nice to have an\n> idea. TIA!\n\nUh, it's never been a good idea to set your watch by our beta release\nschedule. In fact, for purposes of RPM building, I'd suggest lagging\neven up to a day or two to see if some immediate problems crop up and\nare fixed.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 15 Feb 2000 02:44:58 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Release on the 15th?"
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n>> Couple more days, Marc?\n\n> Say Monday?\n\nI think I can do Monday, but I don't know where Thomas is. Doesn't\nhe still want to squeeze in the date/time type consolidation?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 15 Feb 2000 01:33:01 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Release on the 15th? "
},
{
"msg_contents": "Thomas Lockhart wrote:\n> \n> > Are we still 'go' for a beta release the 15th?\n\n> Uh, it's never been a good idea to set your watch by our beta release\n> schedule.\n\nI have found that out, acutally, with the three minors to 6.5.\n\n> In fact, for purposes of RPM building, I'd suggest lagging\n> even up to a day or two to see if some immediate problems crop up and\n> are fixed.\n\nWell, having been on the other side of the fence, so to speak, WRT the\nRPM's, I may be overreacting a little. I was always aggravated and\nannoyed by the long lag (up to six months prior to 6.5) between a\nPostgreSQL release and an RPM for me to bang on. While I could very\nwell have pulled the source tarball and gone through a conversion from\nan RPM installation to a tarball installation, I was not too enamored of\nthat approach, just to have to move the other way at RedHat-upgrade\ntime.\n\nSo, I got involved in the RPM building process primarily so that RPM\nPostgreSQL users that want to beta test (if I was interested in doing\nso, I know there were and are more) can have a timely beta to test. \nMaybe I'm a little too zealous in this regards....\n\nHowever, even the packaging itself this go around is beta. I will want\nto have RPM's out there being tested long before a final 7.0 release. \nPlus, since I am building on RedHat for the RPM's, I catch build-time\nbugs for that OS early.\n\nBut, if you feel I should lag a couple of days, I can do that.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 15 Feb 2000 10:59:39 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Release on the 15th?"
},
{
"msg_contents": "On Tue, 15 Feb 2000, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> >> Couple more days, Marc?\n> \n> > Say Monday?\n> \n> I think I can do Monday, but I don't know where Thomas is. Doesn't\n> he still want to squeeze in the date/time type consolidation?\n\nOkay, let's set Monday for now, and re-evaluate, let's say, Friday, as to\nwhether we need to postpone a little bit more ... \n\n\n",
"msg_date": "Tue, 15 Feb 2000 13:27:06 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Release on the 15th? "
},
{
"msg_contents": "> I think I can do Monday, but I don't know where Thomas is. Doesn't\n> he still want to squeeze in the date/time type consolidation?\n\nI'm building to test now. And will be out of town from this weekend\nthrough the next (9 days). I should be able to get the datetime stuff\nand Jan's parser stuff done beforehand...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 16 Feb 2000 03:05:59 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Release on the 15th?"
}
] |
[
{
"msg_contents": "\nOkay, I may be missing something here, but:\n\ngmake[2]: Entering directory `/usr/local/pgsql/src/pgsql/src/backend/parser'\ngcc -I../../include -I../../backend -O2 -m486 -pipe -Wall -Wmissing-prototypes -Wmissing-declarations -I.. -Wno-error -c scansup.c -o scansup.o\nIn file included from scansup.c:20:\n../../include/miscadmin.h:225: syntax error before `pid'\ngmake[2]: *** [scansup.o] Error 1\n\nLooking at include/miscadmin.h:\n\n=========\nextern int SetPidFile(pid_t pid);\n\n#endif /* MISCADMIN_H */\n=========\n\nbut I can't find anywhere that pid_t is defined, and the cvs logs don't\nappear to indicate that anyone has touched that file in a few weeks ...\n\nSo, am I missing something? This is using CVS source as of today, on\nFreeBSD 4.0-CURRENT ... \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 14 Feb 2000 21:11:23 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "pid_t define missing in include/miscadmin.h ..."
},
{
"msg_contents": "pid_t is in /usr/include/sys/types.h. Maybe it is missing that include?\n\n\n\n> Okay, I may be missing something here, but:\n> \n> gmake[2]: Entering directory `/usr/local/pgsql/src/pgsql/src/backend/parser'\n> gcc -I../../include -I../../backend -O2 -m486 -pipe -Wall -Wmissing-prototypes -Wmissing-declarations -I.. -Wno-error -c scansup.c -o scansup.o\n> In file included from scansup.c:20:\n> ../../include/miscadmin.h:225: syntax error before `pid'\n> gmake[2]: *** [scansup.o] Error 1\n> \n> Looking at include/miscadmin.h:\n> \n> =========\n> extern int SetPidFile(pid_t pid);\n> \n> #endif /* MISCADMIN_H */\n> =========\n> \n> but I can't find anywhere that pid_t is defined, and the cvs logs don't\n> appear to indicate that anyone has touched that file in a few weeks ...\n\n\n\n> \n\n\n> So, am I missing something? This is using CVS source as of today, on\n> FreeBSD 4.0-CURRENT ... \n> \n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 14 Feb 2000 21:08:20 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pid_t define missing in include/miscadmin.h ..."
},
{
"msg_contents": "On Mon, 14 Feb 2000, Bruce Momjian wrote:\n\n> pid_t is in /usr/include/sys/types.h. Maybe it is missing that include?\n\nokay guys, I've had two answers so far that I would *really* have to be a\nbad programmer not to have already checked :)\n\nmiscadmin.h doesn't include sys/types.h ... if I add sys/types.h to\nmiscadmin.h, it compiles fine ... the first place I looked was\n/usr/include/sys/types.h for this ...\n\nAccording to sources as of a couple of hours ago, sys/types.h isn't\nincluded in anywhere I can think of off hand:\n\n%ls\nCVS config.h mb port storage\naccess config.h.in miscadmin.h postgres.h strdup.h\nbootstrap dynloader.h nodes postgres_ext.h tcop\nc.h executor optimizer regex utils\ncatalog lib os.h rewrite version.h\ncommands libpq parser rusagestub.h\nversion.h.in\n%grep TYPE config.h\n#define SOCKET_SIZE_TYPE size_t\n%grep type.h *.h\n%grep type.h */*.h\ncatalog/pg_type.h: * pg_type.h\ncatalog/pg_type.h: * $Id: pg_type.h,v 1.79 2000/01/26 05:57:59 momjian Exp\nexecutor/spi.h:#include \"catalog/pg_type.h\"\nparser/parse_coerce.h:#include \"catalog/pg_type.h\"\nparser/parse_expr.h:#include \"parser/parse_type.h\"\nparser/parse_type.h: * parse_type.h\nparser/parse_type.h: * $Id: parse_type.h,v 1.12 2000/01/26 05:58:27\nmomjian Exp $\nutils/acl.h: change the aclitem typlen in pg_type.h */\nutils/inet.h: /* add IPV6 address type here */\n\nAnd I've checked scansup.c itself, which includes <ctype.h>, but <ctype.h>\ndoesn't include <sys/types.h> either ... or does it on other ppls OSs?\n\nBasically, where are other ppl getting <sys/types.h> included? :)\n\n\n\n > \n> \n> \n> > Okay, I may be missing something here, but:\n> > \n> > gmake[2]: Entering directory `/usr/local/pgsql/src/pgsql/src/backend/parser'\n> > gcc -I../../include -I../../backend -O2 -m486 -pipe -Wall -Wmissing-prototypes -Wmissing-declarations -I.. -Wno-error -c scansup.c -o scansup.o\n> > In file included from scansup.c:20:\n> > ../../include/miscadmin.h:225: syntax error before `pid'\n> > gmake[2]: *** [scansup.o] Error 1\n> > \n> > Looking at include/miscadmin.h:\n> > \n> > =========\n> > extern int SetPidFile(pid_t pid);\n> > \n> > #endif /* MISCADMIN_H */\n> > =========\n> > \n> > but I can't find anywhere that pid_t is defined, and the cvs logs don't\n> > appear to indicate that anyone has touched that file in a few weeks ...\n> \n> \n> \n> > \n> \n> \n> > So, am I missing something? This is using CVS source as of today, on\n> > FreeBSD 4.0-CURRENT ... \n> > \n> > Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> > Systems Administrator @ hub.org \n> > primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> > \n> > \n> > ************\n> > \n> \n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 14 Feb 2000 22:26:42 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] pid_t define missing in include/miscadmin.h ..."
},
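For reference, the one-line shape of the fix under discussion: pid_t comes from the system header <sys/types.h>, which other platforms' headers apparently pull in indirectly (which would explain why only the FreeBSD builds break), so the header that uses it in a prototype should include it explicitly:

    /* include/miscadmin.h */
    #include <sys/types.h>          /* for pid_t */

    extern int SetPidFile(pid_t pid);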
{
"msg_contents": "On Mon, 14 Feb 2000, Alfred Perlstein wrote:\n\n> * The Hermit Hacker <[email protected]> [000214 17:43] wrote:\n> > \n> > Okay, I may be missing something here, but:\n> > \n> > gmake[2]: Entering directory `/usr/local/pgsql/src/pgsql/src/backend/parser'\n> > gcc -I../../include -I../../backend -O2 -m486 -pipe -Wall -Wmissing-prototypes -Wmissing-declarations -I.. -Wno-error -c scansup.c -o scansup.o\n> > In file included from scansup.c:20:\n> > ../../include/miscadmin.h:225: syntax error before `pid'\n> > gmake[2]: *** [scansup.o] Error 1\n> > \n> > Looking at include/miscadmin.h:\n> > \n> > =========\n> > extern int SetPidFile(pid_t pid);\n> > \n> > #endif /* MISCADMIN_H */\n> > =========\n> > \n> > but I can't find anywhere that pid_t is defined, and the cvs logs don't\n> > appear to indicate that anyone has touched that file in a few weeks ...\n> > \n> > So, am I missing something? This is using CVS source as of today, on\n> > FreeBSD 4.0-CURRENT ... \n> \n> Someone forgot to include <sys/types.h>. I brought this up before but my\n> inexperiance with the postgresql build leaves me without a solution except\n> a simple #include directive in the offending file.\n\nThat's what I'm thinking too ... but *so far* its looking like its only\naffecting the FreeBSDers :(\n\n\n",
"msg_date": "Mon, 14 Feb 2000 22:46:26 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] pid_t define missing in include/miscadmin.h ..."
},
{
"msg_contents": "* The Hermit Hacker <[email protected]> [000214 17:43] wrote:\n> \n> Okay, I may be missing something here, but:\n> \n> gmake[2]: Entering directory `/usr/local/pgsql/src/pgsql/src/backend/parser'\n> gcc -I../../include -I../../backend -O2 -m486 -pipe -Wall -Wmissing-prototypes -Wmissing-declarations -I.. -Wno-error -c scansup.c -o scansup.o\n> In file included from scansup.c:20:\n> ../../include/miscadmin.h:225: syntax error before `pid'\n> gmake[2]: *** [scansup.o] Error 1\n> \n> Looking at include/miscadmin.h:\n> \n> =========\n> extern int SetPidFile(pid_t pid);\n> \n> #endif /* MISCADMIN_H */\n> =========\n> \n> but I can't find anywhere that pid_t is defined, and the cvs logs don't\n> appear to indicate that anyone has touched that file in a few weeks ...\n> \n> So, am I missing something? This is using CVS source as of today, on\n> FreeBSD 4.0-CURRENT ... \n\nSomeone forgot to include <sys/types.h>. I brought this up before but my\ninexperiance with the postgresql build leaves me without a solution except\na simple #include directive in the offending file.\n\n-Alfred\n",
"msg_date": "Mon, 14 Feb 2000 18:55:24 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pid_t define missing in include/miscadmin.h ..."
}
] |
[
{
"msg_contents": "subscribe\n\n\n\n\n\n\n\nsubscribe",
"msg_date": "Tue, 15 Feb 2000 15:09:20 +1300",
"msg_from": "\"aubrey\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "subscribe"
}
] |
[
{
"msg_contents": "I'm CC'ng this to the -hackers list, as this may be something that should\nbe looked into more deeply, as ppl start looking at pg_dump'ng their\ndatabases to upgrade to 7.0 ...\n\nIncluded is a pg_dump from a 7.0 system that does work ... under my v6.5.3\nsystem, doing the same thing dies at the \\connect - ipmeter shown below\n... the v6.5.3 system is what is running on postgresql.org/hub.org, which\nis a FreeBSD 3.4-STABLE server ...\n\nI'm going to try a build of v6.5.3 on my home machine and see if i can\nrecreate the seg fault ...\n\nOn Tue, 15 Feb 2000, Sevo Stille wrote:\n\n> The Hermit Hacker wrote:\n> > \n> > Good question ... I'm getting:\n> > \n> > pq_recvbuf: unexpected EOF on client connection\n> > \n> > from the backend, which *sounds* like psql is crashing ...\n> > \n> > gdb shows it dying:\n> > \n> > (gdb) where\n> > #0 0x4814d0bc in strcmp () from /usr/lib/libc.so.3\n> > #1 0x804fb28 in becomeUser ()\n> > #2 0x804f268 in dumpIndices ()\n> > #3 0x80501fa in dumpSchemaIdx ()\n> > #4 0x804a8c2 in main ()\n> > #5 0x80494dd in _start ()\n> \n> Strange that it seems to trap the error and exit, though. Well, I'll\n> have a look at the source.\n> \n> > I've just gotten v7.0 compiled and installed ...\n> \n> Good. That ought to isolate the error a bit further.\n\nIts definitely not a problem with v7.0 ... just got her all up and\nrunning, as far as the database is concerned ...\n\n-----------------------------\nCREATE OPERATOR >>= (PROCEDURE = port_pinecmp ,\n LEFTARG = port ,\n RIGHTARG = int4 );\n\\connect - ipmeter\nCREATE UNIQUE INDEX \"users_name_key\" on \"users\" using btree ( \"name\" \"text_ops\" );\nCREATE UNIQUE INDEX \"importerstatus_filename_key\" on \"importerstatus\" using btree ( \"filename\" \"text_ops\" );\n-----------------------------\n\nfrom what I can tell, its at the \\connect - ipmeter part that it dumps ...",
"msg_date": "Mon, 14 Feb 2000 22:44:49 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: schema: pg_dump -s ipmeter (fwd)"
},
{
"msg_contents": "On Mon, 14 Feb 2000, The Hermit Hacker wrote:\n\n> On Tue, 15 Feb 2000, Sevo Stille wrote:\n> \n> > The Hermit Hacker wrote:\n> > > \n> > > Good question ... I'm getting:\n> > > \n> > > pq_recvbuf: unexpected EOF on client connection\n> > > \n> > > from the backend, which *sounds* like psql is crashing ...\n> > > \n> > > gdb shows it dying:\n> > > \n> > > (gdb) where\n> > > #0 0x4814d0bc in strcmp () from /usr/lib/libc.so.3\n> > > #1 0x804fb28 in becomeUser ()\n> > > #2 0x804f268 in dumpIndices ()\n> > > #3 0x80501fa in dumpSchemaIdx ()\n> > > #4 0x804a8c2 in main ()\n> > > #5 0x80494dd in _start ()\n\nThat looks more like a pg_dump trace to me. If you didn't know yet,\npg_dump 7.0 can't be used with previous databases, so maybe that's a\nreason.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Tue, 15 Feb 2000 13:24:49 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: schema: pg_dump -s ipmeter (fwd)"
},
{
"msg_contents": "Peter Eisentraut wrote:\n\n> That looks more like a pg_dump trace to me. If you didn't know yet,\n> pg_dump 7.0 can't be used with previous databases, so maybe that's a\n> reason.\n\nIf I got Marc right, pg_dump 6.5.3 was crashing on him while dumping\nfrom a 6.5.3 db, and he can dump from 7.0 as well as from another 6.5.3\ninstallation. It rather looks like becomeUser croaks the fourth time it\nis called and the first time it is switching users, on that particular\ninstallation. Might be something gone wrong with the user, but it could\nas well be some obscure overflow problem that only rarely triggers. \n\nSevo\n",
"msg_date": "Tue, 15 Feb 2000 13:58:02 +0100",
"msg_from": "Sevo Stille <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: schema: pg_dump -s ipmeter (fwd)"
},
{
"msg_contents": "On Tue, 15 Feb 2000, Peter Eisentraut wrote:\n\n> On Mon, 14 Feb 2000, The Hermit Hacker wrote:\n> \n> > On Tue, 15 Feb 2000, Sevo Stille wrote:\n> > \n> > > The Hermit Hacker wrote:\n> > > > \n> > > > Good question ... I'm getting:\n> > > > \n> > > > pq_recvbuf: unexpected EOF on client connection\n> > > > \n> > > > from the backend, which *sounds* like psql is crashing ...\n> > > > \n> > > > gdb shows it dying:\n> > > > \n> > > > (gdb) where\n> > > > #0 0x4814d0bc in strcmp () from /usr/lib/libc.so.3\n> > > > #1 0x804fb28 in becomeUser ()\n> > > > #2 0x804f268 in dumpIndices ()\n> > > > #3 0x80501fa in dumpSchemaIdx ()\n> > > > #4 0x804a8c2 in main ()\n> > > > #5 0x80494dd in _start ()\n> \n> That looks more like a pg_dump trace to me. If you didn't know yet,\n> pg_dump 7.0 can't be used with previous databases, so maybe that's a\n> reason.\n\nI *really* hope you don't mean that you cn't pg_dump from v6.5.3 (what I\nwas trying) and reload again into 7.0? :)\n\n\n",
"msg_date": "Tue, 15 Feb 2000 13:23:42 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: schema: pg_dump -s ipmeter (fwd)"
},
{
"msg_contents": "On Tue, 15 Feb 2000, The Hermit Hacker wrote:\n\n> I *really* hope you don't mean that you cn't pg_dump from v6.5.3 (what I\n> was trying) and reload again into 7.0? :)\n\nNo, what I meant was that you can't use pg_dump 7.0 to dump non-7.0\ndatabases. At least it was like this last time I tried it.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Tue, 15 Feb 2000 18:38:18 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: schema: pg_dump -s ipmeter (fwd)"
},
{
"msg_contents": "On Tue, 15 Feb 2000, Peter Eisentraut wrote:\n\n> On Tue, 15 Feb 2000, The Hermit Hacker wrote:\n> \n> > I *really* hope you don't mean that you cn't pg_dump from v6.5.3 (what I\n> > was trying) and reload again into 7.0? :)\n> \n> No, what I meant was that you can't use pg_dump 7.0 to dump non-7.0\n> databases. At least it was like this last time I tried it.\n\nOkay, that I would expect ... :)\n\n\n",
"msg_date": "Tue, 15 Feb 2000 14:04:13 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: schema: pg_dump -s ipmeter (fwd)"
}
] |
[
{
"msg_contents": "Hi\n\nI'm trying to create a table that has only 8 fields or so. Two of the\nfields have CHECK's that are essentally \"LIKE 'this' OR LIKE 'that\".\nApparently, these restrictions add something to the tuple size, which I\nunderstand is set at 8192. How do I increase this limit? I read one\nposting that says it can be adjusted at compile time, but that it may be\ndangerous.\n\nMaybe there's a better way. How would you suggest allowing only 'AL',\n'AK', 'CA', etc. for a state field, for example? A c-function in a\nshared library?\n\nAnd where can I find the postgresql-hackers archive?\n\nThanks\n\nHowie\n\n",
"msg_date": "Tue, 15 Feb 2000 03:55:37 GMT",
"msg_from": "Howard Williams <[email protected]>",
"msg_from_op": true,
"msg_subject": "tuple is too big"
}
] |
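One hedged sketch of an alternative for Howard's state-field question, with invented table and column names: an IN list keeps the check expression far more compact than a chain of LIKEs, and moving the legal codes into a small lookup table avoids storing a large constraint expression at all:

    CREATE TABLE address (
        state char(2) CHECK (state IN ('AL', 'AK', 'CA'))
    );

    -- or validate against a reference table instead:
    CREATE TABLE state_code (code char(2) PRIMARY KEY);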
[
{
"msg_contents": "I've just committed the first cut at some \"join syntax\" improvements,\nand some other stuff including error message fixes and a start at\nPOSIX time zones. I'll start working on the date/time reunification\nnow, and Jan's gram.y shift/reduce problems after that.\n\nMy mail server seems to be down, so I'm off the air at the moment. If\nI'm missing something important, send mail to\[email protected] and I'll get it tomorrow...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 15 Feb 2000 03:59:15 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Great timing (not)"
}
] |
[
{
"msg_contents": "I've renamed the README for QNX4 to be consistant with the other\nplatform-specific FAQs. Let me know if that's a problem or if I've\ndone the wrong thing for this case...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 15 Feb 2000 05:31:56 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "README.qnx4 -> FAQ_QNX4"
},
{
"msg_contents": "I named it README.qnx4 because it is a more README like document than FAQ.\nThere are no questions and answers in it. BTW there is README.NT too.\n\nBut nevertheless feel free to name it as you like.\n\nAndreas Kardos\n\n> I've renamed the README for QNX4 to be consistant with the other\n> platform-specific FAQs. Let me know if that's a problem or if I've\n> done the wrong thing for this case...\n>\n> - Thomas\n>\n> --\n> Thomas Lockhart [email protected]\n> South Pasadena, California\n>\n> ************\n>\n\n",
"msg_date": "Tue, 15 Feb 2000 10:02:52 +0100",
"msg_from": "\"Kardos, Dr. Andreas\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] README.qnx4 -> FAQ_QNX4"
}
] |
[
{
"msg_contents": "Michael Meskes wrote:\n\n> Is it possible to define a function in language 'C' that needs more\n> libraries to work? I've got a small example of a function that works like a\n> charm when run against from a binary. However if I put this function inside\n> the server and execute it I get\n>\n> ERROR: parser: parse error at or near \"\"\n>\n> Not exactly an error message that explains itself. :-)\n>\n> I have put my function into a shared library to load it, but the library\n> itself needs other libraries. Is this at all possible?\n>\n> Michael\n> --\n> Michael Meskes | Go SF 49ers!\n> Th.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\n> Tel.: (+49) 2431/72651 | Use Debian GNU/Linux!\n> Email: [email protected] | Use PostgreSQL!\n\nThat's odd. Would it be possible for you to provide your compiliation/link\nstatement as well as your CREATE FUNCTION statement? I've a host of functions\nwhich use external libaries that work as expected (on Linux), including doing\nsome pretty weird stuff.\n\nMike Mascari\n\n\n",
"msg_date": "Tue, 15 Feb 2000 03:42:33 -0500",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] function question yet again"
},
{
"msg_contents": "Is it possible to define a function in language 'C' that needs more\nlibraries to work? I've got a small example of a function that works like a\ncharm when run against from a binary. However if I put this function inside\nthe server and execute it I get \n\nERROR: parser: parse error at or near \"\"\n\nNot exactly an error message that explains itself. :-)\n\nI have put my function into a shared library to load it, but the library\nitself needs other libraries. Is this at all possible?\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Tue, 15 Feb 2000 13:14:39 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "function question yet again"
},
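[Editorial note: for reference against Mike's request above, a hedged sketch of the era's registration statement for a C function in a shared library -- the function name, signature, and path here are hypothetical:

    -- The .so named in the AS clause, and everything it links against,
    -- must be resolvable by the backend's dynamic loader.
    CREATE FUNCTION myfunc(text) RETURNS text
        AS '/usr/local/pgsql/lib/myfunc.so'
        LANGUAGE 'c';

The dependency question Michael raises is exactly about the "everything it links against" part.]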
{
"msg_contents": "Michael Meskes wrote:\n> \n> Is it possible to define a function in language 'C' that needs more\n> libraries to work? I've got a small example of a function that works like a\n> charm when run against from a binary. However if I put this function inside\n> the server and execute it I get\n> \n> ERROR: parser: parse error at or near \"\"\n> \n> Not exactly an error message that explains itself. :-)\n\nActually, it is very suggestive of quoting or escaping errors.\nPresumably your statement terminates somewhere where you would not\nexpect it. \n \n> I have put my function into a shared library to load it, but the library\n> itself needs other libraries. Is this at all possible?\n\nIf the system knows how to find it, absolutely. That is, whatever you\ndepend on will have to be in a system or pgsql library directory. \n\nSevo\n",
"msg_date": "Tue, 15 Feb 2000 15:14:57 +0100",
"msg_from": "Sevo Stille <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] function question yet again"
},
{
"msg_contents": "On Tue, Feb 15, 2000 at 04:58:49AM -0800, Alfred Perlstein wrote:\n> As a temporary hack you may want to try linking the shared object that you\n> are creating with the static versions of the third level libraries.\n\nGood idea. Unfottunately it didn't change anything. Using ldd on my shared\nlibrary no tells me it is statically linked. But the error message remains\nthe same.\n\nI tried to write a log file and it appears the function does correctly\nconnect to the backend but cannot execute the select. IDK however if a\nfunction inside the backend is allowed to create a connection. But if not\nhow can I send a query over libpq?\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Tue, 15 Feb 2000 15:24:30 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] function question yet again"
},
{
"msg_contents": "On Tue, Feb 15, 2000 at 03:42:33AM -0500, Mike Mascari wrote:\n> That's odd. Would it be possible for you to provide your compiliation/link\n> statement as well as your CREATE FUNCTION statement? I've a host of functions\n> which use external libaries that work as expected (on Linux), including doing\n> some pretty weird stuff.\n\nI attach both files. My intend was to get this thing going by using ecpg.\nI'm not sure anymore if this is at all possible.\n\nMichael\n\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!",
"msg_date": "Tue, 15 Feb 2000 15:26:20 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] function question yet again"
},
{
"msg_contents": "Michael Meskes wrote:\n> \n> I attach both files. My intend was to get this thing going by using\n> ecpg.\n> I'm not sure anymore if this is at all possible.\n> \n> Michael\n> \n> --\n> Michael Meskes | Go SF 49ers!\n> Th.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\n> Tel.: (+49) 2431/72651 | Use Debian GNU/Linux!\n> Email: [email protected] | Use PostgreSQL!\n\nWow. I'm not quite sure why it shouldn't work, but I've never\nreconnected on the server side through libpq. Instead, I've\nalways used the SPI interface sequence of:\n\nSPI_connect()\nSPI_exec()\nSPI_getvalue()\nSPI_finish()\n\nI think I've tried in the past to reconnect on the server side\nthrough libpq but it always resulted in a core dump of the\nrunning backend.\n\nSorry I'm no more help then that, \n\nMike Mascari\n",
"msg_date": "Tue, 15 Feb 2000 10:09:35 -0500",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] function question yet again"
},
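[Editorial note: a minimal server-side sketch of the SPI call sequence Mike lists, as an alternative to reconnecting through libpq; the table, column, and function names are hypothetical:

    #include "postgres.h"
    #include "executor/spi.h"

    char *
    get_first_name(void)
    {
        char   *result = NULL;

        if (SPI_connect() != SPI_OK_CONNECT)
            elog(ERROR, "get_first_name: SPI_connect failed");

        if (SPI_exec("SELECT name FROM mytable", 1) == SPI_OK_SELECT &&
            SPI_processed > 0)
            /* copy the value out before SPI_finish releases SPI memory */
            result = pstrdup(SPI_getvalue(SPI_tuptable->vals[0],
                                          SPI_tuptable->tupdesc, 1));

        SPI_finish();
        return result;
    }

The query runs inside the calling backend's own transaction, so none of the extra-connection (and potential deadlock) issues of the libpq approach arise.]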
{
"msg_contents": "On Tue, Feb 15, 2000 at 03:14:57PM +0100, Sevo Stille wrote:\n> Actually, it is very suggestive of quoting or escaping errors.\n> Presumably your statement terminates somewhere where you would not\n> expect it. \n\nOkay. But why does it work when run from outside the backend?\n\n> If the system knows how to find it, absolutely. That is, whatever you\n> depend on will have to be in a system or pgsql library directory. \n\nI made my shared lib not depend on any other lib, but nothing changes.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Tue, 15 Feb 2000 20:21:21 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] function question yet again"
},
{
"msg_contents": "Mike Mascari <[email protected]> writes:\n> Wow. I'm not quite sure why it shouldn't work, but I've never\n> reconnected on the server side through libpq. Instead, I've\n> always used the SPI interface sequence of:\n> SPI_connect()\n> SPI_exec()\n> SPI_getvalue()\n> SPI_finish()\n\nSPI is the recommended interface for server-side addon code, I think.\n\n> I think I've tried in the past to reconnect on the server side\n> through libpq but it always resulted in a core dump of the\n> running backend.\n\nBear in mind that libpq is not present in the backend. If you load\na library containing your code + libpq and then try to do something\nvia libpq, what will happen is that libpq will contact the postmaster,\nfire up a new backend, and send all your queries to that other backend.\nProbably not quite what you had in mind, and I could imagine it leading\nto deadlock problems against your own backend. (But I don't see why it\nwould cause the particular error Michael is complaining of; that still\nlooks like it might be a newline-versus-carriage-return kind of bug.)\n\nI believe that long ago, there was code in the backend that presented\na libpq-equivalent interface for queries originating from loaded\nlibraries, but that facility hasn't been maintained and probably\ndoesn't work at all anymore.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 15 Feb 2000 16:29:25 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] function question yet again "
},
{
"msg_contents": "On Tue, Feb 15, 2000 at 04:29:25PM -0500, Tom Lane wrote:\n> SPI is the recommended interface for server-side addon code, I think.\n\nOkay, I see. That means my try to use ECPG to create a function is not\nsupposed to work. Gheez, I would have liked it.\n\n> Bear in mind that libpq is not present in the backend. If you load\n> a library containing your code + libpq and then try to do something\n> via libpq, what will happen is that libpq will contact the postmaster,\n> fire up a new backend, and send all your queries to that other backend.\n> Probably not quite what you had in mind, and I could imagine it leading\n> to deadlock problems against your own backend. (But I don't see why it\n> would cause the particular error Michael is complaining of; that still\n> looks like it might be a newline-versus-carriage-return kind of bug.)\n\nRight. Since the function does only a select and noone else is working on\nthat database it shouldn't deadlock either.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Wed, 16 Feb 2000 08:14:22 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] function question yet again"
},
{
"msg_contents": "Michael Meskes wrote:\n> \n> On Tue, Feb 15, 2000 at 04:29:25PM -0500, Tom Lane wrote:\n> > SPI is the recommended interface for server-side addon code, I think.\n> \n> Okay, I see. That means my try to use ECPG to create a function is not\n> supposed to work. Gheez, I would have liked it.\n\nIt should be possible to create a SPI based libecpg.so! Though I don't\nknow if it is worth the effort (besides getting portable server code\n(WOW)!) Hmmm. I come to like this idea.\n\nI don't have any ideas on the deadlock, though.\n\nChristof\n\n",
"msg_date": "Thu, 17 Feb 2000 05:08:08 +0100",
"msg_from": "Christof Petig <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] function question yet again"
},
{
"msg_contents": "On Thu, Feb 17, 2000 at 05:08:08AM +0100, Christof Petig wrote:\n> It should be possible to create a SPI based libecpg.so! Though I don't\n> know if it is worth the effort (besides getting portable server code\n> (WOW)!) Hmmm. I come to like this idea.\n\nI really do like this idea to. But it is not realistic to get this one\nfinished before 7.0 though.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Thu, 17 Feb 2000 20:44:05 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] function question yet again"
}
] |
[
{
"msg_contents": "We have to make sure this time all parser changes make it into ecpg's parser\nas well. Not like 6.5.3 where the backend accepted queries that ecpg didn't.\n\nRight now I'm very busy though, so please bear with me if I need a little\nmore time.\n\nI will try to keep as much up-to-date as possible.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Tue, 15 Feb 2000 10:53:34 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "parser changes"
},
{
"msg_contents": "> ... this time all parser changes make it into ecpg's parser\n\nDo you have a pretty good way to track changes in gram.y? Let me know\nif you want some help (though I won't be able to for a week or so).\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 16 Feb 2000 04:39:54 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] parser changes"
},
{
"msg_contents": "> > ... this time all parser changes make it into ecpg's parser\n> \n> Do you have a pretty good way to track changes in gram.y? Let me know\n> if you want some help (though I won't be able to for a week or so).\n\nI told him to keep a copy of the gram.y he uses, and merge changes from\nthe current version against the copy he has that matched the current\necpg.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 16 Feb 2000 00:48:18 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] parser changes"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>>>> ... this time all parser changes make it into ecpg's parser\n>> \n>> Do you have a pretty good way to track changes in gram.y? Let me know\n>> if you want some help (though I won't be able to for a week or so).\n\n> I told him to keep a copy of the gram.y he uses, and merge changes from\n> the current version against the copy he has that matched the current\n> ecpg.\n\nIt seems to me that this whole business of tracking a hand-maintained\nmodified copy of gram.y is wrong. There ought to be a way for ecpg to\njust incorporate the backend grammar by reference, plus a few rules\non top for ecpg-specific constructs.\n\nI will freely admit that I have no idea what's standing in the way of\nthat ... but it seems like we ought to try to work towards the goal\nof not having a synchronization problem in the first place, rather\nthan spending effort on minimizing the synchronization error. Perhaps\nthat means tweaking or subdividing the backend grammar, but if so it'd\nbe effort well spent.\n\nIt's probably too late to do anything in this line for 7.0, but\nI suggest we think about it for future releases.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 16 Feb 2000 01:26:22 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] parser changes "
},
{
"msg_contents": "On Wed, Feb 16, 2000 at 01:26:22AM -0500, Tom Lane wrote:\n> >> Do you have a pretty good way to track changes in gram.y? Let me know\n> >> if you want some help (though I won't be able to for a week or so).\n\nRight now I'm up-to-date. But I have yet to finish my own todo for 7.0.\n\n> > I told him to keep a copy of the gram.y he uses, and merge changes from\n> > the current version against the copy he has that matched the current\n> > ecpg.\n\nThat's exactly how I do it. I run diff from time to tim and add the changes\nto my version by hand.\n\n> It seems to me that this whole business of tracking a hand-maintained\n> modified copy of gram.y is wrong. There ought to be a way for ecpg to\n> just incorporate the backend grammar by reference, plus a few rules\n> on top for ecpg-specific constructs.\n\nI would love this. But frankly I don't see how we can accomblish this. After\nall ECPG has to print out the statment word by word while the backend puts\nit into internal structure.\n\n> It's probably too late to do anything in this line for 7.0, but\n> I suggest we think about it for future releases.\n\nAny ideas anyone?\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Wed, 16 Feb 2000 08:12:04 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] parser changes"
},
{
"msg_contents": "Michael Meskes <[email protected]> writes:\n>> It seems to me that this whole business of tracking a hand-maintained\n>> modified copy of gram.y is wrong. There ought to be a way for ecpg to\n>> just incorporate the backend grammar by reference, plus a few rules\n>> on top for ecpg-specific constructs.\n\n> I would love this. But frankly I don't see how we can accomblish this. After\n> all ECPG has to print out the statment word by word while the backend puts\n> it into internal structure.\n> Any ideas anyone?\n\nAh, your point is that the actions have to be different even if the\nyacc grammar is the same. Hmm. A few ideas off the top of the head:\n\n1. Create a tool that strips the backend's actions out of gram.y\nand inserts ecpg's actions to produce a gram.y file for ecpg, all\nautomatically. This could probably be done with a perl script,\nalthough it might require tweaking gram.y to have a more uniform\nlayout convention for its actions. (You'd also need to figure out\nhow to identify which ecpg action code snippet to insert for each\nrule, which is not so easy.)\n\n2. Revise gram.y so that all it does is call functions that are\ndefined in another file; then ecpg and backend use the same gram.y,\nbut link it to different sets of action subroutines.\n\n3. Use the backend's gram.y as it stands, and rewrite ecpg to\nreverse-list the statements from the parsetree constructed by the\ngrammar. (You could steal most of the logic from ruleutils.c.)\n\nAside from the work involved, the major problem with any of these\napproaches is that practically any change in or around the backend's\ngram.y would instantly break ecpg; backend and ecpg source would\nhave to be maintained in strict synchrony or the system wouldn't\neven compile. Perhaps that would be good discipline ;-) but I doubt\nthere will be much enthusiasm for it among the backend developers.\nThe current way at least allows ecpg development to proceed at its\nown schedule.\n\nStill, it seems like you might want to think about building some\nkind of tool to help you with keeping the files in sync. For example,\nit'd probably be easier to diff ecpg and backend grammar files if\nyou made a script that just strips out the action parts.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 16 Feb 2000 10:07:06 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] parser changes "
},
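[Editorial note: a rough sketch -- not code from the thread -- of the stripping tool Tom suggests: a filter that drops brace-delimited action blocks so two grammar files can be diffed on their rules alone. It deliberately ignores braces inside strings and comments, so it is only a starting point:

    #include <stdio.h>

    int
    main(void)
    {
        int     c;
        int     depth = 0;      /* nesting level of { } action blocks */

        while ((c = getchar()) != EOF)
        {
            if (c == '{')
                depth++;        /* entering an action: stop echoing */
            else if (c == '}' && depth > 0)
                depth--;        /* leaving an action */
            else if (depth == 0)
                putchar(c);     /* grammar text outside any action */
        }
        return 0;
    }

Run over both gram.y files, the outputs would diff on rule changes alone.]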
{
"msg_contents": "> Still, it seems like you might want to think about building some\n> kind of tool to help you with keeping the files in sync. For example,\n> it'd probably be easier to diff ecpg and backend grammar files if\n> you made a script that just strips out the action parts.\n\nLet me know if you need help with that.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 16 Feb 2000 11:03:29 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] parser changes"
},
{
"msg_contents": "On Wed, Feb 16, 2000 at 10:07:06AM -0500, Tom Lane wrote:\n> ...\n> Aside from the work involved, the major problem with any of these\n> approaches is that practically any change in or around the backend's\n> gram.y would instantly break ecpg; backend and ecpg source would\n\nThat means everyone who changes gram.y nowadays would then have to change\nthe corresponding ecpg function as well. Nice idea. :-)\n\n> there will be much enthusiasm for it among the backend developers.\n\nI'm afraid you're right on this one.\n\n> Still, it seems like you might want to think about building some\n> kind of tool to help you with keeping the files in sync. For example,\n> it'd probably be easier to diff ecpg and backend grammar files if\n> you made a script that just strips out the action parts.\n\nIt's not that difficult to read and apply a context diff by hand. After all\nthe changes are mostly moderately.\n\nmichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Wed, 16 Feb 2000 17:34:27 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] parser changes"
}
] |
[
{
"msg_contents": "Hi,\n\njust tried latest CVS and got the problem after\ncompiling,installing.\nzen:~$ psql -l\nERROR: copyObject: don't know how to copy 1381319466\n\nDo I need initdb ? postamster started normally\n\n\tRegards,\n\n\t\tOleg\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Tue, 15 Feb 2000 14:04:11 +0300 (GMT)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "ERROR: copyObject: don't know how to copy 1381319466"
},
{
"msg_contents": "Oleg Bartunov <[email protected]> writes:\n> ERROR: copyObject: don't know how to copy 1381319466\n\n> Do I need initdb ? postamster started normally\n\nProbably --- I recall Thomas muttering yesterday that he needed an\ninitdb himself. FWIW, I got fairly clean regress results from a\nCVS pull of about 1AM (6AM GMT) this morning ... but I did initdb.\n\nNo catversion.h update to force initdb though. Naughty naughty,\nThomas...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 15 Feb 2000 10:58:02 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ERROR: copyObject: don't know how to copy 1381319466 "
},
{
"msg_contents": "> > Do I need initdb ? postamster started normally\n> Probably --- I recall Thomas muttering yesterday that he needed an\n> initdb himself. FWIW, I got fairly clean regress results from a\n> CVS pull of about 1AM (6AM GMT) this morning ... but I did initdb.\n\nSorry, yes, and I should have announce it.\n\n> No catversion.h update to force initdb though. Naughty naughty,\n> Thomas...\n\nOops. And I should have known, having been stopped dead in the water\nonce or twice in the last few weeks resyncing from CVS and finding\nthat my work in progress required an initdb due to other changes. Oh,\nand finding that my parser was so broken that initdb wouldn't run. Fun\nfun fun ;)\n\nAnyway, istm to be a mixed blessing during pre-beta, but I didn't\nintentionally subvert it...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 16 Feb 2000 03:09:27 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ERROR: copyObject: don't know how to copy 1381319466"
}
] |
[
{
"msg_contents": "Hi,\n\nFor PostgreSQL We tend to use the Phrase\n\"Most Advanced Open Source RDBMS\" alot.\n\nWill this statement still hold true when/if\nInprise becomes open source ?\n\nJeff\n\n======================================================\nJeff MacDonald\n\[email protected]\tirc: bignose on EFnet\n======================================================\n\n",
"msg_date": "Tue, 15 Feb 2000 10:42:10 -0400 (AST)",
"msg_from": "\"Jeff MacDonald <[email protected]>\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Most Advanced"
},
{
"msg_contents": "\"Jeff MacDonald \" wrote:\n> \n> Hi,\n> \n> For PostgreSQL We tend to use the Phrase\n> \"Most Advanced Open Source RDBMS\" alot.\n> \n> Will this statement still hold true when/if\n> Inprise becomes open source ?\n\nNow that's a good question. If we rephrase our tag to:\n\"Most Advanced Open Source ORDBMS\" it's not a problem.\n\nHowever, it really depends upon what apects in which we consider\nourselves to be the \"Most Advanced\" -- what is \"Advanced\" in that\ncontext? Are we advanced in terms of features, or are we advanced in\nterms of our development process? I believe we are advanced in both\nregards -- and we're certainly the most advanced when it comes to\nmaturity of development in an open source fashion -- advanced in age in\nthat context.\n\nUntil InterBase is released open source, it remains to be seen how\nadvanced of an open source database it will be.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 15 Feb 2000 10:50:17 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Most Advanced"
},
{
"msg_contents": "> Hi,\n> \n> For PostgreSQL We tend to use the Phrase\n> \"Most Advanced Open Source RDBMS\" alot.\n> \n> Will this statement still hold true when/if\n> Inprise becomes open source ?\n> \n\nThat is my statement originally because we weren't getting good press. \nYes, I don't think Inprise will match us at all. Our Object-Relational\nfeatures will keep s as most advanced for the foreseeable future.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 15 Feb 2000 10:50:59 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Most Advanced"
},
{
"msg_contents": "> Now that's a good question. If we rephrase our tag to:\n> \"Most Advanced Open Source ORDBMS\" it's not a problem.\n> \n> However, it really depends upon what apects in which we consider\n> ourselves to be the \"Most Advanced\" -- what is \"Advanced\" in that\n> context? Are we advanced in terms of features, or are we advanced in\n> terms of our development process? I believe we are advanced in both\n> regards -- and we're certainly the most advanced when it comes to\n> maturity of development in an open source fashion -- advanced in age in\n> that context.\n> \n> Until InterBase is released open source, it remains to be seen how\n> advanced of an open source database it will be.\n\nIs Interbase any good? I never heard of them much. Sounds like it is a\nPC database like dbase, right? They don't scale very well.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 15 Feb 2000 11:01:36 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Most Advanced"
},
{
"msg_contents": "Bruce Momjian wrote:\n> > Until InterBase is released open source, it remains to be seen how\n> > advanced of an open source database it will be.\n \n> Is Interbase any good? I never heard of them much. Sounds like it is a\n> PC database like dbase, right? They don't scale very well.\n\nWell, rummaging around the interbase.com website, I found an\nintroductory whitepaper that lists their features. Check\nhttp://www.interbase.com/downloads/what_is_ib.pdf for more info.\n\nIt seems to be an interesting system. To summarize its features (going\nquickly against the PDF referenced above):\nClient-server architecture;\nSQL parser in server;\nServer side triggers;\nStored Procedures;\nUser-defined functions;\nEvent alerters (that notify clients of database changes);\nDeclarative Referential Integrity with cascading operations;\nDomains and contstraints extend SQL types;\nAutomatic two-phase commit to stabilize distributed mulit-database\ntransactions;\nCross-platform scalability and interoperability;\nSmall footprint (3MB disk for minimum, ~20MB for full install)\nUp to 150 concurrent clients;\nY2K correct;\nImplements entry level SQL-92, plus many intermediate level features and\nselected features from the full level;\nInterBase Corp has voting member status in the ANSI SQL standards\ncommittee, X3H2;\nSQL Roles for group-level security;\nSQL-92 syntax for inner and outer JOIN clauses;\nViews on tables and joins;\nSelect procedures (that return not a value, but a result set);\nFull transactional operation;\nMultiGenerational Architecture (basically the same as our MVCC);\nRow-level locking;\nMultiple concurrent transactions on a per-client basis -- each client\ncan have multiple concurrent transactions;\nDistributed transactions -- a single transaction can be open against\nmultiple databases, with a two-phase commit;\nBLOBs;\nArrays (implemented as structured BLOBs);\nBLOB filter functions (such as a JPEG to PNG translator);\nCost-analysis query optimization;\nOn Unix systems, the InterBase security can be integrated with OS\nsecurity;\nInternationalization support, including UNICODE;\nIntegration with Borland JBuilder;\nODBC client;\nAutomatic garbage collection -- no vacuum;\nNo preallocation of disk space required -- files up to 4GB in size, with\nexpansion through the use of secondary files (similar to our\nsegmentation);\nFull ACID compliance.\n\nThat's the short version.\n\nI don't see stuff like:\nAbility to use Tcl and Perl in stored procedural functions;\nObject Relational in nature;\nEndlessly extensible for types, languages, functions, etc. (I\nespecially like that one).\n\nAnd other features of PostgreSQL that we know and love. Nor do I see as\nmany supported architectures.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 15 Feb 2000 11:48:06 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Most Advanced"
},
{
"msg_contents": "On Tue, 15 Feb 2000, Bruce Momjian wrote:\n\n> Is Interbase any good? I never heard of them much. Sounds like it is a\n> PC database like dbase, right? They don't scale very well.\n\n\"InterBase is an enterprise-level database system used by firms and\nagencies like Nokia, MCI, Northern Telecom, NASA, the US Army, and Boeing.\nDespite its speed and capabilities, it has been a well-kept secret and\nholds an insignificant share of the relational database market. Growth of\nthat market has slowed, reducing chance for InterBase to increase its\nshare as a proprietary product.\"\n\nAlthough they do stand in the tradition of dBase and Paradox, they do\nappear to be a serious product. This URL might give you an idea what kind\nof SQL (and beyond) they support.\n\n<http://www.interbase.com/products/dsqlsyntax.html>\n\nAlso, they seem to have an edge (against us) in the areas of logging\n(journaling) and distributed stuff. Their client languages seem to\nconcentrate around Delphi, C++, and Java.\n\nI am just wondering how/whether they will be able to enlist any outside\ndevelopers in significant masses. (see also Mozilla)\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Tue, 15 Feb 2000 18:36:20 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Most Advanced"
},
{
"msg_contents": "Lamar Owen wrote:\n> \n> Bruce Momjian wrote:\n> > > Until InterBase is released open source, it remains to be seen how\n> > > advanced of an open source database it will be.\n> \n> > Is Interbase any good? I never heard of them much. Sounds like it is a\n> > PC database like dbase, right? They don't scale very well.\n\nIIRC it was missing shared cache between backends. \n \n> It seems to be an interesting system. To summarize its features (going\n> quickly against the PDF referenced above):\n\nI try putting a + or - based on weather PG has it (please correct me)\n\n+> Client-server architecture;\n+> SQL parser in server;\n+> Server side triggers;\n+> Stored Procedures;\n+> User-defined functions;\n+> Event alerters (that notify clients of database changes);\n\nActually or LISTEN/NOTIFY would could use some improvement, it would be \nmuch more powerful if it allowed even a single argument to be passed.\n\n+> Declarative Referential Integrity with cascading operations;\n\nWill be in 7.0 ?\n\n-> Domains and contstraints extend SQL types;\n?> Automatic two-phase commit to stabilize distributed mulit-database\n transactions;\n+> Cross-platform scalability and interoperability;\n+> Small footprint (3MB disk for minimum, ~20MB for full install)\n\nPG should be about the same size\n\n+> Up to 150 concurrent clients;\n\nWhat is the upper limit for PG ?\n\n+> Y2K correct;\n+> Implements entry level SQL-92, plus many intermediate level features and\n selected features from the full level;\n-> InterBase Corp has voting member status in the ANSI SQL standards\n committee, X3H2;\n\nIs this the bunch of guys we often fondly remember for their SQL3 standard ?\n\n-> SQL Roles for group-level security;\n\n+> SQL-92 syntax for inner and outer JOIN clauses;\n\nWill be in 7.0 ?\n\n+> Views on tables and joins;\n-> Select procedures (that return not a value, but a result set);\n\nThis requires a rewrite of the pl function API\n\n+> Full transactional operation;\n+> MultiGenerational Architecture (basically the same as our MVCC);\n?> Row-level locking;\n\nHow are we doing here ?\n\n?> Multiple concurrent transactions on a per-client basis -- each client\n can have multiple concurrent transactions;\n\nIf client==connection, then we don't have it, if we opened a connection per\ntrx we do\n\n-> Distributed transactions -- a single transaction can be open against\n multiple databases, with a two-phase commit;\n\nSupport for multi-db is generally weak in PG. 
A single connction can work only \nwith one db at a time\n\n+> BLOBs;\n\nWe have LOs, but the implementation is nut usable for more than a few on \nmost UNIX filesystems (we have one LO per file, all in the same directory with \neverything else)\n\n+> Arrays (implemented as structured BLOBs);\n\nBut nut implemented as structured BLOBS ;)\n\n+> BLOB filter functions (such as a JPEG to PNG translator);\n\nCould be done easily, but not included in distribution at least.\n\n+> Cost-analysis query optimization;\n+> On Unix systems, the InterBase security can be integrated with OS\n security;\n+> Internationalization support, including UNICODE;\n\nDo we have UNICODE (or just several other MB charsets)?\n\n-> Integration with Borland JBuilder;\n\nNo intgration but can be used from it\n\n+> ODBC client;\n-> Automatic garbage collection -- no vacuum;\n\nImplementing it to be _fully_ automatic would make it very hard to \nre-introduce time travel.\n\nWe have it semi-automatic using psql -c \"vacuum;\" in cron ;-p\n\n+> No preallocation of disk space required -- files up to 4GB in size, with\n expansion through the use of secondary files (similar to our\n segmentation);\n+> Full ACID compliance.\n> \n> That's the short version.\n> \n> I don't see stuff like:\n+> Ability to use Tcl and Perl in stored procedural functions;\n-> Object Relational in nature;\n\nMaybe we should rephrase it to \"Object Relational by ancestry\".\nWe have very few OR features currently in working order, and \nprobably won't before 7.1. Chris Bitmead has patches for making \ninherited tables work right (for SELECT,DELETE,UPDATE), but \nthey won't probably be included in 7.0 as they change the behaviour \nof the only statment (SELECT) that is currently working to some extent\nand some of the core developers seem to be dependent on the old \nbehaviour, i.e using inheritance as a shortcut for including the \nsame set of columns in an unrelated table. \nAlso people were set back by SQL3 standard which pg should (?) follow \nto some extent at least, but which is incomprehensible when read \ndirectly and which can only be understood through the works \nof apostles ;)\n\n+> Endlessly extensible for types, languages, functions, etc. (I\n especially like that one).\n\nOtoh, they have implemented DOMAINS, which allow much of simpler types to \nbe done at SQL level. They won't probably be indexable.\n\n-------------------\nHannu\n",
"msg_date": "Tue, 15 Feb 2000 20:04:55 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Most Advanced"
},
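[Editorial note: to make Hannu's LISTEN/NOTIFY point concrete, a minimal sketch of the existing facility; the event name is hypothetical:

    -- session 1:
    LISTEN parts_changed;

    -- session 2, e.g. from a trigger or application:
    NOTIFY parts_changed;

    -- session 1 no longer wants the events:
    UNLISTEN parts_changed;

Session 1 learns only that 'parts_changed' fired -- no value can ride along with the event, which is why even a single argument would make it much more powerful.]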
{
"msg_contents": "> On Tue, 15 Feb 2000, Bruce Momjian wrote:\n> \n> > Is Interbase any good? I never heard of them much. Sounds like it is a\n> > PC database like dbase, right? They don't scale very well.\n> \n> \"InterBase is an enterprise-level database system used by firms and\n> agencies like Nokia, MCI, Northern Telecom, NASA, the US Army, and Boeing.\n> Despite its speed and capabilities, it has been a well-kept secret and\n> holds an insignificant share of the relational database market. Growth of\n> that market has slowed, reducing chance for InterBase to increase its\n> share as a proprietary product.\"\n> \n> Although they do stand in the tradition of dBase and Paradox, they do\n> appear to be a serious product. This URL might give you an idea what kind\n> of SQL (and beyond) they support.\n> \n> <http://www.interbase.com/products/dsqlsyntax.html>\n> \n> Also, they seem to have an edge (against us) in the areas of logging\n> (journaling) and distributed stuff. Their client languages seem to\n> concentrate around Delphi, C++, and Java.\n> \n> I am just wondering how/whether they will be able to enlist any outside\n> developers in significant masses. (see also Mozilla)\n\nYes, that is a key question. I know Solaris got criticized about over\ntheir new \"Solaris\" open-source license, and I heard the\nMonzilla/Netscape code was so ugly that the open source effort is going\nvery slowly.\n\nHonestly, a big part of success is atmosphere and code structure. \nWithout both of those, it is pretty slow going. It is hard to imagine\nhow anyone would _new_ would start working on Interbase rather than\nPostgreSQL because of our good reputation.\n\nAlso, Corel bough Inprise/Borland, so there is no way of knowing what\nwill happen with Interbase now. I just heard that the new WordPerfect\nOffice will run under WINE(yuck) rather than native Linux/Unix. That is\nquite bad and a clear failure for Corel IMHO. Corel doesn't seem to be\nvery good at enlisting assistance in open source projects. The\ndeveloped their _own_ version of WINE to get Word Perfect Office out the\ndoor, and they say they will merge their changes in \"later\" to main WINE\ndistribution. Their WINE source is accessible, though.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 15 Feb 2000 15:18:14 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Most Advanced"
},
{
"msg_contents": "On Tue, 15 Feb 2000, Bruce Momjian wrote:\n\n> Also, Corel bough Inprise/Borland, so there is no way of knowing what\n> will happen with Interbase now. I just heard that the new WordPerfect\n> Office will run under WINE(yuck) rather than native Linux/Unix. That\n> is quite bad and a clear failure for Corel IMHO. Corel doesn't seem\n> to be very good at enlisting assistance in open source projects. The\n> developed their _own_ version of WINE to get Word Perfect Office out\n> the door, and they say they will merge their changes in \"later\" to\n> main WINE distribution. Their WINE source is accessible, though.\n\nActually, I think you might be looking at this wrong ... figure that Corel\nis putting resources into making WINE a viable \"engine\" to running\nMicro$loth applications ... WINE is open source. \n\nNow, which is better/easier? Re-code Wordperfect Office (one app) to run\nnatively, or improve an open source application so that Linux/Unix can run\nany existing Micro$loth product? Which is cheaper in the long run?\n\nIf they were starting WordPerfect from scratch, okay ... but how many\nhundreds of thousands of lines of Windoze specific code is in\nWP-Office? :)\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 15 Feb 2000 17:12:17 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Most Advanced"
},
{
"msg_contents": "> Actually, I think you might be looking at this wrong ... figure that Corel\n> is putting resources into making WINE a viable \"engine\" to running\n> Micro$loth applications ... WINE is open source. \n> \n> Now, which is better/easier? Re-code Wordperfect Office (one app) to run\n> natively, or improve an open source application so that Linux/Unix can run\n> any existing Micro$loth product? Which is cheaper in the long run?\n> \n> If they were starting WordPerfect from scratch, okay ... but how many\n> hundreds of thousands of lines of Windoze specific code is in\n> WP-Office? :)\n\nI have never been very confident about emulation of any form. Also, if\nCorel has Inprise and Corel Linux, you would think it would be worth\nmaking the port to a _real_ operating system.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 15 Feb 2000 16:18:31 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Most Advanced"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > Actually, I think you might be looking at this wrong ... figure that Corel\n> > is putting resources into making WINE a viable \"engine\" to running\n> > Micro$loth applications ... WINE is open source.\n> >\n> > Now, which is better/easier? Re-code Wordperfect Office (one app) to run\n> > natively, or improve an open source application so that Linux/Unix can run\n> > any existing Micro$loth product? Which is cheaper in the long run?\n> >\n> > If they were starting WordPerfect from scratch, okay ... but how many\n> > hundreds of thousands of lines of Windoze specific code is in\n> > WP-Office? :)\n> \n> I have never been very confident about emulation of any form. Also, if\n> Corel has Inprise and Corel Linux, you would think it would be worth\n> making the port to a _real_ operating system.\n\nAFAIIC they are just using WINE as their cross-platform toolkit, same as\nMozilla \ndoes with theirs XPCOM (or whatever it's called;).\nI don't see any of the GUI toolkits as basically better than others (though I \nhave used lately wxPython).\nAnd cross-platform is important nowadays - even in my small company there are \ndevelopers using Linux,BeOS,Win32 and Macs and it is counterproductive if we \nhave to use some important tool that isnt available for some platform or can't \nbe run remotely (over http/html or X11)\n\n---------------\nHannu\n",
"msg_date": "Wed, 16 Feb 2000 00:04:54 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Most Advanced"
},
{
"msg_contents": "* Jeff MacDonald <[email protected]> <[email protected]> [000215 07:33] wrote:\n> Hi,\n> \n> For PostgreSQL We tend to use the Phrase\n> \"Most Advanced Open Source RDBMS\" alot.\n> \n> Will this statement still hold true when/if\n> Inprise becomes open source ?\n\nDepending on the license it could become \n\"Most Advanced Really Open Source RDBMS\"\n\nAnd who says that it will actually be more advanced?\n\n-Alfred\n",
"msg_date": "Tue, 15 Feb 2000 14:23:53 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Most Advanced"
},
{
"msg_contents": "On Wed, Feb 16, 2000 at 12:04:54AM +0200, Hannu Krosing wrote:\n> Bruce Momjian wrote:\n> > \n> > > Actually, I think you might be looking at this wrong ... figure that Corel\n> > > is putting resources into making WINE a viable \"engine\" to running\n> > > Micro$loth applications ... WINE is open source.\n> > >\n> > > Now, which is better/easier? Re-code Wordperfect Office (one app) to run\n> > > natively, or improve an open source application so that Linux/Unix can run\n> > > any existing Micro$loth product? Which is cheaper in the long run?\n> > >\n> > > If they were starting WordPerfect from scratch, okay ... but how many\n> > > hundreds of thousands of lines of Windoze specific code is in\n> > > WP-Office? :)\n> > \n> > I have never been very confident about emulation of any form. Also, if\n> > Corel has Inprise and Corel Linux, you would think it would be worth\n> > making the port to a _real_ operating system.\n> \n> AFAIIC they are just using WINE as their cross-platform toolkit, same as\n> Mozilla \n> does with theirs XPCOM (or whatever it's called;).\n> I don't see any of the GUI toolkits as basically better than others (though I \n> have used lately wxPython).\n> And cross-platform is important nowadays - even in my small company there are \n> developers using Linux,BeOS,Win32 and Macs and it is counterproductive if we \n> have to use some important tool that isnt available for some platform or can't \n> be run remotely (over http/html or X11)\n\nI've been following WINE development longer than PostgreSQL, so I\nthink I should comment on this. Hannu's exactly right: WINE Is Not\nan Emulator, it's an implementation of the Win32 API on top of Unix/X\n(well, mostly Linux, but they do occasionally get someone testing on\n*BSD). Given that, there _is_ a second component: implementation of\nthe Win32 ABI. This is the wine executable that sometimes gets called\nthe \"emulator\". The API implementation requires recompiling your Win32\ntargeted code - this is a 'winlib' app. The wine executable knows all\nabout loading Windows format binaries, so you can run Windows _binaries_\ndirectly. Such a clean distinction didn't really exist, at first. Since\n_most_ Windows programs that people wanted to run where binary only,\nand the idea that companies might actually recompile their code for the\nLinux market (what market?) was somewhat laughed at, the WINE project\nstarted as an ABI emulation, and only fairly recently has restructured,\nto include both. The eventual goal is the the wine executable will just\nbe another winlib app, just like any other, it just knows how to read\na Windows binary, and do any fix-up needed.\n\nAs to Corel's work with WINE: their developers (and their contractors)\nhave had their own, until quite recently private, tree, but they have\nbeen pushing patches out to the public tree on a regular basis, and\nparticipating in design discussions on the public mailing lists. The\nrecent opening of their tree seems to me to have come about to alleviate\nthe problem of needing to push patches out, as release deadline pressures\nneared. In fact, that was explicitly mentioned by one of their developers,\nwho invited everyone to generate patch sets and submit them to the open\ntree. The use of WINE as opposed to 'native' I read as \"Win32 binary\"\nrather than winlib, ELF binary.\n\nRoss\n\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. 
Main St., Houston, TX 77005\n",
"msg_date": "Tue, 15 Feb 2000 16:51:27 -0600",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Most Advanced"
},
{
"msg_contents": "On Tue, 15 Feb 2000, Bruce Momjian wrote:\n\n> > Actually, I think you might be looking at this wrong ... figure that Corel\n> > is putting resources into making WINE a viable \"engine\" to running\n> > Micro$loth applications ... WINE is open source. \n> > \n> > Now, which is better/easier? Re-code Wordperfect Office (one app) to run\n> > natively, or improve an open source application so that Linux/Unix can run\n> > any existing Micro$loth product? Which is cheaper in the long run?\n> > \n> > If they were starting WordPerfect from scratch, okay ... but how many\n> > hundreds of thousands of lines of Windoze specific code is in\n> > WP-Office? :)\n> \n> I have never been very confident about emulation of any form. Also, if\n> Corel has Inprise and Corel Linux, you would think it would be worth\n> making the port to a _real_ operating system.\n\nI've gotten 4 Windoze users at work converted over to FreeBSD over the\npast month or so, the latest one being as a result of VMWare ... now they\ncan run all their Windoze programs that they require without having to\ndual-boot into Windoze when they need to ...\n\nIf the emulation is done well enough, the end result is a much broader set\nof applications while still running a *good* operating system ... the blue\nscreen of death doesn't require rebooting a whole computer :)\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 15 Feb 2000 18:55:21 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Most Advanced"
},
{
"msg_contents": "> have had their own, until quite recently private, tree, but they have\n> been pushing patches out to the public tree on a regular basis, and\n> participating in design discussions on the public mailing lists. The\n> recent opening of their tree seems to me to have come about to alleviate\n> the problem of needing to push patches out, as release deadline pressures\n> neared. In fact, that was explicitly mentioned by one of their developers,\n> who invited everyone to generate patch sets and submit them to the open\n> tree. The use of WINE as opposed to 'native' I read as \"Win32 binary\"\n> rather than winlib, ELF binary.\n\nQuite interesting. I can see a Win32 compatible library as quite handy.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 15 Feb 2000 18:01:05 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Most Advanced"
},
{
"msg_contents": "Hi Guys,\n\nTim Dyck asked me to forward this on to the list.\n\n*****************************\n\nFYI, the story is available online at:\n\nhttp://www.zdnet.com/pcweek/stories/news/0,4153,2436153,00.html\n\nAs I think I have mentioned, I would like to review PostgreSQL 7.0 when it\ngoes gold, so if someone could let me know when it is available, that\nwould be much appreciated.\n\n(i can handle contacting him when it comes out.)\n\nRegards,\nTim Dyck\nSenior Analyst\nPC Week Labs\n\n\n\n\n",
"msg_date": "Thu, 17 Feb 2000 10:10:35 -0400 (AST)",
"msg_from": "\"Jeff MacDonald <[email protected]>\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "PC Week PostgreSQL benchmark results posted online (fwd)"
},
{
"msg_contents": "> As I think I have mentioned, I would like to review PostgreSQL 7.0 when it\n> goes gold, so if someone could let me know when it is available, that\n> would be much appreciated.\n\nA review of 7.0.1 would be a better choice ;)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 17 Feb 2000 14:59:40 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PC Week PostgreSQL benchmark results posted online\n (fwd)"
},
{
"msg_contents": "> Tim Dyck asked me to forward this on to the list.\n>\n> *****************************\n>\n> FYI, the story is available online at:\n>\n> http://www.zdnet.com/pcweek/stories/news/0,4153,2436153,00.html\n>\n> As I think I have mentioned, I would like to review PostgreSQL 7.0 when it\n> goes gold, so if someone could let me know when it is available, that\n> would be much appreciated.\n\n And that'll be an interesting one. Just looked into the MSSQL\n 7.0 docs today. They don't have referential actions! So\n something like ON UPDATE CASCADE/... must be implemented the\n hard way as triggers. Need to look it up once again tomorrow\n to see if constraints can be deferred or not.\n\n On this detail, we already left at least one (and definitely\n not the smallest) commercial database behind on the way to\n SQL3.\n\n Can someone take a look at Interbase, Oracle and others about\n it?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 17 Feb 2000 23:10:01 +0100 (CET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PC Week PostgreSQL benchmark results posted online\n (fwd)"
},
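[Editorial note: a sketch of the referential actions Jan refers to, in the SQL3-style syntax targeted for 7.0; the tables are hypothetical:

    CREATE TABLE master (
        id int4 PRIMARY KEY
    );

    CREATE TABLE detail (
        master_id int4 REFERENCES master (id)
            ON UPDATE CASCADE
            ON DELETE CASCADE
    );

    -- renumbering a master row now follows through to detail rows:
    UPDATE master SET id = 2 WHERE id = 1;

Such constraints can also be declared DEFERRABLE, which is the second point Jan wants to check against MSSQL.]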
{
"msg_contents": "\n\nJan Wieck wrote:\n\n> > Tim Dyck asked me to forward this on to the list.\n> >\n> > *****************************\n> >\n> > FYI, the story is available online at:\n> >\n> > http://www.zdnet.com/pcweek/stories/news/0,4153,2436153,00.html\n> >\n> > As I think I have mentioned, I would like to review PostgreSQL 7.0 when it\n> > goes gold, so if someone could let me know when it is available, that\n> > would be much appreciated.\n>\n> And that'll be an interesting one. Just looked into the MSSQL\n> 7.0 docs today. They don't have referential actions! So\n> something like ON UPDATE CASCADE/... must be implemented the\n> hard way as triggers. Need to look it up once again tomorrow\n> to see if constraints can be deferred or not.\n>\n> On this detail, we already left at least one (and definitely\n> not the smallest) commercial database behind on the way to\n> SQL3.\n>\n> Can someone take a look at Interbase, Oracle and others about\n> it?\n>\n> Jan\n>\n> --\n>\n\nBorland Interbase 4.0 syntax:\n\ncontraint_def:\n{ PRIMARY KEY | UNIQUE (col [, col...] )\n| FOREIGN KEY (col [, col...]) REFERENCES other_table\n| CHECK (<search_condition>)\n}\n\n\n--\nJose' Soares\nBologna, Italy [email protected]\n\n\n",
"msg_date": "Fri, 18 Feb 2000 14:47:26 +0100",
"msg_from": "Jose Soares <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PC Week PostgreSQL benchmark results posted online\n (fwd)"
}
] |
[
{
"msg_contents": "Gibt es in dieser Newsgroup auch jemanden der Deutsch spricht?????\n\n\n\n",
"msg_date": "Tue, 15 Feb 2000 18:26:20 +0100",
"msg_from": "\"Andreas\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Germany"
}
] |
[
{
"msg_contents": "Monday would be good for me...\n\n[Now to see what else is going to go wrong... :-(]\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n\n-----Original Message-----\nFrom: The Hermit Hacker [mailto:[email protected]]\nSent: Tuesday, February 15, 2000 1:20 AM\nTo: Tom Lane\nCc: Lamar Owen; [email protected]\nSubject: Re: [HACKERS] Release on the 15th? \n\n\nOn Mon, 14 Feb 2000, Tom Lane wrote:\n\n> Lamar Owen <[email protected]> writes:\n> > Are we still 'go' for a beta release the 15th?\n> \n> Um ... I'm not ready ...\n> \n> Couple more days, Marc?\n\nSay Monday?\n\nMarc G. Fournier ICQ#7615664 IRC Nick:\nScrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary:\nscrappy@{freebsd|postgresql}.org \n\n\n************\n",
"msg_date": "Tue, 15 Feb 2000 18:12:42 -0000",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Release on the 15th? "
}
] |
[
{
"msg_contents": "Something someone pointed out to me recently is that when comparing\ndatabases, you have to look at the speed of improvement as well as\ncurrent features. While we are behind some commercial databases, our\nimprovement speed is far better than theirs, so we will overtake them\nsometime in the future.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 15 Feb 2000 16:09:03 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Interbase"
},
{
"msg_contents": "Bruce Momjian wrote:\n> While we are behind some commercial databases, our\n> improvement speed is far better than theirs, so we will overtake them\n> sometime in the future.\n\nAnd this, Bruce, is where the PostgreSQL RDBMS truly is the \"Most\nAdvanced Open Source RDBMS.\"\n\nIn chronology, 6.1.1 wasn't too long ago. In features, 6.1.1 was _eons_\nago. I know -- I used 6.1.1. And 6.2.1. And 6.3.2. Now 6.5.3. Soon\n7.0. (I never did put 6.4.2 in production -- the RPM lag factor meant\nthat 6.5beta was available while the shipping RPM's were still 6.3.2 for\nRedHat users.)\n\nAnd now we have developers that are _very_ familiar with the guts of\nPostgreSQL -- it will take people at least a year or more to get up to\nspeed on Interbase.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 15 Feb 2000 16:37:25 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Interbase"
},
{
"msg_contents": "> In chronology, 6.1.1 wasn't too long ago. In features, 6.1.1 was _eons_\n> ago. I know -- I used 6.1.1. And 6.2.1. And 6.3.2. Now 6.5.3. Soon\n> 7.0. (I never did put 6.4.2 in production -- the RPM lag factor meant\n> that 6.5beta was available while the shipping RPM's were still 6.3.2 for\n> RedHat users.)\n> \n> And now we have developers that are _very_ familiar with the guts of\n> PostgreSQL -- it will take people at least a year or more to get up to\n> speed on Interbase.\n\nI have seen the MySQL code and it is clearly very hard to understand. I\ncan't imagine how many people _don't_ get involved because of that, and\nthe license.\n\nDoes MySQL have the team size we have? I don't see them progressing at\nour speed.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 15 Feb 2000 16:57:21 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Interbase"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n> Does MySQL have the team size we have? I don't see them \n> progressing at our speed.\n\nMySQL is mostly written by one guy who knows it all inside out. That has\nits advantages and disadvantages.\n",
"msg_date": "Wed, 16 Feb 2000 19:58:24 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Interbase"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> \n> > Does MySQL have the team size we have? I don't see them \n> > progressing at our speed.\n> \n> MySQL is mostly written by one guy who knows it all inside out. That has\n> its advantages and disadvantages.\n\nThat's what I thought. That will keep us ahead of them.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 16 Feb 2000 04:11:18 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Interbase"
}
] |
[
{
"msg_contents": "Anyone see this little news item? It showed up in my\npaper copy of InfoWorld today.\n\nhttp://www.infoworld.com/articles/hn/xml/00/02/14/000214hnpatent.xml\n\nIt caught my eye, and I'm forwarding it here, since Informix's Universal\nDB now incorporates the old Illustra code. Since stepping on patents\nhas always been one of the open source software nightmare scenarios,\nit's be nice to know which patents are involved.\n\n(BTW, I was checking out Informix's DataBlade technology: turns out it's\njust like pgsql's user extensible types and functions, with pretty PR\nand training tools - and a little better integration packaging. The basic\nAPI is so similar, I woudn't be suprised if it's a direct descendant)\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n\n\n",
"msg_date": "Tue, 15 Feb 2000 18:39:57 -0600",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "IBM sues Informix over DB patents"
},
{
"msg_contents": "\"Ross J. Reedstrom\" wrote:\n> \n> Anyone see this little news item? It showed up in my\n> paper copy of InfoWorld today.\n> \n> http://www.infoworld.com/articles/hn/xml/00/02/14/000214hnpatent.xml\n> \n> It caught my eye, and I'm forwarding it here, since Informix's Universal\n> DB now incorporates the old Illustra code. Since stepping on patents\n> has always been one of the open source software nightmare scenarios,\n> it's be nice to know which patents are involved.\n> \n> (BTW, I was checking out Informix's DataBlade technology: turns out it's\n> just like pgsql's user extensible types and functions, with pretty PR\n> and training tools - and a little better integration packaging. The basic\n> API is so similar, I woudn't be suprised if it's a direct descendant)\n\nIt very likely is, as Illustra (which introduced the name DataBlades) was \na direct descendant of old Postgres 4.2. \n\nThey moved independantly from postquel to SQL but the engine they \nstarted from was the same.\n\n------------------\nHannu\n",
"msg_date": "Wed, 16 Feb 2000 08:44:15 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] IBM sues Informix over DB patents"
}
] |
[
{
"msg_contents": "Forgive me if you've seen this, but thought it'd be interesting to\nsome given the recent discussion of Interbase...\n\nhttp://www.vnunet.com/News/106540",
"msg_date": "Tue, 15 Feb 2000 22:05:24 -0600",
"msg_from": "Ed Loehr <[email protected]>",
"msg_from_op": true,
"msg_subject": "vnunet.com Inprise/Borland spins off Interbase database as separate\n\tcompany"
},
{
"msg_contents": "\nThis comment caught my eye...\n\n> But another joint founder, Don DePalma, has\n> expressed concern at the open source plan. \n> Because of the complexity of Interbase, he \n> claimed: \"You really don't want non-experts \n> mucking about with it. \n\nGee, looks like Interbase internals are too\nhard for us dumb geeks (because all open-source\nprogrammers are uni-students right?). Better\nstick to postgresql I guess :-)\n",
"msg_date": "Wed, 16 Feb 2000 20:21:05 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] vnunet.com Inprise/Borland spins off Interbase database\n\tas separate company"
}
] |
[
{
"msg_contents": "Hi!\n\n The article\nhttp://www.zdnet.com/enterprise/stories/linux/news/0,6423,2436153,00.html\n mentions Interbase will be released under Mozilla Public License 1.1.\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n",
"msg_date": "Wed, 16 Feb 2000 09:02:51 +0000 (GMT)",
"msg_from": "Oleg Broytmann <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres meets InterBase (ZDNet)"
},
{
"msg_contents": "Oleg Broytmann wrote:\n> \n> Hi!\n> \n> The article\n> http://www.zdnet.com/enterprise/stories/linux/news/0,6423,2436153,00.html\n> mentions Interbase will be released under Mozilla Public License 1.1.\n\nFor another good read, also check\nhttp://www.zdnet.com/enterprise/stories/linux/news/0,6423,2436155,00.html\n\nExcerpting:\n\"However, PC Week Labs cautions that MySQL shouldn't be compared with\nhigher-end databases, such as InterBase or PostgreSQL (see Tech\nAnalysis). It's a fundamentally different product with different design\ngoals.\"\n\nThe author then goes on to compare MySQL to Paradox or FoxPro.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Wed, 16 Feb 2000 10:57:03 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Postgres meets InterBase (ZDNet)"
},
{
"msg_contents": "> Oleg Broytmann wrote:\n> > \n> > Hi!\n> > \n> > The article\n> > http://www.zdnet.com/enterprise/stories/linux/news/0,6423,2436153,00.html\n> > mentions Interbase will be released under Mozilla Public License 1.1.\n> \n> For another good read, also check\n> http://www.zdnet.com/enterprise/stories/linux/news/0,6423,2436155,00.html\n> \n> Excerpting:\n> \"However, PC Week Labs cautions that MySQL shouldn't be compared with\n> higher-end databases, such as InterBase or PostgreSQL (see Tech\n> Analysis). It's a fundamentally different product with different design\n> goals.\"\n> \n> The author then goes on to compare MySQL to Paradox or FoxPro.\n> \n\nWe should get a link to this on our web page.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 16 Feb 2000 11:52:11 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Postgres meets InterBase (ZDNet)"
}
] |
[
{
"msg_contents": "I just looked into the code and found that the file pgsql/common.c includes\ninterfaces/libpq/c.h instead of include/c.h. I changed the CFLAGS setting in\nthe Makefile to append -I$(LIBPQDIR) instead of insert it and it compiles\nfine.\n\nBTW the file common.c includes c.h twice, directly and via common.h.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Wed, 16 Feb 2000 11:11:27 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "psql compile problems"
},
{
"msg_contents": "> I just looked into the code and found that the file pgsql/common.c includes\n> interfaces/libpq/c.h instead of include/c.h. I changed the CFLAGS setting in\n> the Makefile to append -I$(LIBPQDIR) instead of insert it and it compiles\n> fine.\n> \n> BTW the file common.c includes c.h twice, directly and via common.h.\n\nI have cleaned up include file use in psql. It now uses the standard\npostgres.h includes and does not use redundant includes as much.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 16 Feb 2000 08:08:55 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql compile problems"
},
{
"msg_contents": "On Wed, 16 Feb 2000, Bruce Momjian wrote:\n\n> > I just looked into the code and found that the file pgsql/common.c includes\n> > interfaces/libpq/c.h instead of include/c.h. I changed the CFLAGS setting in\n> > the Makefile to append -I$(LIBPQDIR) instead of insert it and it compiles\n> > fine.\n> > \n> > BTW the file common.c includes c.h twice, directly and via common.h.\n> \n> I have cleaned up include file use in psql. It now uses the standard\n> postgres.h includes and does not use redundant includes as much.\n\nActually, the point of including c.h (plus postgres_ext.h) rather than\npostgres.h was to not have access to the backend internal stuff, so as to\nkeep the separation clean. But it doesn't really matter to me. Also, on my\nsystem at least, there were no redundant includes. I actually went through\neach library call and put in exactly the includes the man page mentioned.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Wed, 16 Feb 2000 17:13:44 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql compile problems"
},
{
"msg_contents": "> Actually, the point of including c.h (plus postgres_ext.h) rather than\n> postgres.h was to not have access to the backend internal stuff, so as to\n> keep the separation clean. But it doesn't really matter to me. Also, on my\n> system at least, there were no redundant includes. I actually went through\n> each library call and put in exactly the includes the man page mentioned.\n\nYes, I suspected that was your purpose. I couldn't find any other areas\nwhere c.h was used, so I figured I may as well make it standard, and if\nwe want to make it separate, we would have to do most of bin and\ninterfaces, and that would be a separate job.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 16 Feb 2000 11:39:48 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql compile problems"
}
] |
[
{
"msg_contents": "Why isn't this casted automatically?\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Wed, 16 Feb 2000 11:15:14 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "ERROR: Unable to identify an operator '=' for types 'numeric' and\n\t'float8'"
},
{
"msg_contents": "> Why isn't this casted automatically?\n\nOversight. Will look at it.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 16 Feb 2000 14:14:36 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ERROR: Unable to identify an operator '=' for types\n\t'numeric' and 'float8'"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> Why isn't this casted automatically?\n\n> Oversight. Will look at it.\n\nI believe it's the problem I complained of before: TypeCategory()\ndoesn't think NUMERIC is a numeric type...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 16 Feb 2000 09:30:14 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ERROR: Unable to identify an operator '=' for types\n\t'numeric' and 'float8'"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> One hesitation I have is the performance hit in mixing FLOAT and\n> NUMERIC; I (probably) don't want to make NUMERIC the \"best\" numeric\n> type, since it is potentially so slow.\n\nI concur --- I'd be inclined to leave FLOAT8 as the top of the\nhierarchy. But NUMERIC could be stuck in there between int and float,\nno? (int-vs-numeric ops certainly must be promoted to numeric...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 16 Feb 2000 09:46:08 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ERROR: Unable to identify an operator '=' for types\n\t'numeric' and 'float8'"
},
{
"msg_contents": "> >> Why isn't this casted automatically?\n> > Oversight. Will look at it.\n> I believe it's the problem I complained of before: TypeCategory()\n> doesn't think NUMERIC is a numeric type...\n\nRight. The \"oversight\" is a long standing one, and somewhat\nintentional.\n\nOne hesitation I have is the performance hit in mixing FLOAT and\nNUMERIC; I (probably) don't want to make NUMERIC the \"best\" numeric\ntype, since it is potentially so slow. I'll have to look to see what\nhappens in INT/FLOAT mixed arithmetic and make sure it doesn't end up\ndoing it in NUMERIC instead.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 16 Feb 2000 14:47:01 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ERROR: Unable to identify an operator '=' for types\n\t'numeric' and 'float8'"
},
{
"msg_contents": "> Thomas Lockhart <[email protected]> writes:\n> > One hesitation I have is the performance hit in mixing FLOAT and\n> > NUMERIC; I (probably) don't want to make NUMERIC the \"best\" numeric\n> > type, since it is potentially so slow.\n>\n> I concur --- I'd be inclined to leave FLOAT8 as the top of the\n> hierarchy. But NUMERIC could be stuck in there between int and float,\n> no? (int-vs-numeric ops certainly must be promoted to numeric...)\n\nIf you cast NUMERIC to FLOAT8, then you would loose precision and it would\nbe counterintuitive type promotion (at least for a C programmer). If someone\nwants speed over correctness, he can always explicitly cast NUMERIC to\nFLOAT8. Seems like \"correct\" should take precedence over \"fast\", at least as\nlong as there is a way to do \"fast\".\n\nGene Sokolov.\n\n\n",
"msg_date": "Wed, 16 Feb 2000 19:00:05 +0300",
"msg_from": "\"Gene Sokolov\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ERROR: Unable to identify an operator '=' for types\n\t'numeric' and 'float8'"
},
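Until the promotion question is settled, the comparison from the subject line can be written with an explicit cast in either direction, which is what the error message itself suggests. A sketch, assuming a hypothetical table t with a numeric column val:

    -- hypothetical table, for illustration only
    SELECT * FROM t WHERE val = '1.1';        -- quoted literal resolves against the numeric column
    SELECT * FROM t WHERE val::float8 = 1.1;  -- the "fast" option Gene mentions, trading precision for speed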
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Tom Lane\n>\n> Thomas Lockhart <[email protected]> writes:\n> > One hesitation I have is the performance hit in mixing FLOAT and\n> > NUMERIC; I (probably) don't want to make NUMERIC the \"best\" numeric\n> > type, since it is potentially so slow.\n>\n> I concur --- I'd be inclined to leave FLOAT8 as the top of the\n> hierarchy. But NUMERIC could be stuck in there between int and float,\n> no? (int-vs-numeric ops certainly must be promoted to numeric...)\n>\n\nIs this topic related to the fact that 1.1 is an FLOAT8 constant in\nPostgreSQL ?\nI've not understood at all why it's OK.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Thu, 17 Feb 2000 11:37:23 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] ERROR: Unable to identify an operator '=' for types\n\t'numeric' and 'float8'"
},
{
"msg_contents": "[Charset iso-2022-jp unsupported, skipping...]\n>:-{\n\n> > -----Original Message-----\n> > From: [email protected]\n> > [mailto:[email protected]]On Behalf Of Tom Lane\n> >\n> > Thomas Lockhart <[email protected]> writes:\n> > > One hesitation I have is the performance hit in mixing FLOAT and\n> > > NUMERIC; I (probably) don't want to make NUMERIC the \"best\" numeric\n> > > type, since it is potentially so slow.\n> >\n> > I concur --- I'd be inclined to leave FLOAT8 as the top of the\n> > hierarchy. But NUMERIC could be stuck in there between int and float,\n> > no? (int-vs-numeric ops certainly must be promoted to numeric...)\n> >\n>\n> Is this topic related to the fact that 1.1 is an FLOAT8 constant in\n> PostgreSQL ?\n> I've not understood at all why it's OK.\n\n IMHO a value floating around should be kept NUMERIC or in\n it's string representation until it's finally clear where it\n is dropped (int2/4/8, float4/8, numeric or return to client).\n\n This surely has an impact on performance, but from my PoV\n beeing correct has a higher priority. If you want\n performance, buy well sized hardware depending on application\n and workload. If you want reliability, choose the right\n software.\n\n Don't force it, use a bigger hammer!\n\n\nJan\n\n BTW: I still intend to redo the NUMERIC type somewhere in the\n future. Just haven't found the time though.\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 17 Feb 2000 04:02:14 +0100 (CET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ERROR: Unable to identify an operator '=' for types\n\t'numeric' and 'float8'"
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n>> I concur --- I'd be inclined to leave FLOAT8 as the top of the\n>> hierarchy. But NUMERIC could be stuck in there between int and float,\n>> no? (int-vs-numeric ops certainly must be promoted to numeric...)\n\n> Is this topic related to the fact that 1.1 is an FLOAT8 constant in\n> PostgreSQL ?\n\nNo, not directly. At least I don't think the question of how constants\nare handled forces our decision about which direction the default\npromotion should go.\n\n\n> I've not understood at all why it's OK.\n\nThere's a really, really crude hack in scan.l that prevents a long\nnumeric constant from being converted to FLOAT8. Otherwise we'd lose\nprecision from making the value float8 and later converting it to\nnumeric (after type analysis had discovered the necessity for it to\nbe numeric). I think this is pretty ugly, not to say inconsistent,\nsince the parser's behavior can change depending on how many digits\nyou type:\n\nregression=# select * from num_data where val = 12345678901234.56;\nERROR: Unable to identify an operator '=' for types 'numeric' and 'float8'\n You will have to retype this query using an explicit cast\nregression=# select * from num_data where val = 12345678901234.567;\n id | val\n----+-----\n(0 rows)\n\nThe second case works because it's treated exactly like\n\tselect * from num_data where val = '12345678901234.567';\nand here, the resolution of an UNKNOWN-type string constant saves\nthe day.\n\nI proposed a while back that T_Float tokens ought to carry the value in\nstring form, rather than actually converting it to float, so that we\nbehave consistently while taking no precision risks until the target\ntype is known for certain. Thomas seems not to want to do it that way,\nfor some reason.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 16 Feb 2000 22:23:46 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ERROR: Unable to identify an operator '=' for types\n\t'numeric' and 'float8'"
},
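The behavior Tom demonstrates also points at the reliable workaround for now: quote the constant, regardless of digit count, so the unknown-type string literal is resolved against the numeric column:

    SELECT * FROM num_data WHERE val = '12345678901234.56';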
{
"msg_contents": "> I proposed a while back that T_Float tokens ought to carry the value in\n> string form, rather than actually converting it to float, so that we\n> behave consistently while taking no precision risks until the target\n> type is known for certain. Thomas seems not to want to do it that way,\n> for some reason.\n\nHmm. We should then carry *all* numeric types as strings farther into\nthe backend, probably deeper than gram.y? Some of the input validation\nhappens as early as gram.y now, so I guess we would need to do some\nconversion at that point for some contexts, and leave the numeric\nstuff as a string in other contexts. No fair only doing it for float8;\nint4 has the same trouble.\n\nJust seems like a can of worms, but it is definitely (?) the right\nsolution since at the moment the early interpretation of numerics can\nlead to loss of info or precision deeper in the code.\n\nThis could be a minor-release kind of improvement...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 17 Feb 2000 07:14:26 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ERROR: Unable to identify an operator '=' for types\n\t'numeric' and 'float8'"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> I proposed a while back that T_Float tokens ought to carry the value in\n>> string form, rather than actually converting it to float,\n\n> No fair only doing it for float8; int4 has the same trouble.\n\nAu contraire: int representation has no risk of loss of precision.\nIt does risk overflow, but we can detect that reliably, and in fact\nscan.l already takes care of that scenario.\n\nIf we allow ints to retain their current representation, then the\nmanipulations currently done in gram.y don't need to change. All\nthat's needed is to invoke the proper typinput function after we've\ndecided what type we really want to convert a T_Float to. T_Float\nwould act kind of like UNKNOWN-type string constants, except that\nthe knowledge that the string looks numeric-ish could be used in\ntype selection heuristics.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 17 Feb 2000 02:38:46 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ERROR: Unable to identify an operator '=' for types\n\t'numeric' and 'float8'"
},
{
"msg_contents": "> > No fair only doing it for float8; int4 has the same trouble.\n> Au contraire: int representation has no risk of loss of precision.\n> It does risk overflow, but we can detect that reliably, and in fact\n> scan.l already takes care of that scenario.\n\nRight, but why bother doing it there and then having to propagate the\n\"int4 or string\" code into the backend? Right now, we mark it as an\nstring constant of unknown characteristics if it is too large for an\nint4, but that isn't the right thing for long numerics since we are\nthrowing away valuable info. And using the scan.l heuristic to filter\nout large values for things like OIDs is probably cheating a bit ;)\n\n> If we allow ints to retain their current representation, then the\n> manipulations currently done in gram.y don't need to change. All\n> that's needed is to invoke the proper typinput function after we've\n> decided what type we really want to convert a T_Float to. T_Float\n> would act kind of like UNKNOWN-type string constants, except that\n> the knowledge that the string looks numeric-ish could be used in\n> type selection heuristics.\n\nSo a replacement for T_Float would carry the \"long string with decimal\npoint\" info, and a replacement for T_Integer would carry the \"long\nstring with digits only\" info. And we would continue to use T_Float\nand T_Integer deeper in the backend to carry converted values.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 17 Feb 2000 14:51:52 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ERROR: Unable to identify an operator '=' for types\n\t'numeric' and 'float8'"
},
{
"msg_contents": "On 2000-02-17, Jan Wieck mentioned:\n\n> IMHO a value floating around should be kept NUMERIC or in\n> it's string representation until it's finally clear where it\n> is dropped (int2/4/8, float4/8, numeric or return to client).\n\nActually, the hierarchy float8, float4, numeric, int8, int4, int2 might\njust be right. The standard specifies that float<x> + numeric = float<y>\n(where perhaps x == y, not sure). On the other hand, it is also quite\nclear that a constant of the form 123.45 is a numeric literal.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Sat, 19 Feb 2000 15:12:54 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ERROR: Unable to identify an operator '=' for types\n\t'numeric' and 'float8'"
}
] |
[
{
"msg_contents": "Silly question,\n\nI've got nice ThinkPad 390E with 128Mb RAM and install Linux+Postgres\n6.5.3. Everything work like a charm except strange behaivour when\nnotebook wake up after suspend mode. I noticed\na lot of [postmaster] in ps output. Is it normal ?\nUsually I see like now:\n 4513 ? S 0:00 /usr/local/pgsql/bin/postmaster -i -B 1024 -N 32 -S -\n 4941 ? S 0:00 /usr/local/pgsql/bin/postgres localhost httpd discove\n 4943 ? S 0:00 /usr/local/pgsql/bin/postgres localhost httpd polit_d\n 4944 ? S 0:00 /usr/local/pgsql/bin/postgres localhost httpd voting \n 4945 ? S 0:00 /usr/local/pgsql/bin/postgres localhost httpd discove\n 4946 ? S 0:00 /usr/local/pgsql/bin/postgres localhost httpd polit_d\n 4947 ? S 0:00 /usr/local/pgsql/bin/postgres localhost httpd voting \n 4948 ? S 0:00 /usr/local/pgsql/bin/postgres localhost httpd discove\n 4949 ? S 0:00 /usr/local/pgsql/bin/postgres localhost httpd polit_d\n 4950 ? S 0:00 /usr/local/pgsql/bin/postgres localhost httpd voting \n\nI have apache+modperl running with persistent connection with postgres using\nDBI/Apache::DBI. After wake up, all these stuff seems work ok,\nbut probably [postmaster] processes doesn't works, at least I've seen\nnew postgres processes with much more PID's appearing (Apache::DBI should\ntake care about that). I didn't take much attention yet and just asking\nif there are something special with Postgres interaction with\nLinux running on notebook (apm stuff)\n\n\tRegards,\n\n\t\tOleg\n\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Wed, 16 Feb 2000 13:36:50 +0300 (GMT)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "postgres on notebook"
},
{
"msg_contents": "> I have apache+modperl running with persistent connection with postgres using\n> DBI/Apache::DBI. After wake up, all these stuff seems work ok,\n> but probably [postmaster] processes doesn't works, at least I've seen\n> new postgres processes with much more PID's appearing (Apache::DBI should\n> take care about that). I didn't take much attention yet and just asking\n> if there are something special with Postgres interaction with\n> Linux running on notebook (apm stuff)\n\nThere should be none afaik. But I'll try testing on my laptop sometime\nsoon (not running apache, but I can leave a backend connected to a\npsql session and see that it responds after waking up).\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 16 Feb 2000 14:13:32 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] postgres on notebook"
}
] |
[
{
"msg_contents": "\n\n In TODO is:\n\t\n CACHE:\n * Cache most recent query plan(s) [prepare]\n\n !--> I'm working on this. \n\n\n TODO.detail (Jan's idea): \n\n I can think of the following construct:\n\n PREPARE optimizable-statement;\n\n That one will run parser/rewrite/planner, create a new memory\n context with a unique identifier and saves the querytree's\n and plan's in it. Parameter values are identified by the\n usual $n notation. The command returns the identifier.\n\n EXECUTE QUERY identifier [value [, ...]];\n\n then get's back the prepared plan and querytree by the id,\n creates an executor context with the given values in the\n parameter array and calls ExecutorRun() for them.\n\n .... etc (cut).\n \n\t\t\t\t\t\tKarel\n\n----------------------------------------------------------------------\nKarel Zak <[email protected]> http://home.zf.jcu.cz/~zakkr/\n\nDocs: http://docs.linux.cz (big docs archive)\t\nKim Project: http://home.zf.jcu.cz/~zakkr/kim/ (process manager)\nFTP: ftp://ftp2.zf.jcu.cz/users/zakkr/ (C/ncurses/PgSQL)\n-----------------------------------------------------------------------\n\n",
"msg_date": "Wed, 16 Feb 2000 13:13:19 +0100 (CET)",
"msg_from": "Karel Zak - Zakkr <[email protected]>",
"msg_from_op": true,
"msg_subject": "TODO: Cache most recent query plan"
},
{
"msg_contents": "Karel Zak - Zakkr wrote:\n> \n> In TODO is:\n> \n> CACHE:\n> * Cache most recent query plan(s) [prepare]\n\nI havn't been following what this is about, but\nany implementation of caching query plans should\nbe careful about pg_class.relhasindex and \npg_class.relhassubclass, otherwise reuse of\nquery plans could give incorrect results, _maybe_,\ndepending on what you are planning here.\n\n-- \nChris Bitmead\nmailto:[email protected]\n",
"msg_date": "Thu, 17 Feb 2000 00:33:08 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TODO: Cache most recent query plan"
},
{
"msg_contents": "Chris <[email protected]> writes:\n>> * Cache most recent query plan(s) [prepare]\n\n> I havn't been following what this is about, but\n> any implementation of caching query plans should\n> be careful about pg_class.relhasindex and \n> pg_class.relhassubclass, otherwise reuse of\n> query plans could give incorrect results, _maybe_,\n> depending on what you are planning here.\n\nWell, of course the cached plan would only be good as long as you\nweren't changing the database schema underneath it. I'm not sure\nhow far the system ought to go to prevent the user from continuing\nto use a no-longer-valid plan ... exact detection of trouble seems\nimpractical, but I'm not thrilled with a \"let the programmer beware\"\napproach either.\n\nAlso, assuming that we do have some trouble detection mechanism, should\nwe reject subsequent attempts to use the cached plan, or automatically\nre-do the plan on next use? If we kept around source or querytree form\nof the original query, it ought to be possible to re-make the plan.\nThis would let us adopt a fairly simple trouble-detection mechanism that\nwould err in the direction of re-planning too much; say just replan on\nany relcache flush for the relevant tables & indices. (If we're going\nto raise an error, that test would be much too prone to raise errors\nunnecessarily.)\n\nThis seems closely related to Jan's TODO item about recompiling rules\nwhen the DB schema changes, too.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 16 Feb 2000 10:19:48 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TODO: Cache most recent query plan "
},
{
"msg_contents": "\nOn Wed, 16 Feb 2000, Tom Lane wrote:\n\n> Chris <[email protected]> writes:\n> >> * Cache most recent query plan(s) [prepare]\n> \n> > I havn't been following what this is about, but\n> > any implementation of caching query plans should\n> > be careful about pg_class.relhasindex and \n> > pg_class.relhassubclass, otherwise reuse of\n> > query plans could give incorrect results, _maybe_,\n> > depending on what you are planning here.\n\nNow, I have implemented parser part for PREPARE:\n\n \"PREPARE queryname AS SELECT * FROM aaa WHERE b = $1 WITH TYPE int4\"\n\nthis allow use $1..$n values and set types of these values. (Yes, I not\nsure if all keywords are right, but change it is easy..)\n\nThe PREPARE is CMD_UTILITY and plan for prepared query is create in \ncommand/prepare.c (it is easy and not needs changes in standard \"the \npath of query\". \n\nHmm, how cache it, it is a good question. If I good understand Jan's TODO \nitem, we not have (for PREPARE) plan-cache as across transaction/start-stop\npersisten plans (example cache it to any relation).\n\n> Well, of course the cached plan would only be good as long as you\n> weren't changing the database schema underneath it. I'm not sure\n\n My idea (in current time) is write PREPARE as simple, no-longer-valid, *user \ncontrollable* cache, user problem is if he changes his tables (?). \n\nAnd about plan cache implementation; I want use hash table (hash_create ..etc)\nsystem and as hash key use 'queryname'. I not sure how memory-context\nuse for this cache (or create new portal..?) I see Jan's FK implementation,\nhe uses SPI memory context - it not bad. Comments, ideas?\n\n> how far the system ought to go to prevent the user from continuing\n> to use a no-longer-valid plan ... exact detection of trouble seems\n> impractical, but I'm not thrilled with a \"let the programmer beware\"\n> approach either.\n\n And what if user has PREPAREd any plans and he changes DB schema drop\nall prepared plans. (You change DB schema..well, your caches with PREPAREd\nplans go to .... /dev/null).\n\nOr re-do the plan as you say. \n\n> Also, assuming that we do have some trouble detection mechanism, should\n> we reject subsequent attempts to use the cached plan, or automatically\n> re-do the plan on next use? If we kept around source or querytree form\n> of the original query, it ought to be possible to re-make the plan.\n> This would let us adopt a fairly simple trouble-detection mechanism that\n> would err in the direction of re-planning too much; say just replan on\n> any relcache flush for the relevant tables & indices. (If we're going\n> to raise an error, that test would be much too prone to raise errors\n> unnecessarily.)\n> \n> This seems closely related to Jan's TODO item about recompiling rules\n> when the DB schema changes, too.\n> \n> \t\t\tregards, tom lane\n\n\t\t\t\t\t\t\tKarel\n\n",
"msg_date": "Fri, 18 Feb 2000 11:26:16 +0100 (CET)",
"msg_from": "Karel Zak - Zakkr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] TODO: Cache most recent query plan "
}
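Putting Karel's parser syntax and Jan's EXECUTE idea together, the intended usage would look something like this; a sketch only, since Karel notes the keyword placement is not final and the statement name here is just an example:

    PREPARE getaaa AS SELECT * FROM aaa WHERE b = $1 WITH TYPE int4;
    EXECUTE QUERY getaaa 42;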
] |
[
{
"msg_contents": "Hi,\n\nI just committed a patch Christof Petig send me to add DESCRIPTORS to ecpg.\nPlease test this patch. From the first look it needs some cleanup as does\nthe rest of ecpg. But other than that it seems to work fine. I will try to\nclean up the sources some but wanted to get this one out before we go beta.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Wed, 16 Feb 2000 17:16:34 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "DESCRIPTORS"
}
] |
[
{
"msg_contents": "\n\n>You had inquired earlier about \"when we would support complete SQL92\"\n(give or take a few words). What areas of entry level SQL92 are we\nmissing in your opinion (or should we wait for the article)?\n\nWell, what I look for on the language side is complete SQL-92 entry level\ncompliance, plus common language extensions like outer joins, cast, case,\ncube, rollup, a datetime data type, add table constraint and alter table.\nAlso, I look for a stored procedure language. Basically, parity with the\ncommercial databases. :)\n\nThe key measure I'd look for with SQL compliance is passing the NIST FIPS\n127 SQL92 test. NIST discontinued its testing policy, which was a bad thing\nfor the industry, but the test may still be available from NIST. The spec\nitself still is available for free; I ordered a copy a few weeks ago.\n\n-Tim Dyck\n\n\n",
"msg_date": "Wed, 16 Feb 2000 15:11:16 -0500",
"msg_from": "Timothy Dyck <[email protected]>",
"msg_from_op": true,
"msg_subject": "re: SQL compliance, was Re: [HACKERS] follow-up on PC Week Labs\n\tbenchmark results"
}
] |
[
{
"msg_contents": "\n\n\nFYI, the story is available online at:\n\nhttp://www.zdnet.com/pcweek/stories/news/0,4153,2436153,00.html\n\nAs I think I have mentioned, I would like to review PostgreSQL 7.0 when it\ngoes gold, so if someone could let me know when it is available, that\nwould be much appreciated.\n\nRegards,\nTim Dyck\nSenior Analyst\nPC Week Labs\n\n\n",
"msg_date": "Wed, 16 Feb 2000 15:25:15 -0500",
"msg_from": "Timothy Dyck <[email protected]>",
"msg_from_op": true,
"msg_subject": "PC Week PostgreSQL benchmark results posted online"
}
] |
[
{
"msg_contents": "Doing a Google search for SQL standards, I found a \nwonderful page of links to SQL info, including the BNF \nspecs for SQL92 and SQL3. Not exactly gram.y but hopefully \nclose. \n\nCould be helpful in deciphering what the standards say.\n\nSee: http://www.contrib.andrew.cmu.edu/~shadow/sql.html\n\n---------\nHannu\n",
"msg_date": "Thu, 17 Feb 2000 00:12:42 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": true,
"msg_subject": "FYI: BNF for SQL93 and SQL-3"
},
{
"msg_contents": "> Doing a Google search for SQL standards, I found a \n> wonderful page of links to SQL info, including the BNF \n> specs for SQL92 and SQL3. Not exactly gram.y but hopefully \n> close. \n> \n> Could be helpful in deciphering what the standards say.\n> \n> See: http://www.contrib.andrew.cmu.edu/~shadow/sql.html\n\nThis is terrific. We need this on our web page.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 16 Feb 2000 17:29:33 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] FYI: BNF for SQL93 and SQL-3"
},
{
"msg_contents": "Hannu Krosing wrote:\n> \n> Doing a Google search for SQL standards, I found a\n> wonderful page of links to SQL info, including the BNF\n> specs for SQL92 and SQL3. Not exactly gram.y but hopefully\n> close.\n\nHow official is this? I get the feeling this might\nbe one guy's interpretation because it seems like\nit is missing stuff.\n",
"msg_date": "Thu, 17 Feb 2000 13:34:48 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] FYI: BNF for SQL93 and SQL-3"
},
{
"msg_contents": "Chris Bitmead wrote:\n> \n> Hannu Krosing wrote:\n> >\n> > Doing a Google search for SQL standards, I found a\n> > wonderful page of links to SQL info, including the BNF\n> > specs for SQL92 and SQL3. Not exactly gram.y but hopefully\n> > close.\n> \n> How official is this? I get the feeling this might\n> be one guy's interpretation because it seems like\n> it is missing stuff.\n>\n\nI guess it is lifted from the standard as it was in september 1993.\n\nOTOH, the SQL3 standard is still not final.\n\n-----------\nHannu\n",
"msg_date": "Thu, 17 Feb 2000 11:30:20 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] FYI: BNF for SQL93 and SQL-3"
},
{
"msg_contents": "\n> I guess it is lifted from the standard as it was in september 1993.\n> \n> OTOH, the SQL3 standard is still not final.\n\nIs it progressing, or is it in disrepair?\n",
"msg_date": "Fri, 18 Feb 2000 10:38:42 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] FYI: BNF for SQL93 and SQL-3"
}
] |
[
{
"msg_contents": "I sent this report to Constantin Teodorescu, as author of pgaccess,\nbut it now occurs to me that it is probably something to be handled\nin libpgtcl instead.\n\n---------------------------------------------------------------------\nI have had a bug-report on pgaccess (from PostgreSQL 6.5.3) when used\nwith multi-byte encoding and with Tcl 8.2.\n\nYour website says that Tcl/Tk 8.0 or higher is needed.\nHowever, there are problems with multibyte-encoding with Tcl8.2 and \nI don't see any reference to this in \"What's New\".\n\nAt 8.1, Tcl introduced internationalisation. Everything is reduced\nto Unicode internally. If someone uses pgaccess with (say) KOI-8 as\nthe default encoding for the whole machine, and with a database that\nuses KOI-8 encoding, Tcl translates KOI-8 user input to Unicode before\nusing the data. It then sends the data to the backend in UNICODE, but\ndoes not tell the backend that this is what is happening. \n\nThe user may have PGCLIENTENCODING set to KOI8, or may be depending on\nhis default environment, but Tcl is doing its own thing.\n\nI think that pgaccess needs to translate back to the native encoding\nbefore it sends data to the backend; it also needs to translate data\nfrom the backend to Unicode before using or displaying it.\n---------------------------------------------------------------------\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"But as many as received him, to them gave he power to \n become the sons of God, even to them that believe on \n his name\" John 1:12 \n\n\n",
"msg_date": "Wed, 16 Feb 2000 23:00:42 +0000",
"msg_from": "\"Oliver Elphick\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgaccess and multibyte-enabled libpq"
}
] |
[
{
"msg_contents": "I also found a comparison of current (as of may-June 98 ;) OO support \nfor the big guys\n\nhttp://galaxy.uci.agh.edu.pl/~vahe/orcl_inf.htm\n\nMost links to www.oracle.com at the bottom are to pages saying :\n\"An application error has occured. Please try again.\" ;-p\n\n-------------\nHannu\n",
"msg_date": "Thu, 17 Feb 2000 01:24:24 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": true,
"msg_subject": "FYI: OO features of Oracle 8 and Informix"
}
] |
[
{
"msg_contents": "Can anybody comment on this?\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n---------- Forwarded message ----------\nDate: Wed, 16 Feb 2000 11:16:48 -0600\nFrom: Jude Weaver <[email protected]>\nTo: Peter Eisentraut <[email protected]>\nSubject: Maximum columns for optimum performance\n\nOne of our tables have 476 columns and only 12 records or rows in it. We are\ncoding in Java. When we compile and run against this table it is super slow.\n\nIs there a maximum number of columns past which performance suffers? Would we\nbe better off building several smaller tables with fewer columns instead of\none big table? Does the number of columns in a table affect speed?\n\n\n\n\n",
"msg_date": "Thu, 17 Feb 2000 00:27:26 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Maximum columns for optimum performance (fwd)"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> forwards:\n> One of our tables have 476 columns and only 12 records or rows in it. We are\n> coding in Java. When we compile and run against this table it is super slow.\n> Is there a maximum number of columns past which performance suffers?\n\nWhat exactly are you finding to be super slow? It's hard to tell from\nthis report whether the performance problem is in the backend or the\nJava client interface (or even in your application code...)\n\nWhile I can think of places that have loops over columns, I wouldn't\nhave thought that any of them are remarkably time-critical. It\nprobably depends on just what sort of query you are doing ... so a\nspecific example of a slow query would be helpful, along with the\ndetails of the table declaration.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 16 Feb 2000 20:38:38 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Maximum columns for optimum performance (fwd) "
}
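One way to test whether raw column count is the culprit is the split Jude asks about: partition the wide table vertically and join on a shared key. A hedged sketch with hypothetical names (the real table has 476 columns; two tiny ones stand in for it here):

    -- hypothetical vertical split, for illustration only
    CREATE TABLE wide_a (id int4 PRIMARY KEY, c1 text, c2 text);
    CREATE TABLE wide_b (id int4 PRIMARY KEY, c3 text, c4 text);

    SELECT a.c1, b.c3
      FROM wide_a a, wide_b b
     WHERE a.id = b.id;

Whether this actually helps depends on where the time is going, which is exactly what Tom's questions are meant to pin down.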
] |
[
{
"msg_contents": "Peter E. can you look at this? I see simple prompt doing:\n\n fputs(prompt, stdout);\n\nwhich I think should be stderr. Peter, can you check on those?\n\n\n> \n> Does it still make sense to send the following to\n> [email protected]?\n> \n> - Scott Williams\n> \n> \n> Subject: pg_dump -u prompts for username/password on stdout rather than stderr\n> ----------------------------------------------------------------------\n> Your name\t\t:\tScott Williams\n> Your email address\t:\[email protected]\n> \n> \n> System Configuration\n> ---------------------\n> Architecture (example: Intel Pentium) \t: i586\n> Operating System (example: Linux 2.0.26 ELF) \t: Linux 2.2.10\n> PostgreSQL version (example: PostgreSQL-6.5.3): PostgreSQL-6.5.3 \n> Compiler used (example: gcc 2.8.0)\t\t: egcs-2.91.66 19990314 (egcs-1.1.2 release)\n> \n> \n> Please enter a FULL description of your problem:\n> ------------------------------------------------\n> \n> When running pg_dump with the -u switch, it prompts me on stdout for\n> the username and password, rather than stderr. Unfortunately this\n> causes a bit of confusion when doing\n> \n> pg_dump -o -u some_db >dump_file\n> \n> since it puts the prompt in the dump_file rather than to the screen,\n> making dump_file syntacticly invalid. Of course the work around is\n> to use the `-f' flag. However, redirecting pg_dump's stdout to a\n> file using `>' is what's shown under `Usage' in the pg_dump chapter\n> of the `PostgreSQL User's Guide'\n> \n> \n> Please describe a way to repeat the problem. Please try to provide a\n> concise reproducible example, if at all possible: \n> ----------------------------------------------------------------------\n> \n> If you know how this problem might be fixed, list the solution below:\n> ---------------------------------------------------------------------\n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 16 Feb 2000 18:36:15 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "psql password prompt"
},
{
"msg_contents": "\nI was noticing that psql now exits on ctrl-C. This is much\nbetter than the previous behaviour where it kinda got\nmuddled up and you could destroy your database if\na half-completed command was in its buffer.\n\nBut wouldn't it be better if it was like /bin/sh and\npopped you back to a fresh prompt or something?\n",
"msg_date": "Thu, 17 Feb 2000 13:39:10 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "psql problem"
},
{
"msg_contents": "Chris Bitmead <[email protected]> writes:\n> I was noticing that psql now exits on ctrl-C.\n\nUgh. So now, if you type control-C while a query is in progress,\nyou get a cancel request sent, as you intended. Type it a tenth of\na second too late, however, and you get booted out of psql instead.\n\nI think this is lousy human engineering, even though I'm sure Peter\nthought it was a good idea at the time. If we trap control-C we\nshould trap it all the time, not create a delay-dependent behavior.\n\n> This is much better than the previous behaviour where it kinda got\n> muddled up and you could destroy your database if a half-completed\n> command was in its buffer.\n\nWhat? Are you saying that control-C doesn't do a \\r (reset the\nquery buffer)? That's probably true, and I agree that it should...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 16 Feb 2000 22:30:28 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql problem "
},
{
"msg_contents": "> \n> I was noticing that psql now exits on ctrl-C. This is much\n> better than the previous behaviour where it kinda got\n> muddled up and you could destroy your database if\n> a half-completed command was in its buffer.\n\nYes, ^C exits if you are at a prompt, and terminates the current query\nif you are running one. Very nice.\n\n> But wouldn't it be better if it was like /bin/sh and\n> popped you back to a fresh prompt or something?\n\nYou mean reprinted the prompt?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 16 Feb 2000 22:34:02 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] psql problem]"
},
{
"msg_contents": "> Chris Bitmead <[email protected]> writes:\n> > I was noticing that psql now exits on ctrl-C.\n> \n> Ugh. So now, if you type control-C while a query is in progress,\n> you get a cancel request sent, as you intended. Type it a tenth of\n> a second too late, however, and you get booted out of psql instead.\n> \n> I think this is lousy human engineering, even though I'm sure Peter\n> thought it was a good idea at the time. If we trap control-C we\n> should trap it all the time, not create a delay-dependent behavior.\n\nYes, I figured that would be an issue. Not sure if I like it or not. \nOf course, ^D exits you if you are not in a query.\n\n> \n> > This is much better than the previous behaviour where it kinda got\n> > muddled up and you could destroy your database if a half-completed\n> > command was in its buffer.\n> \n> What? Are you saying that control-C doesn't do a \\r (reset the\n> query buffer)? That's probably true, and I agree that it should...\n\n\nLooks like it works fine:\n\n\ttest=> select * from pg_class, pg_proc;\n\t^C\n\tCancel request sent\n\tERROR: Query was cancelled.\n\ttest=> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 16 Feb 2000 22:36:22 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] psql problem"
},
{
"msg_contents": ">> What? Are you saying that control-C doesn't do a \\r (reset the\n>> query buffer)? That's probably true, and I agree that it should...\n\n> Looks like it works fine:\n\n> \ttest=> select * from pg_class, pg_proc;\n> \t^C\n> \tCancel request sent\n> \tERROR: Query was cancelled.\n> \ttest=> \n\nNo, I think Chris was complaining about the behavior with an\nincomplete query in the buffer. I can't show it with current\nsources since psql is exiting on ^C, but 6.5 works like this:\n\n\nplay=> foobar\nplay-> ^C\nCANCEL request sent\n <-- return typed here to get a prompt\nplay-> select 2+2;\nERROR: parser: parse error at or near \"foobar\"\nplay=>\n\n\nNotice the prompt correctly shows that I still have stuff in the\nquery buffer after ^C. I think Chris is saying that ^C should\nflush the buffer like \\r does ... and I agree.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 16 Feb 2000 23:02:21 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql problem "
},
{
"msg_contents": "\nThere are two issues. One is what happens when there is something\nin the buffer and I hit ^C. Intuitively I think I should get back to \na \"sane\" state ready for a new query. I mean I start off typing\na long query, then I change my mind I want ^C to get me a\nclear prompt.\n\nWhat used to happen is it says \"CANCEL request sent\". Then I\nthink \"ok I'll put in a different query\". but that doesn't work.\nMaybe \\r was the correct command but I never took the time\nto learn that.\n\nThe other issue is what happens if I'm just at a prompt, I don't\nthink it should exit on ^C. Basicly this is because I'm familiar\nwith the way /bin/sh works and I wish psql had the same semantics.\n\nTom Lane wrote:\n> \n> >> What? Are you saying that control-C doesn't do a \\r (reset the\n> >> query buffer)? That's probably true, and I agree that it should...\n> \n> > Looks like it works fine:\n> \n> > test=> select * from pg_class, pg_proc;\n> > ^C\n> > Cancel request sent\n> > ERROR: Query was cancelled.\n> > test=>\n> \n> No, I think Chris was complaining about the behavior with an\n> incomplete query in the buffer. I can't show it with current\n> sources since psql is exiting on ^C, but 6.5 works like this:\n> \n> play=> foobar\n> play-> ^C\n> CANCEL request sent\n> <-- return typed here to get a prompt\n> play-> select 2+2;\n> ERROR: parser: parse error at or near \"foobar\"\n> play=>\n> \n> Notice the prompt correctly shows that I still have stuff in the\n> query buffer after ^C. I think Chris is saying that ^C should\n> flush the buffer like \\r does ... and I agree.\n> \n> regards, tom lane\n",
"msg_date": "Thu, 17 Feb 2000 15:44:07 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql problem"
},
{
"msg_contents": "On Wed, 16 Feb 2000, Tom Lane wrote:\n\n> >> What? Are you saying that control-C doesn't do a \\r (reset the\n> >> query buffer)? That's probably true, and I agree that it should...\n> \n> > Looks like it works fine:\n> \n> > \ttest=> select * from pg_class, pg_proc;\n> > \t^C\n> > \tCancel request sent\n> > \tERROR: Query was cancelled.\n> > \ttest=> \n> \n> No, I think Chris was complaining about the behavior with an\n> incomplete query in the buffer. I can't show it with current\n> sources since psql is exiting on ^C, but 6.5 works like this:\n\nActually from reading Chris' note, he said that it was 'better than \nprevious behaviour'. Note the key word was *previous*.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN: $24.95/mo or less - 56K Dialup: $17.95/mo or less at Pop4\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 17 Feb 2000 05:56:54 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql problem "
},
{
"msg_contents": "On Thu, 17 Feb 2000, Chris Bitmead wrote:\n\n> I was noticing that psql now exits on ctrl-C.\n\n> But wouldn't it be better if it was like /bin/sh and\n> popped you back to a fresh prompt or something?\n\nI have, in fact, been muddling with this. If demand is there (apparently),\nI can speed up the muddling.\n\nFor all of those that pretend to never have heard of this behaviour, I\nexplicitly announced it and even got several positive responses.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Thu, 17 Feb 2000 12:17:09 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql problem"
},
{
"msg_contents": "On Wed, 16 Feb 2000, Tom Lane wrote:\n\n> No, I think Chris was complaining about the behavior with an\n> incomplete query in the buffer. I can't show it with current\n> sources since psql is exiting on ^C, but 6.5 works like this:\n> \n> play=> foobar\n> play-> ^C\n> CANCEL request sent\n> <-- return typed here to get a prompt\n\nActually, I have an idea why that is, too. The signal handler should tell\nreadline that input is done. At the time you press return there, it's\nstill reading input.\n\n> play-> select 2+2;\n> ERROR: parser: parse error at or near \"foobar\"\n> play=>\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Thu, 17 Feb 2000 12:21:03 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql problem "
},
{
"msg_contents": "On Wed, 16 Feb 2000, Bruce Momjian wrote:\n\n> Peter E. can you look at this? I see simple prompt doing:\n> \n> fputs(prompt, stdout);\n> \n> which I think should be stderr. Peter, can you check on those?\n\nWill be fixed.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Thu, 17 Feb 2000 12:25:16 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql password prompt"
}
] |
[
{
"msg_contents": "Hi\n\nI have a crash while creating regression database in pararell regression\ntest.\nSeems it's due to the following change.\n\n@@ -2638,7 +2705,14 @@\n \t\t\t\t\tn->dbname = $3;\n \t\t\t\t\tn->dbpath = $5;\n #ifdef MULTIBYTE\n- n->encoding = $6;\n+\t\t\t\t\tif ($6 != NULL) {\n+\t\t\t\t\t\tn->encoding = pg_char_to_encoding($6);\n+\t\t\t\t\t\tif (n->encoding < 0) {\n+\t\t\t\t\t\t\telog(ERROR, \"Encoding name '%s' is invalid\", $6);\n+\t\t\t\t\t\t}\n+\t\t\t\t\t} else {\n+\t\t\t\t\t\tn->encoding = GetTemplateEncoding();\n+\t\t\t\t\t}\n #else\n \t\t\t\t\tn->encoding = 0;\n #endif\n\nWhy ?\n$6 is an ival not an str.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Thu, 17 Feb 2000 09:42:24 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "create database doesn't work well in MULTIBYTE mode"
},
{
"msg_contents": "> I have a crash while creating regression database in pararell regression\n> test.\n> Seems it's due to the following change.\n> \n> @@ -2638,7 +2705,14 @@\n> \t\t\t\t\tn->dbname = $3;\n> \t\t\t\t\tn->dbpath = $5;\n> #ifdef MULTIBYTE\n> - n->encoding = $6;\n> +\t\t\t\t\tif ($6 != NULL) {\n> +\t\t\t\t\t\tn->encoding = pg_char_to_encoding($6);\n> +\t\t\t\t\t\tif (n->encoding < 0) {\n> +\t\t\t\t\t\t\telog(ERROR, \"Encoding name '%s' is invalid\", $6);\n> +\t\t\t\t\t\t}\n> +\t\t\t\t\t} else {\n> +\t\t\t\t\t\tn->encoding = GetTemplateEncoding();\n> +\t\t\t\t\t}\n> #else\n> \t\t\t\t\tn->encoding = 0;\n> #endif\n> \n> Why ?\n> $6 is an ival not an str.\n\nNot sure what to recommend here, but you clearly have found a problem. \nTry it as a string, and if that works, patch it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 17 Feb 2000 00:40:32 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] create database doesn't work well in MULTIBYTE mode"
},
{
"msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n> Sent: Thursday, February 17, 2000 2:41 PM\n>\n> > I have a crash while creating regression database in pararell regression\n> > test.\n> > Seems it's due to the following change.\n> >\n> > @@ -2638,7 +2705,14 @@\n> > \t\t\t\t\tn->dbname = $3;\n> > \t\t\t\t\tn->dbpath = $5;\n> > #ifdef MULTIBYTE\n> > - n->encoding = $6;\n> > +\t\t\t\t\tif ($6 != NULL) {\n> > +\t\t\t\t\t\tn->encoding =\n> pg_char_to_encoding($6);\n> > +\t\t\t\t\t\tif (n->encoding < 0) {\n> > +\t\t\t\t\t\t\telog(ERROR,\n> \"Encoding name '%s' is invalid\", $6);\n> > +\t\t\t\t\t\t}\n> > +\t\t\t\t\t} else {\n> > +\t\t\t\t\t\tn->encoding =\n> GetTemplateEncoding();\n> > +\t\t\t\t\t}\n> > #else\n> > \t\t\t\t\tn->encoding = 0;\n> > #endif\n> >\n> > Why ?\n> > $6 is an ival not an str.\n>\n> Not sure what to recommend here, but you clearly have found a problem.\n> Try it as a string, and if that works, patch it.\n>\n\n$6 is already converted from string to ival in another place.\nIt seems to me that this change is unnecessary.\nI don't understand why this was changed recently.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Thu, 17 Feb 2000 15:04:03 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] create database doesn't work well in MULTIBYTE mode"
},
{
"msg_contents": "Good point. Thomas? ;)\n\nAs the one who wrote the seemingly correct code, I say revert this part.\n\n\nOn Thu, 17 Feb 2000, Hiroshi Inoue wrote:\n\n> I have a crash while creating regression database in pararell\n> regression test. Seems it's due to the following change.\n> \n> @@ -2638,7 +2705,14 @@\n> \t\t\t\t\tn->dbname = $3;\n> \t\t\t\t\tn->dbpath = $5;\n> #ifdef MULTIBYTE\n> - n->encoding = $6;\n> +\t\t\t\t\tif ($6 != NULL) {\n> +\t\t\t\t\t\tn->encoding = pg_char_to_encoding($6);\n> +\t\t\t\t\t\tif (n->encoding < 0) {\n> +\t\t\t\t\t\t\telog(ERROR, \"Encoding name '%s' is invalid\", $6);\n> +\t\t\t\t\t\t}\n> +\t\t\t\t\t} else {\n> +\t\t\t\t\t\tn->encoding = GetTemplateEncoding();\n> +\t\t\t\t\t}\n> #else\n> \t\t\t\t\tn->encoding = 0;\n> #endif\n> \n> Why ?\n> $6 is an ival not an str.\n> \n> Regards.\n> \n> Hiroshi Inoue\n> [email protected]\n> \n> \n> ************\n> \n> \n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Thu, 17 Feb 2000 12:48:57 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] create database doesn't work well in MULTIBYTE mode"
},
{
"msg_contents": "> Good point. Thomas? ;)\n> As the one who wrote the seemingly correct code, I say revert this part.\n\nOK, so just a simple assignment to $6 is what is needed? I vaguely\nremember a merging problem here, and obviously chose the wrong block\nof code to retain. Amazing that the desired code is actually simpler\n:)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 17 Feb 2000 14:43:35 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] create database doesn't work well in MULTIBYTE mode"
},
{
"msg_contents": "> $6 is already converted from string to ival in another place.\n> It seems to me that this change is unnecessary.\n> I don't understand why this was changed recently.\n\nAt the moment, if the code is compiled without MULTIBYTE enabled, it\nwill silently ignore any \"ENCODING=\" clause in a CREATE DATABASE\nstatement.\n\nWouldn't it be more appropriate to throw an elog(ERROR) in this case\n(or perhaps an elog(WARN))? I've got the code ready to add in.\nComments?\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 17 Feb 2000 15:11:19 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] create database doesn't work well in MULTIBYTE mode"
},
{
"msg_contents": "On 2000-02-17, Thomas Lockhart mentioned:\n\n> At the moment, if the code is compiled without MULTIBYTE enabled, it\n> will silently ignore any \"ENCODING=\" clause in a CREATE DATABASE\n> statement.\n\nHuh?\n\ntemplate1=# create database foo with encoding='LATIN1';\nERROR: Multi-byte support is not enabled\n\nI believe that you have missed that a fair amount of work is being done in\nthe create_opt_encoding rule. Take a look.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Fri, 18 Feb 2000 01:20:30 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] create database doesn't work well in MULTIBYTE mode"
},
{
"msg_contents": "> > At the moment, if the code is compiled without MULTIBYTE enabled, it\n> > will silently ignore any \"ENCODING=\" clause in a CREATE DATABASE\n> > statement.\n> template1=# create database foo with encoding='LATIN1';\n> ERROR: Multi-byte support is not enabled\n> I believe that you have missed that a fair amount of work is being done in\n> the create_opt_encoding rule. Take a look.\n\nAh, thanks. I was misled (why try actually testing it? I was reading\nthe source ;) by some crufty code above createdb_opt_encoding (some\ntabs removed for readability):\n\n...\n#ifdef MULTIBYTE\n\tn->encoding = $6;\n#else\n\tn->encoding = 0;\n#endif\n...\n\nwhere in fact if MULTIBYTE is not enabled and $6 is non-empty, the $6\nproduction never returns! I'm planning on fixing this up (yacc\nwilling) to *not* do anything special when MULTIBYTE is on or off, but\nwill instead embed all of this funny business down in\ncreatedb_opt_encoding with the other stuff already there.\n\nSo, why does the createdb_opt_encoding ($6 above) bother trying to\nreturn \"-1\" when MULTIBYTE is disabled, when that -1 is ignored and\nreplaced with a zero anyway? Is it acceptable to return -1, as the $6\nproduction does, or should it really be returning the zero which is\npassed along??\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 18 Feb 2000 04:49:29 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] create database doesn't work well in MULTIBYTE mode"
},
{
"msg_contents": "On Fri, 18 Feb 2000, Thomas Lockhart wrote:\n\n> ...\n> #ifdef MULTIBYTE\n> \tn->encoding = $6;\n> #else\n> \tn->encoding = 0;\n> #endif\n> ...\n> \n> where in fact if MULTIBYTE is not enabled and $6 is non-empty, the $6\n> production never returns!\n\nIt will if you write ENCODING = DEFAULT.\n\nAlso, the rule you're looking at also covers the case CREATE DATABASE WITH\nLOCATION (no ENCODING clause given). In that case, with MULTIBYTE on, $6\nwill be set to GetTemplateEncoding() in the create_opt_encoding: /*EMPTY*/\nrule. With MULTIBYTE off, you must set encoding to 0, because that's the\ndefault SQL_ASCII encoding, and the createdb() function (where this all\nends up), does no further interpretation on the encoding at all. Either\nway you read it, I still think the previous code is completely correct as\nit stands.\n\n> So, why does the createdb_opt_encoding ($6 above) bother trying to\n> return \"-1\" when MULTIBYTE is disabled, when that -1 is ignored and\n> replaced with a zero anyway? Is it acceptable to return -1, as the $6\n> production does, or should it really be returning the zero which is\n> passed along??\n\nTwo reasons: First, it's better to have some rule than none at all, I\nthunk. Second, if someone mucks with the code and somehow we have a -1\nencoding the database, we know exactly what went wrong. If you feel so\ninclined, you can change it to a zero, but after all the code works\nperfectly.\n\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Fri, 18 Feb 2000 15:47:05 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] create database doesn't work well in MULTIBYTE mode"
}
] |
[
{
"msg_contents": ">> You had inquired earlier about \"when we would support complete SQL92\"\n>> (give or take a few words). What areas of entry level SQL92 are we\n>> missing in your opinion (or should we wait for the article)?\n> Well, what I look for on the language side is complete SQL-92 entry level\n> compliance, plus common language extensions like outer joins, cast, case,\n> cube, rollup, a datetime data type, add table constraint and alter table.\n> Also, I look for a stored procedure language. Basically, parity with the\n> commercial databases. :)\n\nI've since seen the article in the latest issue of PCWeek. The article\nwas not at all clear on the *specific* features which would disqualify\nPostgres from having SQL92 entry level compliance (for most commercial\nRDBMSes this is the only level they attain), and I was amused to note\nthat although InterBase was lauded for SQL92 compliance, the author\ndid encourage them to consider supporting the SQL92 comment delimiter\n(\"--\") in their next release :))\n\nSince InterBase has not been released as Open Source, and since we\nwill have a 7.0 release *before* Inprise does try the Open Source\nthing, it would be nice to have those things happen before annointing\nit as the \"one true Open Source RDBMS\" (tm). But frankly PCWeek has\nbeen far more aggressively clueless in the past, and all in all they\nare coming much closer to a balanced view of the world over the last\nfew months.\n\nIt's nice seeing Postgres mentioned at all (though in this article\nnone of the titles or subtitles mentioned us; you had to look at the\ncontent), and we've still got a long way to go to completely overcome\nthe FUD-based criticisms of Open Source which were more clearly\napparent in PCWeek until very recently.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 17 Feb 2000 06:32:48 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SQL compliance,\n\twas Re: [HACKERS] follow-up on PC Week Labsbenchmark results"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> \n> I've since seen the article in the latest issue of PCWeek. The article\n> was not at all clear on the *specific* features which would disqualify\n> Postgres from having SQL92 entry level compliance (for most commercial\n> RDBMSes this is the only level they attain), and I was amused to note\n> that although InterBase was lauded for SQL92 compliance, the author\n> did encourage them to consider supporting the SQL92 comment delimiter\n> (\"--\") in their next release :))\n\nWhy does PostgreSQL _not_ support the -- comment delimiter ?\n\nIs there something complicated to supporting it in parser ?\n\nIMNSHO it would require only a few lines in gram.y\n\nDoes supporting user-defined operators interfere ?\n\nI assume we could comfortably disallow -- as a possible operator (one \ncan't input it from interactive psql anyway)\n\n--------------\nHannu\n",
"msg_date": "Sat, 19 Feb 2000 01:36:38 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL compliance - why -- comments only at psql level ?"
},
{
"msg_contents": "Hannu Krosing <[email protected]> writes:\n> Thomas Lockhart wrote:\n>> ... although InterBase was lauded for SQL92 compliance, the author\n>> did encourage them to consider supporting the SQL92 comment delimiter\n>> (\"--\") in their next release :))\n\n> Why does PostgreSQL _not_ support the -- comment delimiter ?\n\nBetter read it again, Hannu ... wasn't us that was being spoken of ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 18 Feb 2000 19:21:50 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: SQL compliance - why -- comments only at psql level\n\t?"
},
{
"msg_contents": "I believe it is Interbase that does not support the -- comment.\n\n> Thomas Lockhart wrote:\n> > \n> > I've since seen the article in the latest issue of PCWeek. The article\n> > was not at all clear on the *specific* features which would disqualify\n> > Postgres from having SQL92 entry level compliance (for most commercial\n> > RDBMSes this is the only level they attain), and I was amused to note\n> > that although InterBase was lauded for SQL92 compliance, the author\n> > did encourage them to consider supporting the SQL92 comment delimiter\n> > (\"--\") in their next release :))\n> \n> Why does PostgreSQL _not_ support the -- comment delimiter ?\n> \n> Is there something complicated to supporting it in parser ?\n> \n> IMNSHO it would require only a few lines in gram.y\n> \n> Does supporting user-defined operators interfere ?\n> \n> I assume we could comfortably disallow -- as a possible operator (one \n> can't input it from interactive psql anyway)\n> \n> --------------\n> Hannu\n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 18 Feb 2000 19:38:35 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: SQL compliance - why -- comments only at psql level\n\t?"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Hannu Krosing <[email protected]> writes:\n> > Thomas Lockhart wrote:\n> >> ... although InterBase was lauded for SQL92 compliance, the author\n> >> did encourage them to consider supporting the SQL92 comment delimiter\n> >> (\"--\") in their next release :))\n> \n> > Why does PostgreSQL _not_ support the -- comment delimiter ?\n> \n> Better read it again, Hannu ... wasn't us that was being spoken of ...\n\nI got the impression from the paragraph that followed that we don't\n\nand the first query I tried bounced from commandline\n\n[hannu@hu hannu]$ psql -c \"select count(*) from t1 -- what\"\nERROR: parser: parse error at or near \"-\"\n\nbut worked interactively:\n\n[hannu@hu TeleHansaPlus]$ psql\nWelcome to the POSTGRESQL interactive sql monitor:\n Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n[PostgreSQL 6.5.2 on i586-pc-linux-gnu, compiled by gcc egcs-2.91.66]\n\n type \\? for help on slash commands\n type \\q to quit\n type \\g or terminate with semicolon to execute query\n You are currently connected to the database: hannu\n\nhannu=> select count(*) from t1 -- what\nhannu-> ;\ncount\n-----\n 3\n(1 row)\n\nand failed also when used from python\n\n[hannu@hu TeleHansaPlus]$ python\nPython 1.5.2 (#1, Apr 18 1999, 16:03:16) [GCC pgcc-2.91.60 19981201\n(egcs-1.1.1 on linux2\nCopyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam\n>>> import pg\n>>> con=pg.connect('hannu')\n>>> con.query(\"select count(*) from t1 -- what\")\nTraceback (innermost last):\n File \"<stdin>\", line 1, in ?\npg.error: ERROR: parser: parse error at or near \"-\"\n\n\nSo assumed it was handled in psql when in interactive mode.\n\n------------\nHannu\n",
"msg_date": "Sat, 19 Feb 2000 03:36:41 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: SQL compliance - why -- comments only at psql level\n\t?"
},
{
"msg_contents": "> Tom Lane wrote:\n> > \n> > Hannu Krosing <[email protected]> writes:\n> > > Thomas Lockhart wrote:\n> > >> ... although InterBase was lauded for SQL92 compliance, the author\n> > >> did encourage them to consider supporting the SQL92 comment delimiter\n> > >> (\"--\") in their next release :))\n> > \n> > > Why does PostgreSQL _not_ support the -- comment delimiter ?\n> > \n> > Better read it again, Hannu ... wasn't us that was being spoken of ...\n> \n> I got the impression from the paragraph that followed that we don't\n> \n> and the first query I tried bounced from commandline\n> \n> [hannu@hu hannu]$ psql -c \"select count(*) from t1 -- what\"\n> ERROR: parser: parse error at or near \"-\"\n\nWorked here:\n\n\t$ sql -c \"select count(*) from t1 -- what\" test\n\tERROR: Relation 't1' does not exist\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 18 Feb 2000 20:38:06 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: SQL compliance - why -- comments only at psql level\n\t?"
},
{
"msg_contents": "> > I got the impression from the paragraph that followed that we don't\n> > and the first query I tried bounced from commandline\n> > So assumed it was handled in psql when in interactive mode.\n> \n> Yuck. They *were* talking about InterBase, but you're right!\n> \n> Didn't realize that scan.l had lost (or never did have) the right\n> stuff. Will be fixed before we're out of beta...\n\nCan you give me a failure condition for the TODO list? I can't see the\nbug here.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 18 Feb 2000 21:05:56 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: SQL compliance - why -- comments only at psql level\n\t?"
},
{
"msg_contents": "> I got the impression from the paragraph that followed that we don't\n> and the first query I tried bounced from commandline\n> So assumed it was handled in psql when in interactive mode.\n\nYuck. They *were* talking about InterBase, but you're right!\n\nDidn't realize that scan.l had lost (or never did have) the right\nstuff. Will be fixed before we're out of beta...\n\n - Tom\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Sat, 19 Feb 2000 02:08:42 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: SQL compliance - why -- comments only at psql level\n\t?"
},
{
"msg_contents": "> Can you give me a failure condition for the TODO list? I can't see the\n> bug here.\n\nWell, now that I got off my duff and tried your little test with my\ncurrent sources, I get your result. Hannu??\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Sat, 19 Feb 2000 02:24:38 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: SQL compliance - why -- comments only at psql\n level?"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> Yuck. They *were* talking about InterBase, but you're right!\n\n> Didn't realize that scan.l had lost (or never did have) the right\n> stuff. Will be fixed before we're out of beta...\n\nI've griped about these boundary conditions before, actually ---\nalthough scan.l does the right thing most of the time with comments,\nit has problems if a -- comment is terminated with \\r instead of \\n\n(hence gripes from Windows users), and it also has problems if a --\ncomment is not terminated with \\n before the end of the buffer.\n\nThere are some other cases where \\r is not taken as equivalent\nto \\n, also.\n\nAm testing a fix now.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 18 Feb 2000 21:37:40 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: SQL compliance - why -- comments only at psql level\n\t?"
},
{
"msg_contents": "On 2000-02-17, Thomas Lockhart mentioned:\n\n> I've since seen the article in the latest issue of PCWeek. The article\n> was not at all clear on the *specific* features which would disqualify\n> Postgres from having SQL92 entry level compliance\n\nI dug through the standard to come up with a list. I probably missed some\nthings, but they would be more of a lexical nature. I think I covered all\nlanguage constructs (which is what people look at anyway). Some of these\nthings I never used, so I merely tested them by looking at the current\ndocumentation and/or entering a simple example query. Also, this list\ndoesn't care whether an implemented feature contains bugs that would\nactually disqualify it from complete compliance.\n\n\n* TIME and TIMESTAMP WITH TIMEZONE missing [6.1]\n\n* Things such as SELECT MAX(ALL x) FROM y; don't work. [6.5]\n{This seems to be an easy grammar fix.}\n\n* LIKE with ESCAPE clause missing [8.5]\n{Is on TODO.}\n\n* SOME / ANY doesn't seem to exist [8.7]\n\n* Grant privileges have several deficiencies [10.3, 11.36]\n\n* Schemas [11.1, 11.2]\n\n* CREATE VIEW name (x, y, z) doesn't work [11.19]\n\n* There's a WITH CHECK OPTION clause for CREATE VIEW [11.19]\n\n* no OPEN statement [13.2]\n\n* FETCH syntax has a few issues [13.3]\n\n* SELECT x INTO a, b, c table [13.5]\n\n* DELETE WHERE CURRENT OF [13.6]\n\n* INSERT INTO table DEFAULT VALUES [13.8]\n{Looks like a grammar fix as well.}\n\n* UPDATE WHERE CURRENT OF [13.9]\n\n* no SQLSTATE, SQLCODE [22.1, 22.2]\n{Not sure about that one, since the sections don't contain leveling\ninformation.}\n\n* default transaction isolation level is SERIALIZABLE\n{Why isn't ours?}\n\n* no autocommit in SQL\n\n* modules? [12]\n\n* Some type conversion problems. For example a DECIMAL field should not\ndump out as NUMERIC, and a FLOAT(x) field should be stored as such.\n\n[* Haven't looked at Embedded SQL.]\n\n\nThat's it. :)\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Sat, 19 Feb 2000 15:12:24 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL compliance"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> * Things such as SELECT MAX(ALL x) FROM y; don't work. [6.5]\n> {This seems to be an easy grammar fix.}\n\nYes, and since ALL is already a reserved word, it wouldn't break\nanything to accept it. I'll try to take care of that today.\nNone of the other stuff is quite as easy to fix :-(\n\n\n> * INSERT INTO table DEFAULT VALUES [13.8]\n> {Looks like a grammar fix as well.}\n\nHuh? We do have DEFAULT VALUES --- what is wrong exactly?\n\nWhat we don't seem to have is full <table value constructor> per 7.2;\nwe only allow VALUES ... in INSERT, whereas SQL allows it in other\nconstructs where a sub-SELECT would be legal, and we don't accept\nmultiple rows in VALUES. For example, you should be able to write\n\n\tINSERT INTO t VALUES (1,2,3), (4,5,6), (7,8,9), ...\n\nbut we don't accept that now. The spec also shows several examples like\n\n CONSTRAINT DOMAIN_CONSTRAINTS_CHECK_DEFERRABLE\n CHECK ( ( IS_DEFERRABLE, INITIALLY_DEFERRED ) IN\n ( VALUES ( 'NO', 'NO' ),\n ( 'YES', 'NO' ),\n ( 'YES', 'YES' ) ) )\n\n\nThanks for digging through the spec ... I bet that was tedious ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 19 Feb 2000 12:16:10 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: SQL compliance "
},
{
"msg_contents": "On 2000-02-19, Tom Lane mentioned:\n\n> > * INSERT INTO table DEFAULT VALUES [13.8]\n> > {Looks like a grammar fix as well.}\n> \n> Huh? We do have DEFAULT VALUES --- what is wrong exactly?\n\nNot documented. ;)\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Sun, 20 Feb 2000 03:37:33 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: SQL compliance "
},
{
"msg_contents": "On 2000-02-19, Tom Lane mentioned:\n\n> What we don't seem to have is full <table value constructor> per 7.2;\n> we only allow VALUES ... in INSERT, whereas SQL allows it in other\n> constructs where a sub-SELECT would be legal,\n\nNot required by Intermediate Level.\n\n> and we don't accept\n> multiple rows in VALUES. For example, you should be able to write\n> \n> \tINSERT INTO t VALUES (1,2,3), (4,5,6), (7,8,9), ...\n> \n> but we don't accept that now.\n\nNot required either.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Sun, 20 Feb 2000 15:22:14 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: SQL compliance "
},
{
"msg_contents": "Thomas Lockhart wrote:\n> \n> > Can you give me a failure condition for the TODO list? I can't see the\n> > bug here.\n> \n> Well, now that I got off my duff and tried your little test with my\n> current sources, I get your result. Hannu??\n\nMy tests were with 6.5.3 which has even more yuckies in it :\n\n[hannu@hu hannu]$ psql -c \"select -- what ? \n> count(*) from t1;\"\nERROR: attribute 'what' not found\n[hannu@hu hannu]$ psql -c \"select -- what \n> count(*) from t1;\"\nERROR: parser: parse error at or near \"count\"\n[hannu@hu hannu]$ psql -c \"select count(*) -- what\n> from t1;\"\ncount\n-----\n 3\n(1 row)\n\n\nBut they all work from psql\n\nhannu=> select -- what ?\nhannu-> count(*) from t1;\ncount\n-----\n 3\n(1 row)\n\nhannu=> select count(*) -- what ?\nhannu-> from t1;\ncount\n-----\n 3\n(1 row)\n\n\nCould you try them on current.\n\n-------------\nHannu\n",
"msg_date": "Sun, 20 Feb 2000 17:49:56 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: SQL compliance - why -- comments only at psql\n level?"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> \n> > Can you give me a failure condition for the TODO list? I can't see the\n> > bug here.\n> \n> Well, now that I got off my duff and tried your little test with my\n> current sources, I get your result. Hannu??\n\nBtw, how did you try them ?\n\nCould it be that psql is now stripping comments even in non-interactive \nmode (when using -c or redirected stdin)?\n\nCould you test with some other frontend (python, perl, tcl, C) ?\n\n------------\nHannu\n",
"msg_date": "Sun, 20 Feb 2000 17:52:43 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: SQL compliance - why -- comments only at psql\n level?"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On 2000-02-19, Tom Lane mentioned:\n>> What we don't seem to have is full <table value constructor> per 7.2;\n>> we only allow VALUES ... in INSERT, whereas SQL allows it in other\n>> constructs where a sub-SELECT would be legal,\n\n> Not required by Intermediate Level.\n\nNo, but it's useful enough that we should have it...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 20 Feb 2000 11:34:38 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: SQL compliance "
},
{
"msg_contents": "Hannu Krosing <[email protected]> writes:\n> Could you test with some other frontend (python, perl, tcl, C) ?\n\nYup, psql is untrustworthy as a means of testing the backend's comment\nhandling ;-).\n\nI committed lexer changes on Friday evening that I believe fix all of\nthe backend's problems with \\r versus \\n. The issue with unterminated\n-- comments, which was Hannu's original complaint, was fixed awhile ago;\nbut we still had problems with comments terminated with \\r instead of\n\\n, as well as some non-SQL-compliant behavior for -- comments between\nthe segments of a multiline literal, etc etc.\n\nWhile fixing this I realized that there are some fundamental\ndiscrepancies between the way the backend recognizes comments and the\nway that psql does. These arise from the fact that the comment\nintroducer sequences /* and -- are also legal as parts of operator\nnames, and since the backend is based on lex which uses greedy longest-\navailable-match rules, you get things like this:\n\nselect *-- 123\nERROR: Can't find left op '*--' for type 23\n\n(Parsing '*--' as an operator name wins over parsing just '*' as an\noperator name, so that '--' would be recognized on the next call.)\nMore subtly,\n\nselect /**/- 22\nERROR: parser: parse error at or near \"\"\n\nwhich is the backend's rather lame excuse for an \"unterminated comment\"\nerror. What happens here is that the sequence /**/- is bit off as a\nsingle lexer token, then tested in this order to see if it is\n\t(a) a complete \"/* ... */\" comment (nope),\n\t(b) the start of a comment, \"/* anything\" (yup), or\n\t(c) an operator (which would succeed if it got the chance).\nThere does not seem to be any way to persuade lex to stop at the \"*/\"\nif it has a chance to recognize a longer token by applying the operator\nrule.\n\nBoth of these problems are easily avoided by inserting some whitespace,\nbut I wonder whether we ought to try to fix them for real. One way\nthat this could be done would be to alter the lexer rules so that\noperators are lexed a single character at a time, which'd eliminate\nlex's tendency to recognize a long operator name in place of a comment.\nThen we'd need a post-pass to recombine adjacent operator characters into\na single token. (This would forever prevent anyone from using operator\nnames that include '--' or '/*', but I'm not sure that's a bad thing.)\nThe post-pass would also be a mighty convenient place to fix the NOT NULL\nproblem that's giving us trouble in another thread: the post-pass would\nneed one-token lookahead anyway, so it could very easily convert NOT\nfollowed by NULL into a single special token.\n\nMeanwhile, psql is using some ad-hoc code to recognize comments,\nrather than a lexer, and it thinks both of these sequences are indeed\ncomments. I also find that it strips out the -- flavor of comment,\nbut sends the /* */ flavor on through, which is just plain inconsistent.\nI suggest we change psql to not strip -- comments either. 
The only\nreason for psql to be in the comment-recognition business at all is\nso that it can determine whether a semicolon is end-of-query or just\na character in a comment.\n\nAnother thing I'd like to fix here is to get the backend to produce\na more useful error message than 'parse error at or near \"\"' when it's\npresented with an unterminated comment or unterminated literal.\nThe flex manual recommends coding like\n\n <quote><<EOF>> {\n error( \"unterminated quote\" );\n yyterminate();\n }\n\nbut <<EOF>> is a flex-ism not supported by regular lex. We already\ntell people they have to use flex (though I'm not sure that's *really*\nnecessary at present); do we want to set that requirement in stone?\nOr does anyone know another way to get this effect?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 20 Feb 2000 12:41:44 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: SQL compliance - why -- comments only at psql\n level?"
},
{
"msg_contents": "On 2000-02-20, Tom Lane mentioned:\n\n> select *-- 123\n> ERROR: Can't find left op '*--' for type 23\n\n> select /**/- 22\n> ERROR: parser: parse error at or near \"\"\n\nI believe that these things (certainly the first one) could be fixed by\nmaking the {operator} rule in scan.l rescanning yytext for \"--\" or \"/*\"\n(using string functions) and if found putting part of the token back in\nthe input stream using yyless().\n\n> Meanwhile, psql is using some ad-hoc code to recognize comments,\n> rather than a lexer, and it thinks both of these sequences are indeed\n> comments.\n\nIncidentally, it's right. ;)\n\n> I suggest we change psql to not strip -- comments either.\n\nThat sounds reasonable, although we had a painful discussion about this\nlast fall, I recall, that ended with me leaving it like that. If someone\nwants to bother, be my guest. One of these days, psql should get a lexer\nto fix some other parsing problems as well.\n\n> but <<EOF>> is a flex-ism not supported by regular lex.\n\nExclusive start conditions are not supported by regular lex either. We\nlose. Sometimes I think we're actually doing people a favour by requiring\nflex, because then they don't have to deal with incarnations like Sun's.\n\nIf you want to catch unbalanced quotes at the end of input, I could\nimagine that some grand unified evilness via yywrap setting some global\nflag or two might get the job done.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Tue, 22 Feb 2000 00:57:12 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: SQL compliance - why -- comments only at psql\n level?"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On 2000-02-20, Tom Lane mentioned:\n>> select *-- 123\n>> ERROR: Can't find left op '*--' for type 23\n\n> I believe that these things (certainly the first one) could be fixed by\n> making the {operator} rule in scan.l rescanning yytext for \"--\" or \"/*\"\n> (using string functions) and if found putting part of the token back in\n> the input stream using yyless().\n\nI think you are right, that would work. Is yyless flex-specific or a\ngeneric lex feature?\n\nThe intermediate-lookahead-buffer solution might still be better, if it\nlets us solve more problems than just this one. I'm inclined to not\ndo anything until Thomas decides what he wants to do about the NOT NULL\nbusiness.\n\n>> but <<EOF>> is a flex-ism not supported by regular lex.\n\n> Exclusive start conditions are not supported by regular lex either.\n\nOooh, somehow I managed to completely miss that statement in the flex\nmanual, but you are right. Hmm. I think that shoots a gaping hole in\nmy desire to have scan.l work with plain lex. Offhand I don't see a\ngood way to avoid using exclusive start conditions for multi-section\nliterals.\n\n> If you want to catch unbalanced quotes at the end of input, I could\n> imagine that some grand unified evilness via yywrap setting some global\n> flag or two might get the job done.\n\nRight at the moment I'm thinking we might as well use <<EOF>>, which\nis after all the recommended way of doing it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 21 Feb 2000 23:19:28 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: SQL compliance - why -- comments only at psql\n\tlevel?"
},
{
"msg_contents": "> > I've since seen the article in the latest issue of PCWeek. The article\n> > was not at all clear on the *specific* features which would disqualify\n> > Postgres from having SQL92 entry level compliance\n> I dug through the standard to come up with a list [ of missing features ].\n> * TIME and TIMESTAMP WITH TIMEZONE missing [6.1]\n\nTIMESTAMP WITH TIME ZONE is already available (and was for v6.5.x\ntoo). I'll add syntax to allow TIME WITH TIME ZONE for v7.0.\n\n> * SOME / ANY doesn't seem to exist [8.7]\n> \n> * Grant privileges have several deficiencies [10.3, 11.36]\n> \n> * Schemas [11.1, 11.2]\n> \n> * CREATE VIEW name (x, y, z) doesn't work [11.19]\n> \n> * There's a WITH CHECK OPTION clause for CREATE VIEW [11.19]\n> \n> * no OPEN statement [13.2]\n> \n> * FETCH syntax has a few issues [13.3]\n> \n> * SELECT x INTO a, b, c table [13.5]\n> \n> * DELETE WHERE CURRENT OF [13.6]\n> \n> * INSERT INTO table DEFAULT VALUES [13.8]\n> {Looks like a grammar fix as well.}\n> \n> * UPDATE WHERE CURRENT OF [13.9]\n> \n> * no SQLSTATE, SQLCODE [22.1, 22.2]\n> {Not sure about that one, since the sections don't contain leveling\n> information.}\n> \n> * default transaction isolation level is SERIALIZABLE\n> {Why isn't ours?}\n> \n> * no autocommit in SQL\n> \n> * modules? [12]\n> \n> * Some type conversion problems. For example a DECIMAL field should not\n> dump out as NUMERIC, and a FLOAT(x) field should be stored as such.\n> \n> [* Haven't looked at Embedded SQL.]\n> \n> That's it. :)\n> \n> --\n> Peter Eisentraut Sernanders v�g 10:115\n> [email protected] 75262 Uppsala\n> http://yi.org/peter-e/ Sweden\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 02 Mar 2000 14:48:30 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SQL compliance"
}
] |
[
{
"msg_contents": "I tried fixing some of the known problems with comparison of INET values\n(cf. thread \"uniqueness not always correct\" on 11/11/99, among others),\nand was surprised to discover that my changes affected the results of\nthe inet regress test. Specifically, the regress test exercises all the\ninet comparison operators on the two data values\n\t'10.1.2.3/8'::inet '10.0.0.0/32'::cidr\nThe old code believes that the first of these is greater, while my\nrevised code thinks the second is greater.\n\nNow, my understanding of things is that '10.1.2.3/8' is just an\nunreasonably verbose way of writing '10/8', because if you write /8\nyou are saying that only the first 8 bits mean anything. So it seems\nto me that we are really comparing '10/8' and '10.0.0.0/32', and the\nformer should be considered the lesser in the same way that 'ab'\ncomes before 'abc' in dictionaries.\n\nIs the regress test's expected output wrong, or have I missed\nsomething?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 17 Feb 2000 02:25:13 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Definitional issue for INET types"
},
{
"msg_contents": "Tom Lane wrote:\n\n> Now, my understanding of things is that '10.1.2.3/8' is just an\n> unreasonably verbose way of writing '10/8', because if you write /8\n> you are saying that only the first 8 bits mean anything. \n\nNot really. In a classed view on a network, the /8 is undefined - and\nworse, there is no real concept of a address consisting of a \nnetwork/netmask tuple. /8 might imply that 10.1.2.3 is in the class A\nsegment, it might be considered a 255.0.0.0 netmask with any possible\ninterpretation of the latter, or it might be entirely ignored. For\n::cidr vs. ::cidr the answer is clear - apply the masks and match then,\nwhich would make 10/8 lesser by all means.\n\n> So it seems\n> to me that we are really comparing '10/8' and '10.0.0.0/32', and the\n> former should be considered the lesser in the same way that 'ab'\n> comes before 'abc' in dictionaries.\n> \n> Is the regress test's expected output wrong, or have I missed\n> something?\n\nTough question. There are some nasty details differing between classed\nnetwork notation and CIDR notation, and we certainly cannot reconcile\nthem all in operators. As the significant digits are meaningless in\nclassed notation, they might either be ignored or interpreted according\nto any rule applying to classed netmasks, which really depends on the\ncontext of the network device - a router, firewall or audit tool might\neach have different semantics and requirements. \n\nI'll see whether I can figure out something consistent for the inet data\ntype. As it is right now, we might just as well drop it - it is both\nsynonymous to cidr and to a cidr /32 host, which simply can't be.\nPersonally, I don't think we would lose any functionality if we drop it,\nas long as we have functions that return classed network structures like\nthe base address and a networks subnettable range. \n\nSevo\n\n\n-- \nSevo Stille\[email protected]\n",
"msg_date": "Thu, 17 Feb 2000 13:07:59 +0100",
"msg_from": "Sevo Stille <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Definitional issue for INET types"
},
{
"msg_contents": "On Thu, 17 Feb 2000, Tom Lane wrote:\n\n> \t'10.1.2.3/8'::inet '10.0.0.0/32'::cidr\n> The old code believes that the first of these is greater, while my\n> revised code thinks the second is greater.\n\nI think we can flip a three-sided coin here:\n\n1) '10.1.2.3/8'::inet is not a valid inet input value, sort of in the same\nway as 10.5 is not a valid integer.\n\n2) You coerce '10.1.2.3/8'::inet to essentially '10.0.0.0/8'::inet on\ninput. (In some parts, implicit data mangling that loses information is\nnot considered good practice.)\n\n3) You can't compare inet and cidr because they're two different (albeit\nsimilar) things. If you want to compare them you have to explicitly cast\ninet to cidr or vice versa according to 1) or 2).\n\nIn any case I believe you revised code has a very good point.\n\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Thu, 17 Feb 2000 13:10:49 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Definitional issue for INET types"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> 3) You can't compare inet and cidr because they're two different (albeit\n> similar) things. If you want to compare them you have to explicitly cast\n> inet to cidr or vice versa according to 1) or 2).\n\nThis might in fact be the right answer --- maybe CIDR and INET should\nhave different comparison semantics. Right now the two types seem to\nshare exactly the same operators, which makes me wonder why we have\nboth.\n\nI don't suppose Paul Vixie is still reading this list. Someone should\ncontact him and ask where we went wrong. Who was our point man on the\nnetwork types to begin with?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 17 Feb 2000 10:41:38 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Definitional issue for INET types "
},
{
"msg_contents": "Sevo Stille <[email protected]> writes:\n> I'll see whether I can figure out something consistent for the inet data\n> type. As it is right now, we might just as well drop it - it is both\n> synonymous to cidr and to a cidr /32 host, which simply can't be.\n> Personally, I don't think we would lose any functionality if we drop it,\n> as long as we have functions that return classed network structures like\n> the base address and a networks subnettable range. \n\nHmm. One way to throw the question into stark relief is to ask:\nIs '10/8' *equal to* '10.0.0.0/32', in the sense that unique indexes\nand operations like SELECT DISTINCT should consider them identical?\nDoes your answer differ depending on whether you assume the values\nare of CIDR or INET type?\n\nOnce we have decided if they are equal or not, we can certainly manage\nto come up with a sort ordering for the cases that are not equal.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 17 Feb 2000 10:49:19 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Definitional issue for INET types "
},
{
"msg_contents": "Tom Lane wrote:\n \n> Hmm. One way to throw the question into stark relief is to ask:\n> Is '10/8' *equal to* '10.0.0.0/32', in the sense that unique indexes\n> and operations like SELECT DISTINCT should consider them identical?\n> Does your answer differ depending on whether you assume the values\n> are of CIDR or INET type?\n\nWell, in a CIDR context, they positively are different, '10.0.0.0/32' is\na host, and '10/8' is a network, and our application would positively\ntreat either entirely different. CIDR consistently works by\napply-mask-and-process. \n\nIn a INET context, the answer is not that easy, as net and mask have no\ndefined behaviour as a tuple. The mask will in some cases be a\nindependent entity, which presumably is why Paul Vixie made meaningless\nnet/mask combinations legal there. If INET is used to store e.g. a\nCisco wildcard value, the /xx part is meaningless - however, 10.1.2.3/8\nwould not be a shorthand for 10/8 then. \n\nAs far as ipmeter is concerned, we found out that INET is of no use for\nus - even if there are some strange things you might do with odd\nnet/mask patterns, few of them follow any easily defined paradigm.\nPersonally, I am all for dropping INET, or for defining it to be\nmaskless (which could be done by forcing /32 for it).\n\nSevo\n",
"msg_date": "Thu, 17 Feb 2000 17:11:28 +0100",
"msg_from": "Sevo Stille <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Definitional issue for INET types"
},
{
"msg_contents": "Sevo Stille <[email protected]> writes:\n>> Hmm. One way to throw the question into stark relief is to ask:\n>> Is '10/8' *equal to* '10.0.0.0/32', in the sense that unique indexes\n>> and operations like SELECT DISTINCT should consider them identical?\n>> Does your answer differ depending on whether you assume the values\n>> are of CIDR or INET type?\n\n> Well, in a CIDR context, they positively are different, '10.0.0.0/32' is\n> a host, and '10/8' is a network, and our application would positively\n> treat either entirely different. CIDR consistently works by\n> apply-mask-and-process. \n\nOK. Now let's try you on this one: true or false?\n\t'10.1.2.3/8'::cidr = '10/8'::cidr\n\n(which was actually the example I meant to ask about above, but\ncarelessly cut-and-pasted the wrong values :-(.)\n\n> In a INET context, the answer is not that easy, as net and mask have no\n> defined behaviour as a tuple. The mask will in some cases be a\n> independent entity, which presumably is why Paul Vixie made meaningless\n> net/mask combinations legal there.\n\nI think that was the idea, all right, which would seem to suggest that\nwe ought to compare all the bits of the IP addresses, and then compare\nthe bitcounts (since the bitcount is just a compact representation of a\nlogically separate netmask, and has nothing to do with the validity of\nthe IP address). But I'm not sure whether this holds good for CIDR too.\n\n> Personally, I am all for dropping INET, or for defining it to be\n> maskless (which could be done by forcing /32 for it).\n\nIf you don't need a mask, leave out the /32 --- or even add a column\nconstraint requiring it to be 32. I don't see that it's necessary\nto tell other people that they can't have a mask. CIDR may be a\ndifferent story however.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 17 Feb 2000 11:38:01 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Definitional issue for INET types "
}
] |
[
{
"msg_contents": "Some people have indicated that they don't like how psql currently handles\nControl-C if no query is in progress. I consider the behaviour of the\nshells desirable but, quite frankly, I don't know how to do it.\n\nFor some reason a readline'd session always wants me to press one more key\nafter Control-C before getting back to a clean prompt. A much bigger\nproblem is that if I don't use/have readline then I don't see a way to\npreempt the fgets() call.\n\nSo unless someone has a hint or wants to look at it, I could offer\nignoring the signal altogether in interactive mode, and perhaps make it\nstop scripts in the other case. (Leaving the query cancelling as is, of\ncourse.)\n\nActually, shouldn't a Ctrl-C in a script cancel the query *and* stop the\nscript at all times?\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Thu, 17 Feb 2000 18:05:31 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "psql and Control-C"
},
{
"msg_contents": "Charset ISO-8859-1 unsupported, filtering to ASCII...]\n> Some people have indicated that they don't like how psql currently handles\n> Control-C if no query is in progress. I consider the behaviour of the\n> shells desirable but, quite frankly, I don't know how to do it.\n> \n> For some reason a readline'd session always wants me to press one more key\n> after Control-C before getting back to a clean prompt. A much bigger\n> problem is that if I don't use/have readline then I don't see a way to\n> preempt the fgets() call.\n> \n> So unless someone has a hint or wants to look at it, I could offer\n> ignoring the signal altogether in interactive mode, and perhaps make it\n> stop scripts in the other case. (Leaving the query cancelling as is, of\n> course.)\n> \n> Actually, shouldn't a Ctrl-C in a script cancel the query *and* stop the\n> script at all times?\n\nSeems we can just ignore ^C if a query is not being run. Is that OK\nwith everyone. Looks easy to do.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026",
"msg_date": "Thu, 17 Feb 2000 15:25:22 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql and Control-C"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> \n> Some people have indicated that they don't like how psql currently handles\n> Control-C if no query is in progress. I consider the behaviour of the\n> shells desirable but, quite frankly, I don't know how to do it.\n\nThe typical way to do this sort of thing is to longjmp back to the main\nloop. And I think if you look at sig.c in bash, this is probably what\nthey are doing.\n\n> Actually, shouldn't a Ctrl-C in a script cancel the query *and* stop the\n> script at all times?\n\nYes.\n",
"msg_date": "Fri, 18 Feb 2000 10:34:09 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql and Control-C"
},
{
"msg_contents": "\n> Seems we can just ignore ^C if a query is not being run. Is that OK\n> with everyone. Looks easy to do.\n\nIt would be a trap for new users (some old ones too) who may not know\nhow to escape. longjmp should be easy too, if it works.\n",
"msg_date": "Fri, 18 Feb 2000 11:01:41 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql and Control-C"
},
{
"msg_contents": "> \n> > Seems we can just ignore ^C if a query is not being run. Is that OK\n> > with everyone. Looks easy to do.\n> \n> It would be a trap for new users (some old ones too) who may not know\n> how to escape. longjmp should be easy too, if it works.\n\nIf they don't know ^D exits, they really are going to have trouble with\nUnix.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 17 Feb 2000 19:11:48 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql and Control-C"
},
{
"msg_contents": "On 2000-02-18, Chris Bitmead mentioned:\n\n> The typical way to do this sort of thing is to longjmp back to the main\n> loop. And I think if you look at sig.c in bash, this is probably what\n> they are doing.\n\nDon't wanna look at that GPL'd code ... ;)\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Fri, 18 Feb 2000 01:25:02 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] psql and Control-C"
},
{
"msg_contents": "* Chris Bitmead <[email protected]> [000217 16:20] wrote:\n> Peter Eisentraut wrote:\n> > \n> > Some people have indicated that they don't like how psql currently handles\n> > Control-C if no query is in progress. I consider the behaviour of the\n> > shells desirable but, quite frankly, I don't know how to do it.\n> \n> The typical way to do this sort of thing is to longjmp back to the main\n> loop. And I think if you look at sig.c in bash, this is probably what\n> they are doing.\n> \n> > Actually, shouldn't a Ctrl-C in a script cancel the query *and* stop the\n> > script at all times?\n> \n> Yes.\n\nWhoa whoa... It's a bit more complicated than you think, there's a lot\nof state that gets put into libpq, i guess the simplest way would be\nto do so and also cancel the transaction, but a simple longjump won't\nwork reliably and you'd also have to take very careful steps to make\nsure you handle everything _just right_ from a signal context.\n\nI'd rather have the inconvience of psql exiting than a not entirely \nthought out mechanism for doing this properly potentially having psql\nrun amok on my database. :)\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n",
"msg_date": "Thu, 17 Feb 2000 16:33:25 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql and Control-C"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> >\n> > > Seems we can just ignore ^C if a query is not being run. Is that OK\n> > > with everyone. Looks easy to do.\n> >\n> > It would be a trap for new users (some old ones too) who may not know\n> > how to escape. longjmp should be easy too, if it works.\n> \n> If they don't know ^D exits, they really are going to have trouble with\n> Unix.\n\nI mean escape from a half-typed in query, not escape from psql\naltogether.\n",
"msg_date": "Fri, 18 Feb 2000 12:19:51 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql and Control-C"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> \n> On 2000-02-18, Chris Bitmead mentioned:\n> \n> > The typical way to do this sort of thing is to longjmp back to the main\n> > loop. And I think if you look at sig.c in bash, this is probably what\n> > they are doing.\n> \n> Don't wanna look at that GPL'd code ... ;)\n\nIf you don't know how to interact with readline, I think you're gonna\nhave to look at some GPL code ;)\n",
"msg_date": "Fri, 18 Feb 2000 12:20:55 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql and Control-C"
},
{
"msg_contents": "Alfred Perlstein wrote:\n\n> Whoa whoa... It's a bit more complicated than you think, there's a lot\n> of state that gets put into libpq,\n\nI don't think this has anything to do with libpq. This has got to do\nwith\npsql's reading of commands _before_ they get shoved into libpq. As such\nit shouldn't be that dangerous.\n\n> i guess the simplest way would be\n> to do so and also cancel the transaction, but a simple longjump won't\n> work reliably and you'd also have to take very careful steps to make\n> sure you handle everything _just right_ from a signal context.\n> \n> I'd rather have the inconvience of psql exiting than a not entirely\n> thought out mechanism for doing this properly potentially having psql\n> run amok on my database. :)\n> \n> --\n> -Alfred Perlstein - [[email protected]|[email protected]]\n> \n> ************\n",
"msg_date": "Fri, 18 Feb 2000 12:23:25 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql and Control-C"
},
{
"msg_contents": "* Chris Bitmead <[email protected]> [000217 17:56] wrote:\n> Peter Eisentraut wrote:\n> > \n> > On 2000-02-18, Chris Bitmead mentioned:\n> > \n> > > The typical way to do this sort of thing is to longjmp back to the main\n> > > loop. And I think if you look at sig.c in bash, this is probably what\n> > > they are doing.\n> > \n> > Don't wanna look at that GPL'd code ... ;)\n> \n> If you don't know how to interact with readline, I think you're gonna\n> have to look at some GPL code ;)\n\nActually, FreeBSD(*) has 'libedit' which is pretty nice, it could\nbe made to work under other systems and has a nice unencumbered\nlicense. :)\n\n-Alfred\n\n(*)\n * Copyright (c) 1992, 1993\n * The Regents of the University of California. All rights reserved.\n *\n * This code is derived from software contributed to Berkeley by\n * Christos Zoulas of Cornell University.\n(*)\n",
"msg_date": "Thu, 17 Feb 2000 18:21:28 -0800",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql and Control-C"
},
{
"msg_contents": "Alfred Perlstein wrote:\n>\n> Actually, FreeBSD(*) has 'libedit' which is pretty nice, it could\n> be made to work under other systems and has a nice unencumbered\n> license. :)\n\nAs an option - fine, but most things these days use readline, and\nI would want my ~/.inputrc to work with all apps the same way.\n",
"msg_date": "Fri, 18 Feb 2000 16:09:29 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql and Control-C"
},
{
"msg_contents": "Chris Bitmead <[email protected]> writes:\n> Alfred Perlstein wrote:\n>> Whoa whoa... It's a bit more complicated than you think, there's a lot\n>> of state that gets put into libpq,\n\n> I don't think this has anything to do with libpq. This has got to do\n> with psql's reading of commands _before_ they get shoved into\n> libpq. As such it shouldn't be that dangerous.\n\nChris is right that this is not a libpq issue. psql would be set up\nso that the signal-catching routine either issues a cancel request\n(if a query is in progress) or attempts a longjmp (if not). If\nproperly implemented, there is zero chance of screwing up libpq.\nHowever, there is some chance of screwing up libreadline --- I don't\nknow enough about its innards to know if it can survive losing\ncontrol at a random point. If we can confine the region where longjmp\nwill be attempted to just the point where the program is blocked\nwaiting for user input, it'd probably be pretty safe.\n\nSomething I've noticed that might or might not be related to this\nissue is that if psql exits due to backend crash, it fails to save the\nlast few lines of command history into the history file. Not closing\ndown libreadline, maybe?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 18 Feb 2000 00:12:43 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql and Control-C "
},
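A sketch of the two-mode handler Tom describes, with PQrequestCancel() doing the query-cancel half. The flag and jump buffer are illustrative names, and all error handling is omitted:

#include <setjmp.h>
#include <signal.h>
#include <stdio.h>
#include "libpq-fe.h"

static PGconn *conn;
static volatile sig_atomic_t query_in_progress = 0;
static sigjmp_buf main_loop_env;

static void
sigint_handler(int signo)
{
    if (query_in_progress)
        PQrequestCancel(conn);          /* backend aborts the running query */
    else
        siglongjmp(main_loop_env, 1);   /* discard the half-typed input */
}

int
main(void)
{
    conn = PQconnectdb("");             /* default connection parameters */
    signal(SIGINT, sigint_handler);

    if (sigsetjmp(main_loop_env, 1) != 0)
        printf("\nCancelled.\n");

    query_in_progress = 1;              /* set by the mainline around PQexec */
    PQclear(PQexec(conn, "SELECT 1"));
    query_in_progress = 0;

    PQfinish(conn);
    return 0;
}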
{
"msg_contents": "Tom Lane wrote:\n\n> Something I've noticed that might or might not be related to this\n> issue is that if psql exits due to backend crash, it fails to save the\n> last few lines of command history into the history file. Not closing\n> down libreadline, maybe?\n\nThis is actually the gnu history library. Currently psql issues a save\ncommand when you exit or when you enter \\s. I'm not sure why gnu\nhistory appears to require you to write the whole history at once\nrather than appending to a file but anyway...\n\nA better way of doing it might be to have psql's finishInput() passed\nto atexit(), just to make sure it's always called, and have\nsignal handlers in place for *every* signal whose handlers call exit(),\nso finishing up is done neatly.\n\nIt might even be worthwhile to write the history after every command.\n\nBTW, is it necessary to print \"\\q\" when you hit ctrl-d ? Seems just a\nlittle tacky to me.\n",
"msg_date": "Fri, 18 Feb 2000 17:06:48 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql and Control-C"
},
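A sketch of the atexit() idea against GNU readline/history; the history-file path is hard-wired here purely for illustration (psql would build it from $HOME):

#include <stdlib.h>
#include <signal.h>
#include <readline/readline.h>
#include <readline/history.h>

static const char *histfile = "/tmp/psql_history";  /* illustrative path */

static void
finish_input(void)
{
    write_history(histfile);    /* GNU history saves the whole list at once */
}

static void
die(int signo)
{
    exit(1);    /* not strictly signal-safe, but runs the atexit hooks */
}

int
main(void)
{
    char *line;

    using_history();
    atexit(finish_input);       /* runs on any exit() or return from main */
    signal(SIGTERM, die);       /* likewise for every fatal signal we catch */

    while ((line = readline("=> ")) != NULL)
    {
        if (*line)
            add_history(line);
        free(line);
    }
    return 0;
}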
{
"msg_contents": "On Fri, 18 Feb 2000, Chris Bitmead wrote:\n\n> BTW, is it necessary to print \"\\q\" when you hit ctrl-d ? Seems just a\n> little tacky to me.\n\nThat's similar to what bash and tcsh do. If you don't like it, I'm not\nmarried to it.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Fri, 18 Feb 2000 15:16:08 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql and Control-C"
},
{
"msg_contents": "On Fri, 18 Feb 2000, Tom Lane wrote:\n\n> However, there is some chance of screwing up libreadline --- I don't\n> know enough about its innards to know if it can survive losing\n> control at a random point. If we can confine the region where longjmp\n> will be attempted to just the point where the program is blocked\n> waiting for user input, it'd probably be pretty safe.\n\nReadline has an official way to preempt is, namely setting rl_done to\nnon-zero. I can take a look how it copes with a longjmp from a signal\nhandler, but I wouldn't set my hopes too high.\n\n> Something I've noticed that might or might not be related to this\n> issue is that if psql exits due to backend crash, it fails to save the\n> last few lines of command history into the history file. Not closing\n> down libreadline, maybe?\n\nAs someone else pointed out, I might as well move write_history() into an\natexit hook.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Fri, 18 Feb 2000 15:19:07 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql and Control-C "
},
{
"msg_contents": "On Thu, 17 Feb 2000, Alfred Perlstein wrote:\n\n> Actually, FreeBSD(*) has 'libedit' which is pretty nice, it could\n> be made to work under other systems and has a nice unencumbered\n> license. :)\n\nSomeone else mentioned this to me when I started on this. However, there's\nnot even a common version among the *BSD's, let alone ports to other\nplatforms, so I don't see this happening anytime soon.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Fri, 18 Feb 2000 15:20:22 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql and Control-C"
},
{
"msg_contents": "On Fri, 18 Feb 2000, Chris Bitmead wrote:\n\n> If you don't know how to interact with readline, I think you're gonna\n> have to look at some GPL code ;)\n\nThere's a difference between copying ideas and code from bash's sig.c (not\ngood) and snooping around in code to see how to interface with it (why\nnot).\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Fri, 18 Feb 2000 15:21:52 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql and Control-C"
},
{
"msg_contents": "On Fri, 18 Feb 2000, Chris Bitmead wrote:\n\n> I mean escape from a half-typed in query, not escape from psql\n> altogether.\n\nIf you don't read the documentation before running a program, I can't help\nyou. Also, the welcome message points out both \\? and \\q.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Fri, 18 Feb 2000 15:23:05 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql and Control-C"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On Fri, 18 Feb 2000, Tom Lane wrote:\n>> However, there is some chance of screwing up libreadline --- I don't\n>> know enough about its innards to know if it can survive losing\n>> control at a random point. If we can confine the region where longjmp\n>> will be attempted to just the point where the program is blocked\n>> waiting for user input, it'd probably be pretty safe.\n\n> Readline has an official way to preempt is, namely setting rl_done to\n> non-zero. I can take a look how it copes with a longjmp from a signal\n> handler, but I wouldn't set my hopes too high.\n\nOh? Maybe we don't *need* a longjmp: maybe the signal handler just\nneeds to do either send-a-cancel or set-rl_done depending on the\ncurrent state of a flag that's set by the main line code.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 18 Feb 2000 10:23:49 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql and Control-C "
},
{
"msg_contents": "On 2000-02-18, Tom Lane mentioned:\n\n> > Readline has an official way to preempt is, namely setting rl_done to\n> > non-zero. I can take a look how it copes with a longjmp from a signal\n> > handler, but I wouldn't set my hopes too high.\n> \n> Oh? Maybe we don't *need* a longjmp: maybe the signal handler just\n> needs to do either send-a-cancel or set-rl_done depending on the\n> current state of a flag that's set by the main line code.\n\nI tried that but it doesn't work. On further thought I believe that the\npurpose of rl_done is for readline extensions, so that, for example, a\nsemicolon handler can scan the current line and then immediately return as\nif you had pressed enter. When idle, readline hangs on read(), so setting\nsome variable isn't going to interrupt that.\n\nThe longjmp seems to work but I need to test it more. I'm concerned how it\nwill work across platforms, esp. Windows (being a POSIX thing). Should\nthere be a configure test or can I assume it on every non-WIN32 system?\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Sat, 19 Feb 2000 15:12:36 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] psql and Control-C "
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> The longjmp seems to work but I need to test it more. I'm concerned how it\n> will work across platforms, esp. Windows (being a POSIX thing). Should\n> there be a configure test or can I assume it on every non-WIN32 system?\n\nlongjmp predates POSIX by an eon or two. I doubt you need to worry about\nit on Unix platforms. (Since we utterly rely on it in the backend,\nPostgres wouldn't ever work on a platform without it anyway.)\n\nLess sure about the situation on Windows or Mac, but configure isn't\ngoing to help for those anyway.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 19 Feb 2000 12:01:27 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql and Control-C "
},
{
"msg_contents": "Peter Eisentraut wrote:\n> \n> On Fri, 18 Feb 2000, Chris Bitmead wrote:\n> \n> > I mean escape from a half-typed in query, not escape from psql\n> > altogether.\n> \n> If you don't read the documentation before running a program, I can't help\n> you. Also, the welcome message points out both \\? and \\q.\n\nI think it takes quite a while to realise that you can intersperse a\nbackslash command anywhere. like..\n\nselect * from\n\\q\n\nwhich is a bit different to shell where\nwhile :\nexit\n\nwill not have the desired effect.\n",
"msg_date": "Mon, 21 Feb 2000 10:10:08 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] psql and Control-C"
},
{
"msg_contents": "Marc called me today to discuss ALTER TABLE DROP COLUMN options.\n\nOur new idea is to do the ALTER TABLE DROP COLUMN in place in the\nexisting table, rather than make a new one and try and preserve all the\ntable attributes.\n\nYou can exclusively lock the table, then do a heap_getnext() scan over\nthe entire table, remove the dropped column, do a heap_insert(), then a\nheap_delete() on the current tuple, making sure to skip over the tuples\ninserted by the current transaction. When completed, remove the column\nfrom pg_attribute, mark the transaction as committed (if desired), and\nrun vacuum over the table to remove the deleted rows.\n\nSeems this would be a very clean implementation for 7.1. It also would\nbe roll-backable in cases where the operation failed half-way during the\nprocess.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 25 Feb 2000 23:12:26 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "ALTER TABLE DROP COLUMN"
},
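To make the skip-our-own-insertions detail concrete, here is a toy model of the proposed rewrite in plain C. The array and the xmin field merely stand in for the heap and transaction visibility; none of this is the backend's actual code:

#include <stdio.h>

#define NCOLS     3
#define MAXTUPLES 16

/* Toy tuple: xmin records which "transaction" wrote it. */
typedef struct
{
    int xmin;
    int vals[NCOLS];
    int valid;                  /* 0 = deleted, awaiting vacuum */
} Tuple;

static Tuple table[MAXTUPLES] = {
    {1, {10, 11, 12}, 1},
    {1, {20, 21, 22}, 1},
    {1, {30, 31, 32}, 1},
};
static int ntuples = 3;

int
main(void)
{
    int cur_xact = 2;           /* the ALTER TABLE's transaction */
    int dropcol = 1;            /* column being dropped */

    /* Scan the whole "heap"; it grows as we append rewritten tuples,
     * so skipping our own insertions is what terminates the loop. */
    for (int i = 0; i < ntuples && ntuples < MAXTUPLES; i++)
    {
        if (table[i].xmin == cur_xact)
            continue;                       /* inserted by us: skip */

        Tuple t = table[i];
        t.xmin = cur_xact;
        for (int c = dropcol; c < NCOLS - 1; c++)
            t.vals[c] = t.vals[c + 1];      /* squeeze out the column */
        table[ntuples++] = t;               /* "heap_insert" */
        table[i].valid = 0;                 /* "heap_delete" of old version */
    }

    /* A later VACUUM would physically reclaim the !valid rows. */
    for (int i = 0; i < ntuples; i++)
        if (table[i].valid)
            printf("%d %d\n", table[i].vals[0], table[i].vals[1]);
    return 0;
}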
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> You can exclusively lock the table, then do a heap_getnext() scan over\n> the entire table, remove the dropped column, do a heap_insert(), then a\n> heap_delete() on the current tuple, making sure to skip over the tuples\n> inserted by the current transaction. When completed, remove the column\n> from pg_attribute, mark the transaction as committed (if desired), and\n> run vacuum over the table to remove the deleted rows.\n\nHmm, that would work --- the new tuples commit at the same instant that\nthe schema updates commit, so it should be correct. You have the 2x\ndisk usage problem, but there's no way around that without losing\nrollback ability.\n\nA potentially tricky bit will be persuading the tuple-reading and tuple-\nwriting subroutines to pay attention to different versions of the tuple\nstructure for the same table. I haven't looked to see if this will be\ndifficult or not. If you can pass the TupleDesc explicitly then it\nshouldn't be a problem.\n\nI'd suggest that the cleanup vacuum *not* be an automatic part of\nthe operation; just recommend that people do it ASAP after dropping\na column. Consider needing to drop several columns...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 26 Feb 2000 01:01:33 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ALTER TABLE DROP COLUMN "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Bruce Momjian <[email protected]> writes:\n> > You can exclusively lock the table, then do a heap_getnext() scan over\n> > the entire table, remove the dropped column, do a heap_insert(), then a\n> > heap_delete() on the current tuple, making sure to skip over the tuples\n> > inserted by the current transaction. When completed, remove the column\n> > from pg_attribute, mark the transaction as committed (if desired), and\n> > run vacuum over the table to remove the deleted rows.\n> \n> Hmm, that would work --- the new tuples commit at the same instant that\n> the schema updates commit, so it should be correct. You have the 2x\n> disk usage problem, but there's no way around that without losing\n> rollback ability.\n> \n> A potentially tricky bit will be persuading the tuple-reading and tuple-\n> writing subroutines to pay attention to different versions of the tuple\n> structure for the same table. I haven't looked to see if this will be\n> difficult or not. If you can pass the TupleDesc explicitly then it\n> shouldn't be a problem.\n> \n> I'd suggest that the cleanup vacuum *not* be an automatic part of\n> the operation; just recommend that people do it ASAP after dropping\n> a column. Consider needing to drop several columns...\n\nDoes SQL92 syntax allow dropping several columns, i.e.\n\nALTER TABLE mytable DROP COLUMN col1,col5,col6;\n\nIf it does, it would be very desirable to implement it to avoid the need \nfor vacuum between each DROP in order to have _only_ 2X disk usage.\n\n-----------\nHannu\n",
"msg_date": "Sun, 27 Feb 2000 20:06:59 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "At 08:06 PM 2/27/00 +0200, Hannu Krosing wrote:\n\n>Does SQL92 syntax allow dropping several columns, i.e.\n>\n>ALTER TABLE mytable DROP COLUMN col1,col5,col6;\n\nMy reading of the syntax says no, it is not allowed.\n\n>If it does, it would be very desirable to implement it to avoid the need \n>for vacuum between each DROP in order to have _only_ 2X disk usage.\n\nHowever, implementing useful extensions to the standard in an\nupward-compatible way doesn't bother me.\n\nI'm not fond of language implementations that are full of gratuitous\nextensions, but when extensions address real shortcomings in a standard\nor intersect with a particular implementation in a useful way, then\nit makes sense to add them. In this case, you're asking for an\nextension that's useful because Postgres doesn't reclaim storage when\na tuple's deleted, but only when the table's vacuumed. Seems fair\nenough.\n\nWhether or not it would be hard to implement is another matter...\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Sun, 27 Feb 2000 10:17:52 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> You can exclusively lock the table, then do a heap_getnext() scan over\n> the entire table, remove the dropped column, do a heap_insert(), then a\n> heap_delete() on the current tuple,\n\nWow, that almost seems to easy to be true. I never thought that having\ntuples of different structures in the table at the same time would be\npossible. If so then I don't see a reason why this would be too hard to\ndo.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Mon, 28 Feb 2000 00:54:34 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "[Charset ISO-8859-1 unsupported, filtering to ASCII...]\n> Bruce Momjian writes:\n> \n> > You can exclusively lock the table, then do a heap_getnext() scan over\n> > the entire table, remove the dropped column, do a heap_insert(), then a\n> > heap_delete() on the current tuple,\n> \n> Wow, that almost seems to easy to be true. I never thought that having\n> tuples of different structures in the table at the same time would be\n> possible. If so then I don't see a reason why this would be too hard to\n> do.\n\nIf the transaction is not committed, I don't think anything actually\nreads the tuple columns, so you are safe.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 27 Feb 2000 19:17:08 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Bruce Momjian\n>\n> [Charset ISO-8859-1 unsupported, filtering to ASCII...]\n> > Bruce Momjian writes:\n> >\n> > > You can exclusively lock the table, then do a heap_getnext() scan over\n> > > the entire table, remove the dropped column, do a\n> heap_insert(), then a\n> > > heap_delete() on the current tuple,\n> >\n> > Wow, that almost seems to easy to be true. I never thought that having\n> > tuples of different structures in the table at the same time would be\n> > possible. If so then I don't see a reason why this would be too hard to\n> > do.\n>\n> If the transaction is not committed, I don't think anything actually\n> reads the tuple columns, so you are safe.\n>\n\nHmm,tuples of multiple version in a table ?\nThis is neither clean nor easy for me.\nThere's no such stuff which takes the case into account,AFAIK.\n\nSeems no one but me object to it. I'm tired of this issue and it's\npainful for me to continue discussion further in my poor English.\nI may be able to provide another implementation on trial and it\nmay be easier than only objecting to your proposal.\nIs it OK ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Mon, 28 Feb 2000 11:00:35 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Re: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> Hmm,tuples of multiple version in a table ?\n> This is neither clean nor easy for me.\n\nI'm worried about it too. I think it could maybe be made to work,\nbut it seems fragile.\n\n> I may be able to provide another implementation on trial and it\n> may be easier than only objecting to your proposal.\n\nIf you have a better idea, let's hear it!\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 27 Feb 2000 21:30:14 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: ALTER TABLE DROP COLUMN "
},
{
"msg_contents": "> > > Wow, that almost seems to easy to be true. I never thought that having\n> > > tuples of different structures in the table at the same time would be\n> > > possible. If so then I don't see a reason why this would be too hard to\n> > > do.\n> >\n> > If the transaction is not committed, I don't think anything actually\n> > reads the tuple columns, so you are safe.\n> >\n> \n> Hmm,tuples of multiple version in a table ?\n> This is neither clean nor easy for me.\n> There's no such stuff which takes the case into account,AFAIK.\n> \n> Seems no one but me object to it. I'm tired of this issue and it's\n> painful for me to continue discussion further in my poor English.\n> I may be able to provide another implementation on trial and it\n> may be easier than only objecting to your proposal.\n> Is it OK ?\n\nSure, whatever you want. No one is going to start coding anything for a\nwhile. Seemed like a clean solution with no rename() problems.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 27 Feb 2000 21:34:55 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> > Hmm,tuples of multiple version in a table ?\n> > This is neither clean nor easy for me.\n> \n> I'm worried about it too. I think it could maybe be made to work,\n> but it seems fragile.\n> \n> > I may be able to provide another implementation on trial and it\n> > may be easier than only objecting to your proposal.\n> \n> If you have a better idea, let's hear it!\n>\n\nI don't want a final implementation this time.\nWhat I want is to provide a quick hack for both others and me\nto judge whether this direction is good or not.\n\nMy idea is essentially an invisible column implementation.\nDROP COLUMN would change the target pg_attribute tuple\nas follows..\n\t\n\tattnum -> an offset - attnum;\n\tatttypid -> 0\n\nWe would be able to see where to change by tracking error/\ncrashes caused by this change.\n\nI would also change attname to '*already dropped %d' for\nexamle to avoid duplicate attname. \n \nRegards.\n\nHiroshi Inoue\[email protected] \n",
"msg_date": "Mon, 28 Feb 2000 12:21:36 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Re: ALTER TABLE DROP COLUMN "
},
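Spelled out as code, the catalog change sketched above would amount to something like the following. The struct is only a stand-in for pg_attribute's row layout, and the offset convention is the proposal's, not anything the backend does today:

#include <stdio.h>

#define NAMEDATALEN 32
#define DROPPED_OFFSET (-100)   /* assumed convention: below any real attnum */

/* Stand-in for the relevant pg_attribute fields. */
typedef struct
{
    char     attname[NAMEDATALEN];
    int      attnum;
    unsigned atttypid;
} PgAttributeRow;

static void
mark_column_dropped(PgAttributeRow *att, int seq)
{
    att->attnum = DROPPED_OFFSET - att->attnum; /* hide it from lookups */
    att->atttypid = 0;                          /* InvalidOid: no type */
    snprintf(att->attname, sizeof att->attname,
             "*already dropped %d", seq);       /* keep attname unique */
}

int
main(void)
{
    PgAttributeRow col = {"price", 2, 23};      /* an int4 column, attnum 2 */

    mark_column_dropped(&col, 1);
    printf("%s attnum=%d atttypid=%u\n", col.attname, col.attnum, col.atttypid);
    return 0;
}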
{
"msg_contents": "On Mon, 28 Feb 2000, Hiroshi Inoue wrote:\n\n> > -----Original Message-----\n> > From: Tom Lane [mailto:[email protected]]\n> > \n> > \"Hiroshi Inoue\" <[email protected]> writes:\n> > > Hmm,tuples of multiple version in a table ?\n> > > This is neither clean nor easy for me.\n> > \n> > I'm worried about it too. I think it could maybe be made to work,\n> > but it seems fragile.\n> > \n> > > I may be able to provide another implementation on trial and it\n> > > may be easier than only objecting to your proposal.\n> > \n> > If you have a better idea, let's hear it!\n> >\n> \n> I don't want a final implementation this time.\n> What I want is to provide a quick hack for both others and me\n> to judge whether this direction is good or not.\n> \n> My idea is essentially an invisible column implementation.\n> DROP COLUMN would change the target pg_attribute tuple\n> as follows..\n> \t\n> \tattnum -> an offset - attnum;\n> \tatttypid -> 0\n> \n> We would be able to see where to change by tracking error/\n> crashes caused by this change.\n> \n> I would also change attname to '*already dropped %d' for\n> examle to avoid duplicate attname. \n\nOkay, just curious here, but ... what you are proposing *sounds* to me\nlike half-way to what started this thread. (*Please* correct me if I'm\nwrong) ...\n\nEssentially, in your proposal, when you drop a column, all subsequent\ntuples inserted/updated would have ... that one column missing? So,\ninstead of doing a massive sweep through the table and removing that\ncolumn, only do it when an insert/update happens? \n\nBasically, eliminate the requirement to re-write every tuples, only those\nthat have activity?\n\n\n",
"msg_date": "Sun, 27 Feb 2000 23:40:11 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Re: ALTER TABLE DROP COLUMN "
},
{
"msg_contents": "> > I would also change attname to '*already dropped %d' for\n> > examle to avoid duplicate attname. \n> \n> Okay, just curious here, but ... what you are proposing *sounds* to me\n> like half-way to what started this thread. (*Please* correct me if I'm\n> wrong) ...\n> \n> Essentially, in your proposal, when you drop a column, all subsequent\n> tuples inserted/updated would have ... that one column missing? So,\n> instead of doing a massive sweep through the table and removing that\n> column, only do it when an insert/update happens? \n> \n> Basically, eliminate the requirement to re-write every tuples, only those\n> that have activity?\n\nAnd I think the problem was that there was too much code to modify to\nallow this.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 27 Feb 2000 22:52:30 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "> -----Original Message-----\n> From: The Hermit Hacker [mailto:[email protected]]\n> \n> On Mon, 28 Feb 2000, Hiroshi Inoue wrote:\n> \n> > > -----Original Message-----\n> > > From: Tom Lane [mailto:[email protected]]\n> > > \n> > > \"Hiroshi Inoue\" <[email protected]> writes:\n> > > > Hmm,tuples of multiple version in a table ?\n> > > > This is neither clean nor easy for me.\n> > > \n> > > I'm worried about it too. I think it could maybe be made to work,\n> > > but it seems fragile.\n> > > \n> > > > I may be able to provide another implementation on trial and it\n> > > > may be easier than only objecting to your proposal.\n> > > \n> > > If you have a better idea, let's hear it!\n> > >\n> >\n\n[snip]\n \n> \n> Okay, just curious here, but ... what you are proposing *sounds* to me\n> like half-way to what started this thread. (*Please* correct me if I'm\n> wrong) ...\n>\n\nMy proposal is essentially same as what I proposed once in this thread.\nI don't think DROP COLUMN feature is very important.\nDROP/ADD CONSTRAINT feature seems much more important. \nWhy do you want a heavy iplementation like vacuum after 2x disk\nusage for this feature ?\nMy implementation won't touch the target table at all and would never\nremove dropped columns practically. It would only make them invisible\nand NULL would be set for newly insert/updated columns.\n\nIf you want a really clean table for DROP TABLE command,my\nproposal is useless.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n \n",
"msg_date": "Mon, 28 Feb 2000 13:25:06 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Re: ALTER TABLE DROP COLUMN "
},
{
"msg_contents": "At 12:21 PM 2/28/00 +0900, Hiroshi Inoue wrote:\n\n>My idea is essentially an invisible column implementation.\n>DROP COLUMN would change the target pg_attribute tuple\n>as follows..\n\nI don't see such a solution as being mutually exclusive with\nthe other one on the table.\n\nRemember ... Oracle provides both. I suspect that they did so\nbecause they were under customer pressure to provide a \"real\"\ncolumn drop and a \"fast\" (and non-2x tablesize!) solution. So\nthey did both. Also keep in mind that being able to drop a\ncolumn in Oracle is a year 1999 feature ... and both are provided.\nMore evidence of pressure from two points of view.\n\nOf course, PG suffers because the \"real\" column drop is a 2x\nspace solution, so the \"invisibility\" approach may more frequently\nbe desired.\n\nStill... as time goes on and PG gets adopted by more and more \nserious, large-scale users (which we all are working towards, \nright?) I suspect that each camp will want to be served.\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Sun, 27 Feb 2000 21:26:30 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Re: ALTER TABLE DROP COLUMN "
},
{
"msg_contents": "At 11:40 PM 2/27/00 -0400, The Hermit Hacker wrote:\n\n>Okay, just curious here, but ... what you are proposing *sounds* to me\n>like half-way to what started this thread. (*Please* correct me if I'm\n>wrong) ...\n>\n>Essentially, in your proposal, when you drop a column, all subsequent\n>tuples inserted/updated would have ... that one column missing? So,\n>instead of doing a massive sweep through the table and removing that\n>column, only do it when an insert/update happens? \n>\n>Basically, eliminate the requirement to re-write every tuples, only those\n>that have activity?\n\nYes, this was one of the ideas that cropped up in previous discussion.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Sun, 27 Feb 2000 21:30:06 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Re: ALTER TABLE DROP COLUMN "
},
{
"msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n> \n> > > I would also change attname to '*already dropped %d' for\n> > > examle to avoid duplicate attname. \n> > \n> > Okay, just curious here, but ... what you are proposing *sounds* to me\n> > like half-way to what started this thread. (*Please* correct me if I'm\n> > wrong) ...\n> > \n> > Essentially, in your proposal, when you drop a column, all subsequent\n> > tuples inserted/updated would have ... that one column missing? So,\n> > instead of doing a massive sweep through the table and removing that\n> > column, only do it when an insert/update happens? \n> > \n> > Basically, eliminate the requirement to re-write every tuples, \n> only those\n> > that have activity?\n> \n> And I think the problem was that there was too much code to modify to\n> allow this.\n>\n\nSeems my trial would be useless.\nI would give up the trial.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Mon, 28 Feb 2000 16:16:43 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Re: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "Hiroshi Inoue wrote:\n\n> > > Wow, that almost seems to easy to be true. I never thought that having\n> > > tuples of different structures in the table at the same time would be\n> > > possible. If so then I don't see a reason why this would be too hard to\n> > > do.\n> >\n> > If the transaction is not committed, I don't think anything actually\n> > reads the tuple columns, so you are safe.\n> >\n>\n> Hmm,tuples of multiple version in a table ?\n> This is neither clean nor easy for me.\n> There's no such stuff which takes the case into account,AFAIK.\n>\n> Seems no one but me object to it. I'm tired of this issue and it's\n> painful for me to continue discussion further in my poor English.\n> I may be able to provide another implementation on trial and it\n> may be easier than only objecting to your proposal.\n> Is it OK ?\n\n Consider me on your side.\n\n For some good reasons, I added a\n\n ReferentialIntegritySnapshotOverride\n\n mode, that causes any tuple to be visible when fetched by\n CTID. Actually, there will be at least a read lock on them,\n so locking will prevent damage. But I can think of other\n situations where this kind of \"read whatever I want you to\"\n could be needed and would fail then.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 28 Feb 2000 08:56:22 +0100 (CET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "Don Baccus wrote:\n> \n> At 12:21 PM 2/28/00 +0900, Hiroshi Inoue wrote:\n> \n> >My idea is essentially an invisible column implementation.\n> >DROP COLUMN would change the target pg_attribute tuple\n> >as follows..\n> \n> I don't see such a solution as being mutually exclusive with\n> the other one on the table.\n\nVery true, and we will need the hidden columns feature for a clean \nimplementation of inheritance anyway.\n\n> Remember ... Oracle provides both. I suspect that they did so\n> because they were under customer pressure to provide a \"real\"\n> column drop and a \"fast\" (and non-2x tablesize!) solution. So\n> they did both. Also keep in mind that being able to drop a\n> column in Oracle is a year 1999 feature ... and both are provided.\n> More evidence of pressure from two points of view.\n> \n> Of course, PG suffers because the \"real\" column drop is a 2x\n> space solution, so the \"invisibility\" approach may more frequently\n> be desired.\n\n\"update t set id=id+1\" is also a 2x space, likely even more if \nreferential inheritance is used (and checked at the end of transaction)\n\nAnd my main use of DROP COLUMN will probably be during development, \nusually meaning small table sizes.\n\n------------\nHannu\n",
"msg_date": "Mon, 28 Feb 2000 11:45:52 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "On Mon, 28 Feb 2000, Hiroshi Inoue wrote:\n\n> > -----Original Message-----\n> > From: Bruce Momjian [mailto:[email protected]]\n> > \n> > > > I would also change attname to '*already dropped %d' for\n> > > > examle to avoid duplicate attname. \n> > > \n> > > Okay, just curious here, but ... what you are proposing *sounds* to me\n> > > like half-way to what started this thread. (*Please* correct me if I'm\n> > > wrong) ...\n> > > \n> > > Essentially, in your proposal, when you drop a column, all subsequent\n> > > tuples inserted/updated would have ... that one column missing? So,\n> > > instead of doing a massive sweep through the table and removing that\n> > > column, only do it when an insert/update happens? \n> > > \n> > > Basically, eliminate the requirement to re-write every tuples, \n> > only those\n> > > that have activity?\n> > \n> > And I think the problem was that there was too much code to modify to\n> > allow this.\n> >\n> \n> Seems my trial would be useless.\n> I would give up the trial.\n\nHiroshi ...\n\t\n\tBruce's comment was just an observation ... if it can be done\ncleanly, I would love to see a version that didn't involve 2x the disk\nspace ... I don't believe that a trial would be useless, I think that\nBruce's only concern/warning is that the amount of code modifications that\nwould have to be made in order to accomplish this *might* be larger then\nthe benefit resulting in doing it this way.\n\n\tIf you feel that this can be done more efficiently, *please*\nproceed with the trial ... \n\n\tI'm curious about one thing ... several ppl have mentioned that\nOracle does it \"both ways\" ... does anyone know the syntax they use so\nthat someone can do it one way or another?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 28 Feb 2000 09:17:59 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Re: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "At 11:45 AM 2/28/00 +0200, Hannu Krosing wrote:\n\n>\"update t set id=id+1\" is also a 2x space,\n\nAnd PG doesn't do it correctly anyway...\n\n> likely even more if \n>referential inheritance is used (and checked at the end of transaction)\n\nThe triggers are all queued so yes, take memory too. Even better,\nif \"MATCH <unspecified>\" or especially \"MATCH PARTIAL\" is used with\nmulti-column foreign keys containing nulls, it will be impressively slow!\nWe can call these the built-in coffee break feature when used on large\ntables.\n\n(it's inherently slow, not just slow because of the PG implementation)\n\n>And my main use of DROP COLUMN will probably be during development, \n>usually meaning small table sizes.\n\nWell, folks who use the web toolkit I've been porting for Oracle will\nhave a use for it, too, because the toolkit has been rapidly evolving\n(ArsDigita has about 70 employees at the moment, most of them programmers\nworking on the Oracle-based version of the toolkit). ArsDigita provides\nupgrade .sql files for each version that consist in part of ADD/DROP\ncolumn statements so users can upgrade in place, a very useful thing.\n\nIt doesn't need to be fast in this context, just work. You tell the world\nyour site will be down for an evening on such-and-such date, stop \nlistening on port 80, and upgrade.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 28 Feb 2000 06:29:29 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "Don Baccus <[email protected]> writes:\n>> \"update t set id=id+1\" is also a 2x space,\n\n> And PG doesn't do it correctly anyway...\n\n? News to me. What's your definition of \"correctly\"?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 28 Feb 2000 10:20:29 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: ALTER TABLE DROP COLUMN "
},
{
"msg_contents": "At 10:20 AM 2/28/00 -0500, Tom Lane wrote:\n>Don Baccus <[email protected]> writes:\n>>> \"update t set id=id+1\" is also a 2x space,\n>\n>> And PG doesn't do it correctly anyway...\n>\n>? News to me. What's your definition of \"correctly\"?\n\ncreate table foo(i integer unique);\n\n(insert values)\n\ndonb=# select * from foo;\n i \n---\n 1\n 2\n 3\n(3 rows)\n\ndonb=# update foo set i=i+1;\nERROR: Cannot insert a duplicate key into unique index foo_pkey\n\nShouldn't fail ... the constraint should be applied after the\nupdate, but the row-by-row update of the index causes it to fail.\nAt least I presume that this is an artifact of PG implementing the\nunique constraint by creating a unique index.\n\nStephan Szabo pointed this out to me awhile ago when we were\ndiscussing \"alter table add constraint\" (he was looking into\nthis when he worked on \"alter table add foreign key\").\n\nOf course, sometimes PG gets it right. I deleted stuff in foo,\nthen did:\n\ndonb=# insert into foo values(3);\nINSERT 26907 1\ndonb=# insert into foo values(2);\nINSERT 26908 1\ndonb=# insert into foo values(1);\nINSERT 26909 1\ndonb=# update foo set i=i+1;\nUPDATE 3\ndonb=# \n\n\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 28 Feb 2000 07:38:48 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: ALTER TABLE DROP COLUMN "
},
{
"msg_contents": "Don Baccus wrote:\n> \n> At 10:20 AM 2/28/00 -0500, Tom Lane wrote:\n> >Don Baccus <[email protected]> writes:\n> >>> \"update t set id=id+1\" is also a 2x space,\n> >\n> >> And PG doesn't do it correctly anyway...\n> >\n> >? News to me. What's your definition of \"correctly\"?\n> \n> create table foo(i integer unique);\n> \n> (insert values)\n> \n> donb=# select * from foo;\n> i\n> ---\n> 1\n> 2\n> 3\n> (3 rows)\n> \n> donb=# update foo set i=i+1;\n> ERROR: Cannot insert a duplicate key into unique index foo_pkey\n\nI knew it used to misbehave that way, but at some point I got the \nimpression that it was fixed ;(\n\nIIRC, the same behaviour plagued the old foreign key implementation \nin contrib, which was why it was refused for a long time to be \nintegrated.\n\nI hope that at least the foreig keys don't do it anymore.\n\n---------\nHannu\n",
"msg_date": "Tue, 29 Feb 2000 02:04:56 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "At 02:04 AM 2/29/00 +0200, Hannu Krosing wrote:\n\n>IIRC, the same behaviour plagued the old foreign key implementation \n>in contrib, which was why it was refused for a long time to be \n>integrated.\n>\n>I hope that at least the foreig keys don't do it anymore.\n\nIt shouldn't because they're implemented via triggers after all the\nwork is done. In other words, the implementation might have bugs\nbut the bugs should be different :)\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 28 Feb 2000 16:36:07 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "Hannu Krosing wrote:\n\n> Don Baccus wrote:\n>\n> > donb=# update foo set i=i+1;\n> > ERROR: Cannot insert a duplicate key into unique index foo_pkey\n>\n> IIRC, the same behaviour plagued the old foreign key implementation\n> in contrib, which was why it was refused for a long time to be\n> integrated.\n>\n> I hope that at least the foreig keys don't do it anymore.\n\n ALL the FK triggers are delayed until after the entire\n statement (what's wrong for ON DELETE RESTRICT - but that's\n another story), or until the entire transaction (in deferred\n mode).\n\n But the UNIQUE constraint is still built upon unique nbtree\n indices, thus failing on primary key where such a unique\n index is automatically created for.\n\n I'm far too less familiar with our implementation of nbtree\n to tell whether it would be possible at all to delay unique\n checking until statement end or XACT commit. At least I\n assume it would require some similar technique of deferred\n queue.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 29 Feb 2000 01:43:17 +0100 (CET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "At 01:43 AM 2/29/00 +0100, Jan Wieck wrote:\n\n> ALL the FK triggers are delayed until after the entire\n> statement (what's wrong for ON DELETE RESTRICT - but that's\n> another story), or until the entire transaction (in deferred\n> mode).\n\nKind of wrong, just so folks understand the semantics are right in\nthe sense that the right answer is given (pass or fail) - you need\na stopwatch to know that we're not doing what the SQL3 suggests\nshould be done (catch the foreign key errors before changes are made\nand without incurring the cost of a rollback).\n\nThe current way we're doing it - identically to \"NO ACTION\" is\nfine for compatability purposes, though later we'd like to implement\na smart ON DELETE RESTRICT because the efficiency considerations\nthat led to its inclusion in SQL3 are reasonable ones.\n\n> I'm far too less familiar with our implementation of nbtree\n> to tell whether it would be possible at all to delay unique\n> checking until statement end or XACT commit. At least I\n> assume it would require some similar technique of deferred\n> queue.\n\nPresumably you'd queue up per-row triggers just like for FK constraints\nand insert into the unique index at that point.\n\nI have no idea how many other things this would break, if any.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 28 Feb 2000 17:20:50 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "\n> ALL the FK triggers are delayed until after the entire\n> statement (what's wrong for ON DELETE RESTRICT - but that's\n> another story), or until the entire transaction (in deferred\n> mode).\n>\n> But the UNIQUE constraint is still built upon unique nbtree\n> indices, thus failing on primary key where such a unique\n> index is automatically created for.\n>\n> I'm far too less familiar with our implementation of nbtree\n> to tell whether it would be possible at all to delay unique\n> checking until statement end or XACT commit. At least I\n> assume it would require some similar technique of deferred\n> queue.\n\nWe might want to look at what we're doing for all of the constraints,\nbecause at some point we'll probably want to let you defer the other\nconstraints as well (I'm pretty sure this technically legal in SQL92).\nIf we can think of a good way to handle all of the constraints together\nthat might be worth doing to prevent us from coding the same thing\nmultiple times.\n\n",
"msg_date": "Mon, 28 Feb 2000 20:57:01 -0500",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: ALTER TABLE DROP COLUMN "
},
{
"msg_contents": "Don Baccus wrote:\n\n> At 01:43 AM 2/29/00 +0100, Jan Wieck wrote:\n>\n> > ALL the FK triggers are delayed until after the entire\n> > statement (what's wrong for ON DELETE RESTRICT - but that's\n> > another story), or until the entire transaction (in deferred\n> > mode).\n>\n> Kind of wrong, just so folks understand the semantics are right in\n> the sense that the right answer is given (pass or fail) - you need\n> a stopwatch to know ...\n\n Explanative version of \"that other story\". But not exactly\n correct IMHO. If following strictly SQL3 suggestions, an ON\n DELETE RESTRICT action cannot be deferrable at all. Even if\n the constraint itself is deferrable and is set explicitly to\n DEFERRED, the check should be done immediately at ROW level.\n That's the difference between \"NO ACTION\" and \"RESTRICT\".\n\n Actually, a RESTRICT violation can potentially bypass\n thousands of subsequent queries until COMMIT. Meaningless\n from the transactional PoV, but from the application\n programmers one (looking at the return code of a particular\n statement) it isn't!\n\n> > I'm far too less familiar with our implementation of nbtree\n> > to tell whether it would be possible at all to delay unique\n> > checking until statement end or XACT commit. At least I\n> > assume it would require some similar technique of deferred\n> > queue.\n>\n> Presumably you'd queue up per-row triggers just like for FK constraints\n> and insert into the unique index at that point.\n>\n> I have no idea how many other things this would break, if any.\n\n At least if deferring the index insert until XACT commit, any\n subsequent index scan wouldn't see inserted tuples, even if\n they MUST be visible.\n\n Maybe I'm less far away from knowledge than thought. Inside\n of a nbtree-index, any number of duplicates is accepted.\n It's the heap tuples visibility they point to, that triggers\n the dup message.\n\n So it's definitely some kind of \"accept duplicates for now\n but check for final dup's on this key later\".\n\n But that requires another index scan later. We can remember\n the relations and indices Oid (to get back the relation and\n index in question) plus the CTID of the added\n (inserted/updated tuple) to get back the key values\n (remembering the key itself could blow up memory). Then do an\n index scan under current (statement end/XACT commit)\n visibility to check if more than one HeapTupleSatisfies().\n\n It'll be expensive, compared to current UNIQUE implementation\n doing it on the fly during btree insert (doesn't it?). But\n the only way I see.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 29 Feb 2000 03:24:43 +0100 (CET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "At 03:24 AM 2/29/00 +0100, Jan Wieck wrote:\n\n> Explanative version of \"that other story\". But not exactly\n> correct IMHO. If following strictly SQL3 suggestions, an ON\n> DELETE RESTRICT action cannot be deferrable at all. Even if\n> the constraint itself is deferrable and is set explicitly to\n> DEFERRED, the check should be done immediately at ROW level.\n> That's the difference between \"NO ACTION\" and \"RESTRICT\".\n>\n> Actually, a RESTRICT violation can potentially bypass\n> thousands of subsequent queries until COMMIT. Meaningless\n> from the transactional PoV, but from the application\n> programmers one (looking at the return code of a particular\n> statement) it isn't!\n\nNo, strictly speaking it isn't correct. But without a stopwatch,\nit will be hard to tell.\n\nActually, though, since exceptions are only supposed to reject\nthe given SQL-statement and not trigger a PG-style auto-rollback of\nthe transaction, a subsequent \"commit\" should commit that subsequent\nwork (unless they in turn trigger constraint errors due to dependencies\non the first failed constraint).\n\nSo you don't really get to skip all those subsequent statements unless\nyou're looking for the exception, catch it, and do an explicit rollback.\n\nNone of that is in place in PG anyway at the moment...\n\nI'm assuming that the exception raised for an FK violation is the\nsame as an exception raised for numeric overflow, etc - I think \nyou missed that earlier discussion.\n\nThe fact that PG's auto-rollback is wrong was news to me, though\nobvious in hindsight, and I've not gone back to study RI semantics\nin light of this new information.\n\nSo I may be wrong, here. \n\nWe could always take out \"RESTRICT\" and claim SQL92 rather than SQL3\nreferential integrity :) :)\n\nGiven that Oracle only implements \"MATCH <unspecified>\" (as of 8.1.5,\nanyway), we're not doing too bad!\n\n>\n>> > I'm far too less familiar with our implementation of nbtree\n>> > to tell whether it would be possible at all to delay unique\n>> > checking until statement end or XACT commit. At least I\n>> > assume it would require some similar technique of deferred\n>> > queue.\n>>\n>> Presumably you'd queue up per-row triggers just like for FK constraints\n>> and insert into the unique index at that point.\n>>\n>> I have no idea how many other things this would break, if any.\n>\n> At least if deferring the index insert until XACT commit, any\n> subsequent index scan wouldn't see inserted tuples, even if\n> they MUST be visible.\n\nUgh, of course :(\n\n> Maybe I'm less far away from knowledge than thought. Inside\n> of a nbtree-index, any number of duplicates is accepted.\n> It's the heap tuples visibility they point to, that triggers\n> the dup message.\n>\n> So it's definitely some kind of \"accept duplicates for now\n> but check for final dup's on this key later\".\n>\n> But that requires another index scan later. We can remember\n> the relations and indices Oid (to get back the relation and\n> index in question) plus the CTID of the added\n> (inserted/updated tuple) to get back the key values\n> (remembering the key itself could blow up memory). Then do an\n> index scan under current (statement end/XACT commit)\n> visibility to check if more than one HeapTupleSatisfies().\n>\n> It'll be expensive, compared to current UNIQUE implementation\n> doing it on the fly during btree insert (doesn't it?). 
But\n> the only way I see.\n\nThe more I learn about SQL92 the more I understand why RDBMS systems\nhave the reputation for being piggy. But, the standard semantics\nof UPDATE on a column with a UNIQUE constraint are certainly consistent\nwith the paradigm that queries operate on sets of tuples, not sequences\nof tuples.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 28 Feb 2000 18:56:14 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: ALTER TABLE DROP COLUMN"
},
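Don's closing point can be made concrete with a short SQL sketch (table and column names are hypothetical):

    CREATE TABLE t (k integer UNIQUE);
    INSERT INTO t VALUES (1);
    INSERT INTO t VALUES (2);
    INSERT INTO t VALUES (3);

    -- The final key set {2,3,4} is unique, so under set semantics this
    -- must succeed.  A per-row check may reject it the moment 1 becomes 2,
    -- depending on the order in which the tuples happen to be visited.
    UPDATE t SET k = k + 1;
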
{
"msg_contents": "> -----Original Message-----\n> From: The Hermit Hacker [mailto:[email protected]]\n>\n\n[snip]\n\n> Hiroshi ...\n>\n> \tBruce's comment was just an observation ... if it can be done\n> cleanly, I would love to see a version that didn't involve 2x the disk\n> space ... I don't believe that a trial would be useless, I think that\n> Bruce's only concern/warning is that the amount of code modifications that\n> would have to be made in order to accomplish this *might* be larger then\n> the benefit resulting in doing it this way.\n>\n> \tIf you feel that this can be done more efficiently, *please*\n> proceed with the trial ...\n>\n\nOK,I may be able to provide a trial patch in a week or so if I'm lucky.\nHow to commit the patch ?\nWith #ifdef ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n\n",
"msg_date": "Tue, 29 Feb 2000 14:13:50 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Re: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n> So it's definitely some kind of \"accept duplicates for now\n> but check for final dup's on this key later\".\n\n> But that requires another index scan later. We can remember\n> the relations and indices Oid (to get back the relation and\n> index in question) plus the CTID of the added\n> (inserted/updated tuple) to get back the key values\n> (remembering the key itself could blow up memory). Then do an\n> index scan under current (statement end/XACT commit)\n> visibility to check if more than one HeapTupleSatisfies().\n\n> It'll be expensive, compared to current UNIQUE implementation\n> doing it on the fly during btree insert (doesn't it?). But\n> the only way I see.\n\nHow about:\n\n1. During INSERT into unique index, notice whether any other index\nentries have same key. If so, add that key value to a queue of\npossibly-duplicate keys to check later.\n\n2. At commit, or whenever consistency should be checked, scan the\nqueue. For each entry, use the index to look up all the matching\ntuples, and check that only one will be valid if the transaction\ncommits.\n\nThis avoids a full index scan in the normal case, although it could\nbe pretty slow in the update-every-tuple scenario...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 29 Feb 2000 00:40:19 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: ALTER TABLE DROP COLUMN "
},
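A sketch of the case step 1 is careful not to reject outright (the schema is hypothetical): a matching index entry at insert time does not necessarily mean a duplicate at commit time, which is why the key is merely queued for a later visibility check.

    CREATE TABLE u (k integer UNIQUE);
    INSERT INTO u VALUES (1);

    BEGIN;
    DELETE FROM u WHERE k = 1;
    INSERT INTO u VALUES (1);  -- the btree still holds the old entry for k = 1,
                               -- so key 1 goes into the possibly-duplicate queue
    COMMIT;                    -- the commit-time recheck finds only one tuple
                               -- for k = 1 that would be valid, so this succeeds
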
{
"msg_contents": "On Tue, 29 Feb 2000, Hiroshi Inoue wrote:\n\n> > -----Original Message-----\n> > From: The Hermit Hacker [mailto:[email protected]]\n> >\n> \n> [snip]\n> \n> > Hiroshi ...\n> >\n> > \tBruce's comment was just an observation ... if it can be done\n> > cleanly, I would love to see a version that didn't involve 2x the disk\n> > space ... I don't believe that a trial would be useless, I think that\n> > Bruce's only concern/warning is that the amount of code modifications that\n> > would have to be made in order to accomplish this *might* be larger then\n> > the benefit resulting in doing it this way.\n> >\n> > \tIf you feel that this can be done more efficiently, *please*\n> > proceed with the trial ...\n> >\n> \n> OK,I may be able to provide a trial patch in a week or so if I'm lucky.\n> How to commit the patch ?\n> With #ifdef ?\n\nNope, but it will have to wait until *after* 7.0 is released, so don't\npush yourself on it ... \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 29 Feb 2000 02:32:15 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Re: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "> -----Original Message-----\n> From: The Hermit Hacker [mailto:[email protected]]\n>\n> On Tue, 29 Feb 2000, Hiroshi Inoue wrote:\n>\n> > > -----Original Message-----\n> > > From: The Hermit Hacker [mailto:[email protected]]\n> > >\n> >\n> > [snip]\n> >\n> > > Hiroshi ...\n> > >\n> > > \tBruce's comment was just an observation ... if it can be done\n> > > cleanly, I would love to see a version that didn't involve 2x the disk\n> > > space ... I don't believe that a trial would be useless, I think that\n> > > Bruce's only concern/warning is that the amount of code\n> modifications that\n> > > would have to be made in order to accomplish this *might* be\n> larger then\n> > > the benefit resulting in doing it this way.\n> > >\n> > > \tIf you feel that this can be done more efficiently, *please*\n> > > proceed with the trial ...\n> > >\n> >\n> > OK,I may be able to provide a trial patch in a week or so if I'm lucky.\n> > How to commit the patch ?\n> > With #ifdef ?\n>\n> Nope, but it will have to wait until *after* 7.0 is released, so don't\n> push yourself on it ...\n>\n\nHmm,until 7.0 release ?\nI don't want to keep my private branch so long.\nIs #ifdef bad to separate it from 7.0 release stuff ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Tue, 29 Feb 2000 16:06:29 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Re: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "Jan Wieck wrote:\n> \n> Explanative version of \"that other story\". But not exactly\n> correct IMHO. If following strictly SQL3 suggestions, an ON\n> DELETE RESTRICT action cannot be deferrable at all. Even if\n> the constraint itself is deferrable and is set explicitly to\n> DEFERRED, the check should be done immediately at ROW level.\n> That's the difference between \"NO ACTION\" and \"RESTRICT\".\n> \n> Actually, a RESTRICT violation can potentially bypass\n> thousands of subsequent queries until COMMIT. Meaningless\n> from the transactional PoV, but from the application\n> programmers one (looking at the return code of a particular\n> statement) it isn't!\n...\n> It'll be expensive, compared to current UNIQUE implementation\n> doing it on the fly during btree insert (doesn't it?). But\n> the only way I see.\n\nSo currently we have ON UPDATE RESTRICT foreign keys :)\n\n-------------\nHannu\n",
"msg_date": "Tue, 29 Feb 2000 12:17:05 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "Don Baccus wrote:\n\n> At 03:24 AM 2/29/00 +0100, Jan Wieck wrote:\n>\n> > Actually, a RESTRICT violation can potentially bypass\n> > thousands of subsequent queries until COMMIT. Meaningless\n> > from the transactional PoV, but from the application\n> > programmers one (looking at the return code of a particular\n> > statement) it isn't!\n>\n> No, strictly speaking it isn't correct. But without a stopwatch,\n> it will be hard to tell.\n\n It is easy to tell:\n\n CREATE TABLE t1 (a integer PRIMARY KEY);\n CREATE TABLE t2 (a integer REFERENCES t1\n ON DELETE RESTRICT\n DEFERRABLE);\n\n INSERT INTO t1 VALUES (1);\n INSERT INTO t1 VALUES (2);\n INSERT INTO t1 VALUES (3);\n\n INSERT INTO t2 VALUES (1);\n INSERT INTO t2 VALUES (2);\n\n BEGIN TRANSACTION;\n SET CONSTRAINTS ALL DEFERRED;\n DELETE FROM t1 WHERE a = 2;\n DELETE FROM t1 WHERE a = 3;\n COMMIT TRANSACTION;\n\n In this case, the first DELETE from t1 must already bomb the\n exception, setting the transaction block into error state and\n reject all further queries until COMMIT/ROLLBACK. The SET\n DEFERRED should only affect a check for key existance on\n INSERT to t2, not the RESTRICT action on DELETE to t1.\n\n The end result will be the same, both DELETEs get rolled\n back. But the application will see it at COMMIT, not at the\n first DELETE. So the system behaves exactly like for NO\n ACTION.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 29 Feb 2000 11:22:27 +0100 (CET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "Hannu Krosing wrote:\n\n> Jan Wieck wrote:\n> >\n> > Actually, a RESTRICT violation can potentially bypass\n> > thousands of subsequent queries until COMMIT. Meaningless\n> > from the transactional PoV, but from the application\n> > programmers one (looking at the return code of a particular\n> > statement) it isn't!\n> ...\n> > It'll be expensive, compared to current UNIQUE implementation\n> > doing it on the fly during btree insert (doesn't it?). But\n> > the only way I see.\n>\n> So currently we have ON UPDATE RESTRICT foreign keys :)\n\n For foreign keys we actually have ON UPDATE/DELETE NO ACTION\n (plus SET NULL and SET DEFAULT). Only the RESTRICT isn't\n fully SQL3. I just had an idea that might easily turn it to\n do the right thing.\n\n For the UNIQUE constraint, it's totally wrong (and not\n related to FOREIGN KEY stuff at all). The UNIQUE constraint\n isn't deferrable at all, and it looks for violations on a per\n row level, not on a per set level as it should.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 29 Feb 2000 11:31:13 +0100 (CET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "Hannu Krosing wrote:\n> \n> Jan Wieck wrote:\n> >\n> > It'll be expensive, compared to current UNIQUE implementation\n> > doing it on the fly during btree insert (doesn't it?). But\n> > the only way I see.\n> \n> So currently we have foreign keys :)\n\nI meant of course ON UPDATE RESTRICT PRIMARY KEYS ..\n\n----------\nHannu\n",
"msg_date": "Tue, 29 Feb 2000 12:41:20 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "I wrote:\n\n> fully SQL3. I just had an idea that might easily turn it to\n> do the right thing.\n\n ON <event> RESTRICT triggers are now executed after the\n statement allways, ignoring any explicitly set deferred mode.\n\n This is pretty close to Date's SQL3 interpretation, or IMHO\n better. Date says that they are checked BEFORE each ROW, but\n that would ignore the SET character of a statement. Now we\n have correct semantics for all 4 possible referential\n actions.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 29 Feb 2000 13:22:19 +0100 (CET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: ALTER TABLE DROP COLUMN"
},
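Under the semantics Jan describes, RESTRICT and NO ACTION differ only in when the error surfaces. A rough sketch of the divergence (the tables are hypothetical, and this assumes the behavior described above rather than any released code):

    CREATE TABLE pk (a integer PRIMARY KEY);
    CREATE TABLE fk (a integer REFERENCES pk
                     ON DELETE RESTRICT
                     DEFERRABLE);
    INSERT INTO pk VALUES (1);
    INSERT INTO fk VALUES (1);

    BEGIN;
    SET CONSTRAINTS ALL DEFERRED;
    DELETE FROM pk WHERE a = 1;  -- RESTRICT: error raised here, right after
                                 -- the statement, despite the deferred setting
    COMMIT;                      -- with ON DELETE NO ACTION instead, the same
                                 -- error would surface only at this point
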
{
"msg_contents": "On Tue, 29 Feb 2000, Hiroshi Inoue wrote:\n\n> > Nope, but it will have to wait until *after* 7.0 is released, so don't\n> > push yourself on it ...\n> >\n> \n> Hmm,until 7.0 release ?\n> I don't want to keep my private branch so long.\n> Is #ifdef bad to separate it from 7.0 release stuff ?\n\nGo for it and submit a patch ... if its totally innoculous, then we can\ntry and plug her in, but I won't guarantee it. Since there should be no\nmajor changes between now and 7.0 release, we can store any patch until\nthe release also ...\n\n\n",
"msg_date": "Tue, 29 Feb 2000 09:17:08 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Re: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "At 11:22 AM 2/29/00 +0100, Jan Wieck wrote:\n>Don Baccus wrote:\n>\n>> At 03:24 AM 2/29/00 +0100, Jan Wieck wrote:\n>>\n>> > Actually, a RESTRICT violation can potentially bypass\n>> > thousands of subsequent queries until COMMIT. Meaningless\n>> > from the transactional PoV, but from the application\n>> > programmers one (looking at the return code of a particular\n>> > statement) it isn't!\n>>\n>> No, strictly speaking it isn't correct. But without a stopwatch,\n>> it will be hard to tell.\n>\n> It is easy to tell:\n>\n> CREATE TABLE t1 (a integer PRIMARY KEY);\n> CREATE TABLE t2 (a integer REFERENCES t1\n> ON DELETE RESTRICT\n> DEFERRABLE);\n>\n> INSERT INTO t1 VALUES (1);\n> INSERT INTO t1 VALUES (2);\n> INSERT INTO t1 VALUES (3);\n>\n> INSERT INTO t2 VALUES (1);\n> INSERT INTO t2 VALUES (2);\n>\n> BEGIN TRANSACTION;\n> SET CONSTRAINTS ALL DEFERRED;\n> DELETE FROM t1 WHERE a = 2;\n> DELETE FROM t1 WHERE a = 3;\n> COMMIT TRANSACTION;\n>\n> In this case, the first DELETE from t1 must already bomb the\n> exception, setting the transaction block into error state and\n> reject all further queries until COMMIT/ROLLBACK.\n\nAhhh...but the point you're missing, which was brought up a few\ndays ago, is that this PG-ism of rejecting all further queries\nuntil COMMIT/ROLLBACK is in itself NONSTANDARD.\n\nAs far as the effect of DEFERRED on RESTRICT with STANDARD, not\nPG, transaction semantics I've not investigated it. Neither one\nof us has a particularly great record at correctly interpreting\nthe SQL3 standard regarding the subtleties of foreign key semantics,\nsince we both had differing interpretations of RESTRICT/NO ACTION\nand (harumph) we were BOTH wrong :) Date implies that there's\nno difference other than RESTRICT's returning an error more quickly,\nbut he doesn't talk about the DEFERRED case.\n\nAnyway, it's moot at the moment since neither RESTRICT nor standard\nSQL92 transaction semantics are implemented.\n\n> The end result will be the same,\n\nWhich is what I mean when I say you pretty much need a stopwatch\nto tell the difference - OK, in PG you can look at the non-standard\nerror messages due to the non-standard rejection of subsequent\nqueries, but I was thinking in terms of standard transaction\nsemantics.\n\n> both DELETEs get rolled\n> back. But the application will see it at COMMIT, not at the\n> first DELETE. So the system behaves exactly like for NO\n> ACTION.\n\nYes.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Tue, 29 Feb 2000 06:51:10 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "At 01:22 PM 2/29/00 +0100, Jan Wieck wrote:\n>I wrote:\n>\n>> fully SQL3. I just had an idea that might easily turn it to\n>> do the right thing.\n>\n> ON <event> RESTRICT triggers are now executed after the\n> statement allways, ignoring any explicitly set deferred mode.\n> This is pretty close to Date's SQL3 interpretation, or IMHO\n> better. Date says that they are checked BEFORE each ROW, but\n> that would ignore the SET character of a statement.\n\nPerhaps that's actually the point of RESTRICT? Sacrifice the \nset character of a statement in this special case in order to\nreturn an error quickly?\n\nSince RESTRICT wasn't in SQL92, and since it's very close to\nNO ACTION, it reeks of being an efficiency hack. \n\nI dread digging into that part of the standard again...this is\na case where the various proposals and justifications that were\nbefore the committee at the time would be useful since the \nactual words that made it to the standard are opaque.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Tue, 29 Feb 2000 06:56:04 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "Seems we have 4 DROP COLUMN ideas:\n\n\tMethod Advantage\n\t-----------------------------------------------------------------\n1\tinvisible column marked by negative attnum\t\tfast\n2\tinvisible column marked by is_dropped column\t\tfast\n3\tmake copy of table without column\t\t\tcol removed\n4\tmake new tuples in existing table without column\tcol removed\n\nFolks, we had better choose one and get started. \n\nNumber 1 Hiroshi has ifdef'ed out in the code. Items 1 and 2 have\nproblems with backend code and 3rd party code not seeing the dropped\ncolumns, or having gaps in the attno numbering. Number 3 has problems\nwith making it an atomic operation, and number 4 is described below. \n\n---------------------------------------------------------------------------\n\n> Bruce Momjian <[email protected]> writes:\n> > You can exclusively lock the table, then do a heap_getnext() scan over\n> > the entire table, remove the dropped column, do a heap_insert(), then a\n> > heap_delete() on the current tuple, making sure to skip over the tuples\n> > inserted by the current transaction. When completed, remove the column\n> > from pg_attribute, mark the transaction as committed (if desired), and\n> > run vacuum over the table to remove the deleted rows.\n> \n> Hmm, that would work --- the new tuples commit at the same instant that\n> the schema updates commit, so it should be correct. You have the 2x\n> disk usage problem, but there's no way around that without losing\n> rollback ability.\n> \n> A potentially tricky bit will be persuading the tuple-reading and tuple-\n> writing subroutines to pay attention to different versions of the tuple\n> structure for the same table. I haven't looked to see if this will be\n> difficult or not. If you can pass the TupleDesc explicitly then it\n> shouldn't be a problem.\n> \n> I'd suggest that the cleanup vacuum *not* be an automatic part of\n> the operation; just recommend that people do it ASAP after dropping\n> a column. Consider needing to drop several columns...\n> \n> \t\t\tregards, tom lane\n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 9 Jun 2000 08:49:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE DROP COLUMN"
},
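Tom's recommendation at the end of the quote amounts to the following user-visible sequence under idea 4 (the syntax shown is the obvious one, not a committed design):

    -- Each DROP rewrites every surviving tuple, so the table briefly
    -- holds both old and new versions (the 2x disk usage problem):
    ALTER TABLE t DROP COLUMN c1;
    ALTER TABLE t DROP COLUMN c2;

    -- One vacuum at the end reclaims the dead versions from both drops:
    VACUUM t;
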
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Seems we have 4 DROP COLUMN ideas:\n> \n> Method Advantage\n> -----------------------------------------------------------------\n> 1 invisible column marked by negative attnum fast\n> 2 invisible column marked by is_dropped column fast\n> 3 make copy of table without column col removed\n> 4 make new tuples in existing table without column col removed\n\nIIRC there was a fifth idea, a variation of 2 that would work better\nwith \ninheritance -\n\n5 all columns have is_real_column attribute that is true for all\ncoluns \npresent in that relation, so situations like\n\ncreate table tab_a(a_i int);\ncreate table tab_b(b_i int) inherits(tab_a);\nalter table tab_a add column c_i int;\n\ncan be made to work.\n\nIt would also require clients to ignore all missing columns that backend\ncan \npass to them as nulls (which is usually quite cheap in bandwith usage)\nin \ncase of \"SELECT **\" queries.\n\nWe could even rename attno to attid to make folks aware that it is not\nbe \nassumed to be continuous.\n\n> Folks, we had better choose one and get started.\n> \n> Number 1 Hiroshi has ifdef'ed out in the code. Items 1 and 2 have\n> problems with backend code and 3rd party code not seeing the dropped\n> columns, or having gaps in the attno numbering.\n\nIf we want to make ADD COLUMN to work with inheritance wihout having to \nrewrite every single tuple in both parent and inherited tables, we will \nhave to accept the fact that there are caps in in attno numbering.\n\n> Number 3 has problems\n> with making it an atomic operation, and number 4 is described below.\n\nNr 4 has still problems with attno numbering _changing_ during drop\nwhich \ncould either be better or worse for client software than having gaps -\nin both cases client must be prepared to deal with runtime changes in \nattribute definition.\n\n--------------\nHannu\n",
"msg_date": "Sat, 10 Jun 2000 06:59:33 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]]On Behalf Of Bruce Momjian\n> \n> Seems we have 4 DROP COLUMN ideas:\n> \n> \tMethod Advantage\n> \t-----------------------------------------------------------------\n> 1\tinvisible column marked by negative attnum\t\tfast\n> 2\tinvisible column marked by is_dropped column\t\tfast\n> 3\tmake copy of table without column\t\t\tcol removed\n> 4\tmake new tuples in existing table without column\tcol removed\n> \n> Folks, we had better choose one and get started. \n> \n> Number 1 Hiroshi has ifdef'ed out in the code. Items 1 and 2 have\n> problems with backend code and 3rd party code not seeing the dropped\n> columns,\n\nHmm,doesn't *not seeing* mean the column is dropped ?\n\n> or having gaps in the attno numbering. Number 3 has problems\n> with making it an atomic operation, and number 4 is described below. \n>\n\nDon't forget another important point.\n\nCurrently even DROP TABLE doesn't remove related objects completely.\nAnd I don't think I could remove objects related to the dropping column\ncompletely using 1)2) in ALTER TABLE DROP COLUMN implementation.\n\nUsing 3)4) we should not only remove objects as 1)2) but also\nchange attnum-s in all objects related to the relation. Otherwise\nPostgreSQL would do the wrong thing silently.\n\nRegards.\n\nHiroshi Inoue\[email protected] \n",
"msg_date": "Sat, 10 Jun 2000 13:43:26 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "At 01:43 PM 6/10/00 +0900, Hiroshi Inoue wrote:\n>> -----Original Message-----\n>> From: [email protected] \n>> [mailto:[email protected]]On Behalf Of Bruce Momjian\n>> \n>> Seems we have 4 DROP COLUMN ideas:\n>> \n>> \tMethod Advantage\n>> \t-----------------------------------------------------------------\n>> 1\tinvisible column marked by negative attnum\t\tfast\n>> 2\tinvisible column marked by is_dropped column\t\tfast\n>> 3\tmake copy of table without column\t\t\tcol removed\n>> 4\tmake new tuples in existing table without column\tcol removed\n>> \n>> Folks, we had better choose one and get started. \n\nOracle gives you the choice between the \"cheating\" fast method(s) and\nthe \"real\" slow (really slow?) real method.\n\nSo there's at least real world experience by virtue of example by\nthe world's most successful database supplier that user control\nover \"hide the column\" and \"really delete the column\" is valuable.\n\nIt really makes a lot of sense to give such a choice. If one\ndoes so by \"hiding\", at a later date one would think the choice\nof \"really deleting\" would be a possibility. I don't know if\nOracle does this...\n\nIf not, they might not care. In today's world, there are bazillions\nof dollars for Oracle to scoop up from users who could just as easily\nbe PG users - all those \"we'll fail if don't IPO 'cause we'll never\nhave any customers\" database-backed websites :)\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Fri, 09 Jun 2000 21:57:58 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: ALTER TABLE DROP COLUMN"
},
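For reference, the Oracle 8i interface Don alludes to looks roughly like this (quoted from memory, so treat the exact spelling as an assumption rather than gospel):

    -- Fast: hide the column now, reclaim the space later
    ALTER TABLE t SET UNUSED (c);
    ALTER TABLE t DROP UNUSED COLUMNS;

    -- Slow: physically remove the column immediately
    ALTER TABLE t DROP COLUMN c;
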
{
"msg_contents": "Don Baccus <[email protected]> writes:\n> Oracle gives you the choice between the \"cheating\" fast method(s) and\n> the \"real\" slow (really slow?) real method.\n\n> So there's at least real world experience by virtue of example by\n> the world's most successful database supplier that user control\n> over \"hide the column\" and \"really delete the column\" is valuable.\n\nSure, but you don't need any help from the database to do \"really delete\nthe column\". SELECT INTO... is enough, and it's not even any slower\nthan the implementations under discussion.\n\nSo I'm satisfied if we offer the \"hide the column\" approach.\n\nHas anyone thought about what happens to table constraints that use the\ndoomed column? Triggers, RI rules, yadda yadda?\n\nHas anyone thought about undoing a DELETE COLUMN? The data is still\nthere, at least in tuples that have not been updated, so it's not\ntotally unreasonable.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 10 Jun 2000 01:14:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE DROP COLUMN "
},
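The manual equivalent Tom has in mind presumably looks like this (names are hypothetical); it costs one full copy of the data, just like the implementations under discussion, and as noted for idea 3 it is not atomic:

    SELECT a, b INTO TABLE t_new FROM t;   -- every column except the doomed one
    DROP TABLE t;
    ALTER TABLE t_new RENAME TO t;
    -- indexes, constraints, triggers, and rules on t must be recreated by hand
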
{
"msg_contents": "At 01:14 AM 6/10/00 -0400, Tom Lane wrote:\n>Don Baccus <[email protected]> writes:\n>> Oracle gives you the choice between the \"cheating\" fast method(s) and\n>> the \"real\" slow (really slow?) real method.\n>\n>> So there's at least real world experience by virtue of example by\n>> the world's most successful database supplier that user control\n>> over \"hide the column\" and \"really delete the column\" is valuable.\n>\n>Sure, but you don't need any help from the database to do \"really delete\n>the column\". SELECT INTO... is enough, and it's not even any slower\n>than the implementations under discussion.\n>\n>So I'm satisfied if we offer the \"hide the column\" approach.\n\n<shrug> I wouldn't put a \"real\" drop column at the top of my list\nof priorities, but there is something to be said for user convenience.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Sat, 10 Jun 2000 05:43:06 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE DROP COLUMN "
},
{
"msg_contents": "> > -----Original Message-----\n> > From: [email protected] \n> > [mailto:[email protected]]On Behalf Of Bruce Momjian\n> > \n> > Seems we have 4 DROP COLUMN ideas:\n> > \n> > \tMethod Advantage\n> > \t-----------------------------------------------------------------\n> > 1\tinvisible column marked by negative attnum\t\tfast\n> > 2\tinvisible column marked by is_dropped column\t\tfast\n> > 3\tmake copy of table without column\t\t\tcol removed\n> > 4\tmake new tuples in existing table without column\tcol removed\n> > \n> > Folks, we had better choose one and get started. \n> > \n> > Number 1 Hiroshi has ifdef'ed out in the code. Items 1 and 2 have\n> > problems with backend code and 3rd party code not seeing the dropped\n> > columns,\n> \n> Hmm,doesn't *not seeing* mean the column is dropped ?\n\nI meant problems of backend code and 3rd party code _seeing_ the dropped\ncolumn in pg_attribute.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 10 Jun 2000 12:15:11 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": ">> Seems we have 4 DROP COLUMN ideas:\n>> Method Advantage\n>> -----------------------------------------------------------------\n>> 1\tinvisible column marked by negative attnum\t\tfast\n>> 2\tinvisible column marked by is_dropped column\t\tfast\n>> 3\tmake copy of table without column\t\t\tcol removed\n>> 4\tmake new tuples in existing table without column\tcol removed\n\nBruce and I talked about this by phone yesterday, and we realized that\nnone of these are very satisfactory. #1 and #2 both have the flaw that\napplications that examine pg_attribute will probably break: they will\nsee a sequence of attnum values with gaps in it. And what should the\nrel's relnatts field be set to? #3 and #4 are better on that point,\nbut they leave us with the problem of renumbering references to columns\nafter the dropped one in constraints, rules, PL functions, etc.\n\nFurthermore, there is a closely related problem that none of these\napproaches give us much help on: recursive ALTER TABLE ADD COLUMN.\nRight now, ADD puts the new column at the end of each table it's added\nto, which often means that it gets a different column number in child\ntables than in parent tables. That leads to havoc for pg_dump.\n\nI think the only clean solution is to create a clear distinction between\nphysical and logical column numbers. Each pg_attribute tuple would need\ntwo attnum fields, and pg_class would need two relnatts fields as well.\nA column once created would never change its physical column number, but\nits logical column number might change as a consequence of adding or\ndropping columns before it. ADD COLUMN would ensure that a column added\nto child tables receives the same logical column number as it has in the\nparent table, thus solving the dump/reload problem. DROP COLUMN would\nassign an invalid logical column number to dropped columns. They could\nbe numbered zero except that we'd probably still want a unique index on\nattrelid+attnum, and the index would complain. I'd suggest using\nHiroshi's idea: give a dropped column a logical attnum equal to\n-(physical_attnum + offset).\n\nWith this approach, internal operations on tuples would all use\nphysical column numbers, but operations that interface to the outside\nworld would present a view of only the valid logical columns. For\nexample, the parser would only allow logical columns to be referenced\nby name; \"SELECT *\" would expand to valid logical columns in logical-\ncolumn-number order; COPY would send or receive valid logical columns\nin logical-column-number order; etc.\n\nStored rules and so forth probably should store physical column numbers\nso that they need not be modified during column add/drop.\n\nThis would require looking at all the places in the backend to determine\nwhether they should be working with logical or physical column numbers,\nbut the design is such that most all places would want to be using\nphysical numbers, so I don't think it'd be too painful.\n\nAlthough I'd prefer to give the replacement columns two new names\n(eg, \"attlnum\" and \"attpnum\") to ensure we find all uses, this would\nsurely break applications that examine pg_attribute. For compatibility\nwe'd have to recycle \"attnum\" and \"relnatts\" to indicate logical column\nnumber and logical column count, while adding new fields (say \"attpnum\"\nand \"relnpatts\") for the physical number and count.\n\nComments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 11 Jun 2000 12:22:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE DROP COLUMN "
},
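To make the proposal concrete, here is how the catalog might read after dropping the middle one of three columns, using the compatibility naming from the last paragraph (attnum = logical, attpnum = physical; the exact offset convention is illustrative):

    SELECT attname, attnum, attpnum
      FROM pg_attribute
     WHERE attrelid = (SELECT oid FROM pg_class WHERE relname = 't')
       AND attpnum > 0;

    --  attname | attnum | attpnum
    -- ---------+--------+---------
    --  a       |      1 |       1
    --  b       |  -1002 |       2     <- dropped: -(physical_attnum + offset)
    --  c       |      2 |       3
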
{
"msg_contents": "> >> Seems we have 4 DROP COLUMN ideas:\n> >> Method Advantage\n> >> -----------------------------------------------------------------\n> >> 1\tinvisible column marked by negative attnum\t\tfast\n> >> 2\tinvisible column marked by is_dropped column\t\tfast\n> >> 3\tmake copy of table without column\t\t\tcol removed\n> >> 4\tmake new tuples in existing table without column\tcol removed\n> \n> Bruce and I talked about this by phone yesterday, and we realized that\n> none of these are very satisfactory. #1 and #2 both have the flaw that\n> applications that examine pg_attribute will probably break: they will\n> see a sequence of attnum values with gaps in it. And what should the\n> rel's relnatts field be set to? #3 and #4 are better on that point,\n> but they leave us with the problem of renumbering references to columns\n> after the dropped one in constraints, rules, PL functions, etc.\n\nYes, glad you summarized.\n\n> \n> Furthermore, there is a closely related problem that none of these\n> approaches give us much help on: recursive ALTER TABLE ADD COLUMN.\n> Right now, ADD puts the new column at the end of each table it's added\n> to, which often means that it gets a different column number in child\n> tables than in parent tables. That leads to havoc for pg_dump.\n\nAlso good point.\n\n> \n> I think the only clean solution is to create a clear distinction between\n> physical and logical column numbers. Each pg_attribute tuple would need\n> two attnum fields, and pg_class would need two relnatts fields as well.\n\nExcellent idea.\n\n> A column once created would never change its physical column number, but\n> its logical column number might change as a consequence of adding or\n> dropping columns before it. ADD COLUMN would ensure that a column added\n> to child tables receives the same logical column number as it has in the\n> parent table, thus solving the dump/reload problem. DROP COLUMN would\n> assign an invalid logical column number to dropped columns. They could\n> be numbered zero except that we'd probably still want a unique index on\n> attrelid+attnum, and the index would complain. I'd suggest using\n> Hiroshi's idea: give a dropped column a logical attnum equal to\n> -(physical_attnum + offset).\n\nMy guess is that we would need a unique index on the physical attno, not\nthe logical one. Multiple zero attno's may be fine.\n\n> \n> With this approach, internal operations on tuples would all use\n> physical column numbers, but operations that interface to the outside\n> world would present a view of only the valid logical columns. For\n> example, the parser would only allow logical columns to be referenced\n> by name; \"SELECT *\" would expand to valid logical columns in logical-\n> column-number order; COPY would send or receive valid logical columns\n> in logical-column-number order; etc.\n\nYes, the only hard part will be taking values supplied in logical order\nand moving them into pysical order. Not too hard with dropped columns,\nbecause they are only gaps, but inheritance would require re-ordering\nsome of the values supplied by the user. 
Not hard, just something\nadditional that is needed.\n\n> \n> Stored rules and so forth probably should store physical column numbers\n> so that they need not be modified during column add/drop.\n\nYes!\n\n> \n> This would require looking at all the places in the backend to determine\n> whether they should be working with logical or physical column numbers,\n> but the design is such that most all places would want to be using\n> physical numbers, so I don't think it'd be too painful.\n\nAgreed. Most are physical.\n\n> \n> Although I'd prefer to give the replacement columns two new names\n> (eg, \"attlnum\" and \"attpnum\") to ensure we find all uses, this would\n> surely break applications that examine pg_attribute. For compatibility\n> we'd have to recycle \"attnum\" and \"relnatts\" to indicate logical column\n> number and logical column count, while adding new fields (say \"attpnum\"\n> and \"relnpatts\") for the physical number and count.\n\nCan I recommend keeping attnum and relatts as logical, and adding\nattheapnum and relheapatts so that it clearly shows these are the heap\nvalues, not the user values.\n\nGreat idea. I was seeing things blocked in every option until your\nidea.\n\nAlso, my guess is that Hiroshi's #ifdef's mark the places we need to\nstart looking at.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 11 Jun 2000 21:00:11 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE DROP COLUMN"
},
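An example of the reordering Bruce mentions, using the recursive ADD COLUMN case that motivates it (the table names are hypothetical):

    CREATE TABLE p (a int4);
    CREATE TABLE c (b int4) INHERITS (p);
    ALTER TABLE p ADD COLUMN d int4;

    -- In c the logical order is now (a, d, b) while the physical order
    -- remains (a, b, d).  The user supplies values in logical order:
    INSERT INTO c VALUES (1, 4, 2);
    -- and the executor must place them into physical slots as (1, 2, 4).
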
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> \n> >> Seems we have 4 DROP COLUMN ideas:\n> >> Method Advantage\n> >> -----------------------------------------------------------------\n> >> 1\tinvisible column marked by negative attnum\t\tfast\n> >> 2\tinvisible column marked by is_dropped column\t\tfast\n> >> 3\tmake copy of table without column\t\t\tcol removed\n> >> 4\tmake new tuples in existing table without column\tcol removed\n>\n\nHmm,I've received no pg-ML mails for more than 1 day.\nWhat's happened with pgsql ML ? \n \n> Bruce and I talked about this by phone yesterday, and we realized that\n> none of these are very satisfactory. #1 and #2 both have the flaw that\n> applications that examine pg_attribute will probably break: they will\n> see a sequence of attnum values with gaps in it. And what should the\n> rel's relnatts field be set to? #3 and #4 are better on that point,\n> but they leave us with the problem of renumbering references to columns\n> after the dropped one in constraints, rules, PL functions, etc.\n> \n> Furthermore, there is a closely related problem that none of these\n> approaches give us much help on: recursive ALTER TABLE ADD COLUMN.\n> Right now, ADD puts the new column at the end of each table it's added\n> to, which often means that it gets a different column number in child\n> tables than in parent tables. That leads to havoc for pg_dump.\n>\n\nInheritance is one of the reason why I didn't take #2. I don't understand \nmarking is_dropped is needed or not when pg_attribute is overhauled\nfor inheritance.\nI myself have never wanted to use current inheritance functionality\nmainly because of this big flaw. Judging from the recent discussion\nabout oo(though I don't understand details),the change seems to be\nneeded in order to make inheritance functionality really useful. \n \n> I think the only clean solution is to create a clear distinction between\n> physical and logical column numbers. Each pg_attribute tuple would need\n> two attnum fields, and pg_class would need two relnatts fields as well.\n> A column once created would never change its physical column number, but\n\nI don't understand inheritance well. In the near future wouldn't the\nimplementation require e.g. attid which is common to all children\nof a parent and is never changed ? If so,we would need the third \nattid field which is irrevalent to physical/logical position. If not,\nphysical column number would be sufficient . \n \n> its logical column number might change as a consequence of adding or\n> dropping columns before it. ADD COLUMN would ensure that a column added\n> to child tables receives the same logical column number as it has in the\n> parent table, thus solving the dump/reload problem. DROP COLUMN would\n> assign an invalid logical column number to dropped columns. They could\n> be numbered zero except that we'd probably still want a unique index on\n> attrelid+attnum, and the index would complain. I'd suggest using\n> Hiroshi's idea: give a dropped column a logical attnum equal to\n> -(physical_attnum + offset).\n> \n> With this approach, internal operations on tuples would all use\n> physical column numbers, but operations that interface to the outside\n> world would present a view of only the valid logical columns. 
For\n> example, the parser would only allow logical columns to be referenced\n> by name; \"SELECT *\" would expand to valid logical columns in logical-\n> column-number order; COPY would send or receive valid logical columns\n> in logical-column-number order; etc.\n> \n> Stored rules and so forth probably should store physical column numbers\n> so that they need not be modified during column add/drop.\n> \n> This would require looking at all the places in the backend to determine\n> whether they should be working with logical or physical column numbers,\n> but the design is such that most all places would want to be using\n> physical numbers, so I don't think it'd be too painful.\n> \n> Although I'd prefer to give the replacement columns two new names\n> (eg, \"attlnum\" and \"attpnum\") to ensure we find all uses, this would\n> surely break applications that examine pg_attribute. For compatibility\n> we'd have to recycle \"attnum\" and \"relnatts\" to indicate logical column\n> number and logical column count, while adding new fields (say \"attpnum\"\n> and \"relnpatts\") for the physical number and count.\n>\n\nI agree with you that we would add attpnum and change the meaing of\nattnum as logical column number for backward compatibility.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Mon, 12 Jun 2000 10:40:47 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: ALTER TABLE DROP COLUMN "
},
{
"msg_contents": "> > -----Original Message-----\n> > From: Tom Lane [mailto:[email protected]]\n> > \n> > >> Seems we have 4 DROP COLUMN ideas:\n> > >> Method Advantage\n> > >> -----------------------------------------------------------------\n> > >> 1\tinvisible column marked by negative attnum\t\tfast\n> > >> 2\tinvisible column marked by is_dropped column\t\tfast\n> > >> 3\tmake copy of table without column\t\t\tcol removed\n> > >> 4\tmake new tuples in existing table without column\tcol removed\n> >\n> \n> Hmm,I've received no pg-ML mails for more than 1 day.\n> What's happened with pgsql ML ? \n\nTom says there are tons of messages in the hub.org mail queue, but they\nare not being delivered.\n\n> \n> > Bruce and I talked about this by phone yesterday, and we realized that\n> > none of these are very satisfactory. #1 and #2 both have the flaw that\n> > applications that examine pg_attribute will probably break: they will\n> > see a sequence of attnum values with gaps in it. And what should the\n> > rel's relnatts field be set to? #3 and #4 are better on that point,\n> > but they leave us with the problem of renumbering references to columns\n> > after the dropped one in constraints, rules, PL functions, etc.\n> > \n> > Furthermore, there is a closely related problem that none of these\n> > approaches give us much help on: recursive ALTER TABLE ADD COLUMN.\n> > Right now, ADD puts the new column at the end of each table it's added\n> > to, which often means that it gets a different column number in child\n> > tables than in parent tables. That leads to havoc for pg_dump.\n> >\n> \n> Inheritance is one of the reason why I didn't take #2. I don't understand \n> marking is_dropped is needed or not when pg_attribute is overhauled\n> for inheritance.\n> I myself have never wanted to use current inheritance functionality\n> mainly because of this big flaw. Judging from the recent discussion\n> about oo(though I don't understand details),the change seems to be\n> needed in order to make inheritance functionality really useful. \n\nWhat would happen is that all the logical attributes would be shifted\nover, and a new column added using ADD COLUMN would be put in its place.\nSeems it would work fine.\n\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 11 Jun 2000 21:58:04 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "\nI don't know if this is one of the 5, but my idea of a good\nimplementation is to do the fast invisible approach, and then update\nindividual tuples to the new format the next time they happen to be\nUPDATEd.\n\nTherefore, ALTER TABLE DROP COLUMN, followed by UPDATE foo SET bar=bar;\nwould cause the equiv of (4).\n\n-- \nChris Bitmead\nmailto:[email protected]\nHannu Krosing wrote:\n> \n> Bruce Momjian wrote:\n> >\n> > Seems we have 4 DROP COLUMN ideas:\n> >\n> > Method Advantage\n> > -----------------------------------------------------------------\n> > 1 invisible column marked by negative attnum fast\n> > 2 invisible column marked by is_dropped column fast\n> > 3 make copy of table without column col removed\n> > 4 make new tuples in existing table without column col removed\n> \n> IIRC there was a fifth idea, a variation of 2 that would work better\n> with\n> inheritance -\n> \n> 5 all columns have is_real_column attribute that is true for all\n> coluns\n> present in that relation, so situations like\n> \n> create table tab_a(a_i int);\n> create table tab_b(b_i int) inherits(tab_a);\n> alter table tab_a add column c_i int;\n> \n> can be made to work.\n> \n> It would also require clients to ignore all missing columns that backend\n> can\n> pass to them as nulls (which is usually quite cheap in bandwith usage)\n> in\n> case of \"SELECT **\" queries.\n> \n> We could even rename attno to attid to make folks aware that it is not\n> be\n> assumed to be continuous.\n> \n> > Folks, we had better choose one and get started.\n> >\n> > Number 1 Hiroshi has ifdef'ed out in the code. Items 1 and 2 have\n> > problems with backend code and 3rd party code not seeing the dropped\n> > columns, or having gaps in the attno numbering.\n> \n> If we want to make ADD COLUMN to work with inheritance wihout having to\n> rewrite every single tuple in both parent and inherited tables, we will\n> have to accept the fact that there are caps in in attno numbering.\n> \n> > Number 3 has problems\n> > with making it an atomic operation, and number 4 is described below.\n> \n> Nr 4 has still problems with attno numbering _changing_ during drop\n> which\n> could either be better or worse for client software than having gaps -\n> in both cases client must be prepared to deal with runtime changes in\n> attribute definition.\n> \n> --------------\n> Hannu\n",
"msg_date": "Mon, 12 Jun 2000 23:28:00 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "Chris Bitmead <[email protected]> writes:\n> I don't know if this is one of the 5, but my idea of a good\n> implementation is to do the fast invisible approach, and then update\n> individual tuples to the new format the next time they happen to be\n> UPDATEd.\n\nHow would you tell whether a particular tuple has been updated or not?\n\nFurthermore, how would you remember the old tuple format (or formats)\nso that you'd know how to make the conversion?\n\nSeems to me this approach would require adding some sort of table\nversion number to every tuple header, plus storing a complete set of\nsystem catalog entries for every past version of each table's schema.\nThat's a heck of a high price, in both storage and complexity, for a\nfeature of dubious value...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 12 Jun 2000 11:01:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE DROP COLUMN "
},
{
"msg_contents": "On Sun, 11 Jun 2000, Bruce Momjian wrote:\n\n> > > -----Original Message-----\n> > > From: Tom Lane [mailto:[email protected]]\n> > > \n> > > >> Seems we have 4 DROP COLUMN ideas:\n> > > >> Method Advantage\n> > > >> -----------------------------------------------------------------\n> > > >> 1\tinvisible column marked by negative attnum\t\tfast\n> > > >> 2\tinvisible column marked by is_dropped column\t\tfast\n> > > >> 3\tmake copy of table without column\t\t\tcol removed\n> > > >> 4\tmake new tuples in existing table without column\tcol removed\n> > >\n> > \n> > Hmm,I've received no pg-ML mails for more than 1 day.\n> > What's happened with pgsql ML ? \n> \n> Tom says there are tons of messages in the hub.org mail queue, but they\n> are not being delivered.\n\nI was out for the past 4 days taking a little girl camping for her b-day\n... great weekend, but we had a process run rampant over the weekend that\ncaused the loadavg to skyrocket. For anyone that has ever used sendmail,\nthey will know that a high load will cause sendmail to essentially shut\nitself down, queuing only up to a certain point, refusing connections\nafter that ... queuing is at a loadavg of 15, refusing connections at 20,\nthe machine was slightly higher then that ...\n\nJust checked the queue, and now that the load is back down, the queue is\npretty much flushed out again ...\n\n\n",
"msg_date": "Mon, 12 Jun 2000 14:12:38 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "\nYes, it would need to work as you describe below. Such a scheme is used\nin several object databases I know of. (Versant being one) where it\nworks great. It's not just useful for drop column, but also things like\nadd column with default value. It means that you can add and drop\ncolumns to your hearts content in the blink of an eye, and yet\nultimately not pay the price in terms of storage costs.\n\nBut yep, it's a lot more work, and understandable if there isn't\nenthusiasm for doing it.\n\nTom Lane wrote:\n> \n> Chris Bitmead <[email protected]> writes:\n> > I don't know if this is one of the 5, but my idea of a good\n> > implementation is to do the fast invisible approach, and then update\n> > individual tuples to the new format the next time they happen to be\n> > UPDATEd.\n> \n> How would you tell whether a particular tuple has been updated or not?\n> \n> Furthermore, how would you remember the old tuple format (or formats)\n> so that you'd know how to make the conversion?\n> \n> Seems to me this approach would require adding some sort of table\n> version number to every tuple header, plus storing a complete set of\n> system catalog entries for every past version of each table's schema.\n> That's a heck of a high price, in both storage and complexity, for a\n> feature of dubious value...\n",
"msg_date": "Tue, 13 Jun 2000 10:04:11 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "Hiroshi Inoue wrote:\n\n> I don't understand inheritance well. In the near future wouldn't the\n> implementation require e.g. attid which is common to all children\n> of a parent and is never changed ? If so,we would need the third\n> attid field which is irrevalent to physical/logical position. If not,\n> physical column number would be sufficient .\n\nWe only need something like a unique attid of course if we support\ncolumn renaming in child tables. Otherwise the attname is sufficient to\nmatch up child-parent columns.\n\nIf/when we support renaming, probably a parent_column_oid in\npg_attribute might be one way to go.\n\nYour idea seems fine Tom.\n",
"msg_date": "Tue, 13 Jun 2000 10:16:24 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "> -----Original Message-----\n> From: Chris Bitmead\n> \n> Hiroshi Inoue wrote:\n> \n> > I don't understand inheritance well. In the near future wouldn't the\n> > implementation require e.g. attid which is common to all children\n> > of a parent and is never changed ? If so,we would need the third\n> > attid field which is irrevalent to physical/logical position. If not,\n> > physical column number would be sufficient .\n> \n> We only need something like a unique attid of course if we support\n> column renaming in child tables. Otherwise the attname is sufficient to\n> match up child-parent columns.\n>\n\nThere are some objects which keep plans etc as compiled\nstate.\n\ncreate table t1 (i1 int4);\ncreate table t2 (i2 int4) inherits t1;\ncreate table t3 (i3 int4) inherits t2;\nalter table t1 add column i4 int4;\n\nFor each table,the list of (column, logical number, physical number)\nwould be as follows.\n\nt1 (i1, 1, 1) (i4, 2, 2)\nt2 (i1, 1, 1) (i4, 2, 3) (i2, 3, 2)\nt3 (i1, 1, 1) (i4, 2, 4) (i2, 3, 2) (i3, 4, 3)\n\nAt this point the compilation of 'select * from t1(*?)' would mean\n\tselect (physical #1),(physical #2) from t1 +\n\tselect (physical #1),(physical #3) from t2 +\n\tselect (physical #1),(physical #4) from t3 \n\nNote that physical # aren't common for column i4.\nI've wanted to confirm that above compilation would be OK for \nthe (near) future enhancement of inheritance functionality.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Tue, 13 Jun 2000 12:08:02 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "\nOn thinking about it I can definitely see your point about wanting an\nattrid that is common across the hierarchy, regardless of compiled\nplans. There would be some merit in splitting up pg_attribute into two\nparts. One part is common across all classes in the hierarchy, the other\npart is specific to one class. Then the oid of the common part is the\nattrid you refer to.\n\nHowever, I'm not sure this directly affects Tom's proposal. Selects from\nhierarchies are implemented in terms of a union of all the classes in\nthe hierarchy. Wouldn't the compiled plan refer to physical ids? In any\ncase, if UNION can be made to work, I would think select hierarchies\nautomatically would work too.\n\nHiroshi Inoue wrote:\n> \n> > -----Original Message-----\n> > From: Chris Bitmead\n> >\n> > Hiroshi Inoue wrote:\n> >\n> > > I don't understand inheritance well. In the near future wouldn't the\n> > > implementation require e.g. attid which is common to all children\n> > > of a parent and is never changed ? If so,we would need the third\n> > > attid field which is irrevalent to physical/logical position. If not,\n> > > physical column number would be sufficient .\n> >\n> > We only need something like a unique attid of course if we support\n> > column renaming in child tables. Otherwise the attname is sufficient to\n> > match up child-parent columns.\n> >\n> \n> There are some objects which keep plans etc as compiled\n> state.\n> \n> create table t1 (i1 int4);\n> create table t2 (i2 int4) inherits t1;\n> create table t3 (i3 int4) inherits t2;\n> alter table t1 add column i4 int4;\n> \n> For each table,the list of (column, logical number, physical number)\n> would be as follows.\n> \n> t1 (i1, 1, 1) (i4, 2, 2)\n> t2 (i1, 1, 1) (i4, 2, 3) (i2, 3, 2)\n> t3 (i1, 1, 1) (i4, 2, 4) (i2, 3, 2) (i3, 4, 3)\n> \n> At this point the compilation of 'select * from t1(*?)' would mean\n> select (physical #1),(physical #2) from t1 +\n> select (physical #1),(physical #3) from t2 +\n> select (physical #1),(physical #4) from t3\n> \n> Note that physical # aren't common for column i4.\n> I've wanted to confirm that above compilation would be OK for\n> the (near) future enhancement of inheritance functionality.\n> \n> Regards.\n> \n> Hiroshi Inoue\n> [email protected]\n",
"msg_date": "Tue, 13 Jun 2000 13:19:57 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> create table t1 (i1 int4);\n> create table t2 (i2 int4) inherits t1;\n> create table t3 (i3 int4) inherits t2;\n> alter table t1 add column i4 int4;\n\n> For each table,the list of (column, logical number, physical number)\n> would be as follows.\n\n> t1 (i1, 1, 1) (i4, 2, 2)\n> t2 (i1, 1, 1) (i4, 2, 3) (i2, 3, 2)\n> t3 (i1, 1, 1) (i4, 2, 4) (i2, 3, 2) (i3, 4, 3)\n\n> At this point the compilation of 'select * from t1(*?)' would mean\n> \tselect (physical #1),(physical #2) from t1 +\n> \tselect (physical #1),(physical #3) from t2 +\n> \tselect (physical #1),(physical #4) from t3 \n\n> Note that physical # aren't common for column i4.\n\nThat's no different from the current situation: the planner already must\n(and does) adjust column numbers for each derived table while expanding\nan inherited query. It's kind of a pain but hardly an insurmountable\nproblem.\n\nCurrently the matching is done by column name. We could possibly match\non logical column position instead --- not sure if that's better or\nworse.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 12 Jun 2000 23:54:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE DROP COLUMN "
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> > create table t1 (i1 int4);\n> > create table t2 (i2 int4) inherits t1;\n> > create table t3 (i3 int4) inherits t2;\n> > alter table t1 add column i4 int4;\n> \n> > For each table,the list of (column, logical number, physical number)\n> > would be as follows.\n> \n> > t1 (i1, 1, 1) (i4, 2, 2)\n> > t2 (i1, 1, 1) (i4, 2, 3) (i2, 3, 2)\n> > t3 (i1, 1, 1) (i4, 2, 4) (i2, 3, 2) (i3, 4, 3)\n> \n> > At this point the compilation of 'select * from t1(*?)' would mean\n> > \tselect (physical #1),(physical #2) from t1 +\n> > \tselect (physical #1),(physical #3) from t2 +\n> > \tselect (physical #1),(physical #4) from t3 \n> \n> > Note that physical # aren't common for column i4.\n> \n> That's no different from the current situation:\n\nYes your proposal has no problem currently. I'm only\nanxious about oo feature. Recently there has been a\ndiscussion around oo and we would be able to expect \nthe progress in the near future. If oo people never mind,\nyour proposal would be OK.\n \nRegards.\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Tue, 13 Jun 2000 15:28:54 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: ALTER TABLE DROP COLUMN "
},
{
"msg_contents": "OK, I am opening this can of worms again. I personally would like to\nsee this code activated, even if it does take 2x the disk space to alter\na column. Hiroshi had other ideas. Where did we leave this? We have\none month to decide on a plan.\n\n\n> Bruce Momjian <[email protected]> writes:\n> > You can exclusively lock the table, then do a heap_getnext() scan over\n> > the entire table, remove the dropped column, do a heap_insert(), then a\n> > heap_delete() on the current tuple, making sure to skip over the tuples\n> > inserted by the current transaction. When completed, remove the column\n> > from pg_attribute, mark the transaction as committed (if desired), and\n> > run vacuum over the table to remove the deleted rows.\n> \n> Hmm, that would work --- the new tuples commit at the same instant that\n> the schema updates commit, so it should be correct. You have the 2x\n> disk usage problem, but there's no way around that without losing\n> rollback ability.\n> \n> A potentially tricky bit will be persuading the tuple-reading and tuple-\n> writing subroutines to pay attention to different versions of the tuple\n> structure for the same table. I haven't looked to see if this will be\n> difficult or not. If you can pass the TupleDesc explicitly then it\n> shouldn't be a problem.\n> \n> I'd suggest that the cleanup vacuum *not* be an automatic part of\n> the operation; just recommend that people do it ASAP after dropping\n> a column. Consider needing to drop several columns...\n> \n> \t\t\tregards, tom lane\n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 29 Sep 2000 22:32:00 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> OK, I am opening this can of worms again. I personally would like to\n> see this code activated, even if it does take 2x the disk space to alter\n> a column. Hiroshi had other ideas. Where did we leave this? We have\n> one month to decide on a plan.\n\nI think the plan should be to do nothing for 7.1. ALTER DROP COLUMN\nisn't an especially pressing feature, and so I don't feel that we\nshould be hustling to squeeze it in just before beta. We're already\noverdue for beta.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 29 Sep 2000 23:37:20 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE DROP COLUMN "
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane\n>\n> Bruce Momjian <[email protected]> writes:\n> > OK, I am opening this can of worms again. I personally would like to\n> > see this code activated, even if it does take 2x the disk space to alter\n> > a column. Hiroshi had other ideas. Where did we leave this? We have\n> > one month to decide on a plan.\n>\n> I think the plan should be to do nothing for 7.1. ALTER DROP COLUMN\n> isn't an especially pressing feature, and so I don't feel that we\n> should be hustling to squeeze it in just before beta. We're already\n> overdue for beta.\n>\n\nSeems some people expect the implementation in 7.1.\n(recent [GENERAL} drop column?)\nI could commit my local branch if people don't mind\nbackward incompatibility.\nI've maintained the branch for more than 1 month\nand it implements the following TODOs.\n\n* Add ALTER TABLE DROP COLUMN feature\n* ALTER TABLE ADD COLUMN to inherited table put column in wrong place\n* Prevent column dropping if column is used by foreign key\n\nComments ?\n\nHiroshi Inoue\n\nP.S. I've noticed that get_rte_attribute_name() seems to\nbreak my implementation. I'm not sure if I could solve it.\n\n",
"msg_date": "Fri, 6 Oct 2000 03:12:04 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: ALTER TABLE DROP COLUMN "
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> Seems some people expect the implementation in 7.1.\n> (recent [GENERAL} drop column?)\n> I could commit my local branch if people don't mind\n> backward incompatibility.\n\nI've lost track --- is this different from the _DROP_COLUMN_HACK__\ncode that's already in CVS? I really really didn't like that\nimplementation :-(, but I forget what other methods were being\ndiscussed.\n\n> P.S. I've noticed that get_rte_attribute_name() seems to\n> break my implementation. I'm not sure if I could solve it.\n\nThat would be a problem --- rule dumping depends on that code to\nproduce correct aliases, so making it work is not optional.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 05 Oct 2000 15:28:23 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE DROP COLUMN "
},
{
"msg_contents": "\n\nTom Lane wrote:\n\n> \"Hiroshi Inoue\" <[email protected]> writes:\n> > Seems some people expect the implementation in 7.1.\n> > (recent [GENERAL} drop column?)\n> > I could commit my local branch if people don't mind\n> > backward incompatibility.\n>\n> I've lost track --- is this different from the _DROP_COLUMN_HACK__\n> code that's already in CVS? I really really didn't like that\n> implementation :-(, but I forget what other methods were being\n> discussed.\n>\n\nMy current local trial implementation follows your idea(logical/\nphysical attribute numbers).\n\n\n> > P.S. I've noticed that get_rte_attribute_name() seems to\n> > break my implementation. I'm not sure if I could solve it.\n>\n> That would be a problem --- rule dumping depends on that code to\n> produce correct aliases, so making it work is not optional.\n>\n\nYour change has no problem if logical==physical attribute\nnumbers.\n\nRegards.\n\nHiroshi Inoue\n\n",
"msg_date": "Fri, 06 Oct 2000 08:12:14 +0900",
"msg_from": "Hiroshi Inoue <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "Hiroshi Inoue <[email protected]> writes:\n>>>> P.S. I've noticed that get_rte_attribute_name() seems to\n>>>> break my implementation. I'm not sure if I could solve it.\n>> \n>> That would be a problem --- rule dumping depends on that code to\n>> produce correct aliases, so making it work is not optional.\n\n> Your change has no problem if logical==physical attribute\n> numbers.\n\nBut if they're not, what do we do? Can we define the order of the\nalias-name lists as being one or the other numbering? (Offhand I'd\nsay it should be logical numbering, but I haven't chased the details.)\nIf neither of those work, we'll need some more complex datastructure\nthan a simple list.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 05 Oct 2000 22:42:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE DROP COLUMN "
},
{
"msg_contents": "\n\nTom Lane wrote:\n\n> Hiroshi Inoue <[email protected]> writes:\n> >>>> P.S. I've noticed that get_rte_attribute_name() seems to\n> >>>> break my implementation. I'm not sure if I could solve it.\n> >>\n> >> That would be a problem --- rule dumping depends on that code to\n> >> produce correct aliases, so making it work is not optional.\n>\n> > Your change has no problem if logical==physical attribute\n> > numbers.\n>\n> But if they're not, what do we do? Can we define the order of the\n> alias-name lists as being one or the other numbering? (Offhand I'd\n> say it should be logical numbering, but I haven't chased the details.)\n> If neither of those work, we'll need some more complex datastructure\n> than a simple list.\n>\n\nI'm not sure if we could keep invariant attribute numbers.\nThough I've used physical attribute numbers as many as possible\nin my trial implementation,there's already an exception.\nI had to use logical attribute numbers for FieldSelect node.\n\nRegards.\n\nHiroshi Inoue\n\n\n",
"msg_date": "Fri, 06 Oct 2000 12:05:58 +0900",
"msg_from": "Hiroshi Inoue <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "At 12:05 6/10/00 +0900, Hiroshi Inoue wrote:\n>\n>Tom Lane wrote:\n>\n>> Hiroshi Inoue <[email protected]> writes:\n>> >>>> P.S. I've noticed that get_rte_attribute_name() seems to\n>> >>>> break my implementation. I'm not sure if I could solve it.\n>> >>\n>> >> That would be a problem --- rule dumping depends on that code to\n>> >> produce correct aliases, so making it work is not optional.\n>>\n>> > Your change has no problem if logical==physical attribute\n>> > numbers.\n>>\n>> But if they're not, what do we do? Can we define the order of the\n>> alias-name lists as being one or the other numbering? (Offhand I'd\n>> say it should be logical numbering, but I haven't chased the details.)\n>> If neither of those work, we'll need some more complex datastructure\n>> than a simple list.\n>>\n>\n>I'm not sure if we could keep invariant attribute numbers.\n>Though I've used physical attribute numbers as many as possible\n>in my trial implementation,there's already an exception.\n>I had to use logical attribute numbers for FieldSelect node.\n>\n\nNot really a useful suggestion at this stage, but it seems to me that\nstoring plans and/or parse trees is possibly a false economy. Would it be\nworth considering storing the relevant SQL (or a parse tree with field &\ntable names) and compiling the rule in each backend the first time it is\nused? (and keep it for the life of the backend).\n\nThis would allow underlying view tables to be deleted/added as well as make\nthe above problem go away. The 'parse tree with names' would also enable\neasy construction of dependency information when and if that is implemented...\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Fri, 06 Oct 2000 14:26:24 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "\nseconded ...\n\nOn Fri, 29 Sep 2000, Tom Lane wrote:\n\n> Bruce Momjian <[email protected]> writes:\n> > OK, I am opening this can of worms again. I personally would like to\n> > see this code activated, even if it does take 2x the disk space to alter\n> > a column. Hiroshi had other ideas. Where did we leave this? We have\n> > one month to decide on a plan.\n> \n> I think the plan should be to do nothing for 7.1. ALTER DROP COLUMN\n> isn't an especially pressing feature, and so I don't feel that we\n> should be hustling to squeeze it in just before beta. We're already\n> overdue for beta.\n> \n> \t\t\tregards, tom lane\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sat, 7 Oct 2000 21:07:01 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE DROP COLUMN "
},
{
"msg_contents": "On Thu, 5 Oct 2000, Tom Lane wrote:\n\n> \"Hiroshi Inoue\" <[email protected]> writes:\n> > Seems some people expect the implementation in 7.1.\n> > (recent [GENERAL} drop column?)\n> > I could commit my local branch if people don't mind\n> > backward incompatibility.\n\nthere have been several ideas thrown back and forth ... the best one that\nI saw, forgetting who suggested it, had to do with the idea of locking the\ntable and doing an effective vacuum on that table with a 'row re-write'\nhappening ...\n\nBasically, move the first 100 rows to the end of the table file, then take\n100 and write it to position 0, 101 to position 1, etc ... that way, at\nmax, you are using ( tuple * 100 ) bytes of disk space, vs 2x the table\nsize ... either method is going to lock the file for a period of time, but\none is much more friendly as far as disk space is concerned *plus*, if RAM\nis available for this, it might even be something that the backend could\nuse up to -S blocks of RAM to do it off disk? If I set -S to 64meg, and\nthe table is 24Meg in size, it could do it all in memory?\n\n\n",
"msg_date": "Sat, 7 Oct 2000 21:11:55 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE DROP COLUMN "
},
{
"msg_contents": "> Basically, move the first 100 rows to the end of the table file, then take\n> 100 and write it to position 0, 101 to position 1, etc ... that way, at\n> max, you are using ( tuple * 100 ) bytes of disk space, vs 2x the table\n> size ... either method is going to lock the file for a period of time, but\n> one is much more friendly as far as disk space is concerned *plus*, if RAM\n> is available for this, it might even be something that the backend could\n> use up to -S blocks of RAM to do it off disk? If I set -S to 64meg, and\n> the table is 24Meg in size, it could do it all in memory?\n\nYes, I liked that too.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 9 Oct 2000 13:32:56 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Basically, move the first 100 rows to the end of the table file, then take\n>> 100 and write it to position 0, 101 to position 1, etc ... that way, at\n>> max, you are using ( tuple * 100 ) bytes of disk space, vs 2x the table\n>> size ... either method is going to lock the file for a period of time, but\n>> one is much more friendly as far as disk space is concerned *plus*, if RAM\n>> is available for this, it might even be something that the backend could\n>> use up to -S blocks of RAM to do it off disk? If I set -S to 64meg, and\n>> the table is 24Meg in size, it could do it all in memory?\n\n> Yes, I liked that too.\n\nWhat happens if you crash partway through?\n\nI don't think it's possible to build a crash-robust rewriting ALTER\nprocess that doesn't use 2X disk space: you must have all the old tuples\nAND all the new tuples down on disk simultaneously just before you\ncommit. The only way around 2X disk space is to adopt some logical\nrenumbering approach to the columns, so that you can pretend the dropped\ncolumn isn't there anymore when it really still is.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 09 Oct 2000 13:37:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE DROP COLUMN "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> Basically, move the first 100 rows to the end of the table file, then take\n> >> 100 and write it to position 0, 101 to position 1, etc ... that way, at\n> >> max, you are using ( tuple * 100 ) bytes of disk space, vs 2x the table\n> >> size ... either method is going to lock the file for a period of time, but\n> >> one is much more friendly as far as disk space is concerned *plus*, if RAM\n> >> is available for this, it might even be something that the backend could\n> >> use up to -S blocks of RAM to do it off disk? If I set -S to 64meg, and\n> >> the table is 24Meg in size, it could do it all in memory?\n> \n> > Yes, I liked that too.\n> \n> What happens if you crash partway through?\n> \n> I don't think it's possible to build a crash-robust rewriting ALTER\n> process that doesn't use 2X disk space: you must have all the old tuples\n> AND all the new tuples down on disk simultaneously just before you\n> commit. The only way around 2X disk space is to adopt some logical\n> renumbering approach to the columns, so that you can pretend the dropped\n> column isn't there anymore when it really still is.\n\nYes, I liked the 2X disk space, and making the new tuples visible all at\nonce at the end.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 9 Oct 2000 13:40:30 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "On Mon, 9 Oct 2000, Tom Lane wrote:\n\n> Bruce Momjian <[email protected]> writes:\n> >> Basically, move the first 100 rows to the end of the table file, then take\n> >> 100 and write it to position 0, 101 to position 1, etc ... that way, at\n> >> max, you are using ( tuple * 100 ) bytes of disk space, vs 2x the table\n> >> size ... either method is going to lock the file for a period of time, but\n> >> one is much more friendly as far as disk space is concerned *plus*, if RAM\n> >> is available for this, it might even be something that the backend could\n> >> use up to -S blocks of RAM to do it off disk? If I set -S to 64meg, and\n> >> the table is 24Meg in size, it could do it all in memory?\n> \n> > Yes, I liked that too.\n> \n> What happens if you crash partway through?\n\nwhat happens if you crash partway through a vacuum?\n\n> I don't think it's possible to build a crash-robust rewriting ALTER\n> process that doesn't use 2X disk space: you must have all the old\n> tuples AND all the new tuples down on disk simultaneously just before\n> you commit. The only way around 2X disk space is to adopt some\n> logical renumbering approach to the columns, so that you can pretend\n> the dropped column isn't there anymore when it really still is.\n\nhow about a combination of the two? basically, we're gonna want a vacuum\nof the table after the alter to clean out those extra columns that we've\nmarked as 'dead' ... basically, anything that avoids tht whole 2x disk\nspace option is cool ...\n\n\n",
"msg_date": "Mon, 9 Oct 2000 16:57:48 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE DROP COLUMN "
},
{
"msg_contents": "On Mon, 9 Oct 2000, Bruce Momjian wrote:\n\n> > Bruce Momjian <[email protected]> writes:\n> > >> Basically, move the first 100 rows to the end of the table file, then take\n> > >> 100 and write it to position 0, 101 to position 1, etc ... that way, at\n> > >> max, you are using ( tuple * 100 ) bytes of disk space, vs 2x the table\n> > >> size ... either method is going to lock the file for a period of time, but\n> > >> one is much more friendly as far as disk space is concerned *plus*, if RAM\n> > >> is available for this, it might even be something that the backend could\n> > >> use up to -S blocks of RAM to do it off disk? If I set -S to 64meg, and\n> > >> the table is 24Meg in size, it could do it all in memory?\n> > \n> > > Yes, I liked that too.\n> > \n> > What happens if you crash partway through?\n> > \n> > I don't think it's possible to build a crash-robust rewriting ALTER\n> > process that doesn't use 2X disk space: you must have all the old tuples\n> > AND all the new tuples down on disk simultaneously just before you\n> > commit. The only way around 2X disk space is to adopt some logical\n> > renumbering approach to the columns, so that you can pretend the dropped\n> > column isn't there anymore when it really still is.\n> \n> Yes, I liked the 2X disk space, and making the new tuples visible all at\n> once at the end.\n\nman, are you ever wishy-washy on this issue, aren't you? :) you like not\nusing 2x, you like using 2x ... :)\n\n\n",
"msg_date": "Mon, 9 Oct 2000 16:58:28 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n>> What happens if you crash partway through?\n\n> what happens if you crash partway through a vacuum?\n\nNothing. Vacuum is crash-safe. ALTER TABLE should be too.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 09 Oct 2000 16:19:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE DROP COLUMN "
},
{
"msg_contents": "On Mon, 9 Oct 2000, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> >> What happens if you crash partway through?\n> \n> > what happens if you crash partway through a vacuum?\n> \n> Nothing. Vacuum is crash-safe. ALTER TABLE should be too.\n\nSorry, that's what I meant ... why should marking a column as 'deleted'\nand running a 'vacuum' to clean up the physical table be any less\ncrash-safe? \n\n",
"msg_date": "Mon, 9 Oct 2000 17:30:05 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE DROP COLUMN "
},
{
"msg_contents": "> On Mon, 9 Oct 2000, Tom Lane wrote:\n> \n> > The Hermit Hacker <[email protected]> writes:\n> > >> What happens if you crash partway through?\n> > \n> > > what happens if you crash partway through a vacuum?\n> > \n> > Nothing. Vacuum is crash-safe. ALTER TABLE should be too.\n> \n> Sorry, that's what I meant ... why should marking a column as 'deleted'\n> and running a 'vacuum' to clean up the physical table be any less\n> crash-safe? \n\nIt is not. The only downside is 2x disk space to make new versions of\nthe tuple.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 9 Oct 2000 16:35:18 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "On Mon, 9 Oct 2000, Bruce Momjian wrote:\n\n> > On Mon, 9 Oct 2000, Tom Lane wrote:\n> > \n> > > The Hermit Hacker <[email protected]> writes:\n> > > >> What happens if you crash partway through?\n> > > \n> > > > what happens if you crash partway through a vacuum?\n> > > \n> > > Nothing. Vacuum is crash-safe. ALTER TABLE should be too.\n> > \n> > Sorry, that's what I meant ... why should marking a column as 'deleted'\n> > and running a 'vacuum' to clean up the physical table be any less\n> > crash-safe? \n> \n> It is not. The only downside is 2x disk space to make new versions of\n> the tuple.\n\nhuh? vacuum moves/cleans up tuples, as well as compresses them, so that\nthe end result is a smaller table then what it started with, at/with very\nlittle increase in the total size/space needed to perform the vacuum ...\n\nif we reduced vacuum such that it compressed at the field level vs tuple,\nwe could move a few tuples to the end of the table (crash safe) and then\nmove N+1 to position 1 minus that extra field. If we mark the column as\nbeing deleted, then if the system crashes part way through, it should be\npossible to continue after the system is brought up, no?\n\n\n",
"msg_date": "Mon, 9 Oct 2000 18:04:15 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "> > > Sorry, that's what I meant ... why should marking a column as 'deleted'\n> > > and running a 'vacuum' to clean up the physical table be any less\n> > > crash-safe? \n> > \n> > It is not. The only downside is 2x disk space to make new versions of\n> > the tuple.\n> \n> huh? vacuum moves/cleans up tuples, as well as compresses them, so that\n> the end result is a smaller table then what it started with, at/with very\n> little increase in the total size/space needed to perform the vacuum ...\n> \n> if we reduced vacuum such that it compressed at the field level vs tuple,\n> we could move a few tuples to the end of the table (crash safe) and then\n> move N+1 to position 1 minus that extra field. If we mark the column as\n> being deleted, then if the system crashes part way through, it should be\n> possible to continue after the system is brought up, no?\n\nIf it crashes in the middle, some rows have the column removed, and some\ndo not.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 9 Oct 2000 17:16:21 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "On Mon, 9 Oct 2000, Bruce Momjian wrote:\n\n> > > > Sorry, that's what I meant ... why should marking a column as 'deleted'\n> > > > and running a 'vacuum' to clean up the physical table be any less\n> > > > crash-safe? \n> > > \n> > > It is not. The only downside is 2x disk space to make new versions of\n> > > the tuple.\n> > \n> > huh? vacuum moves/cleans up tuples, as well as compresses them, so that\n> > the end result is a smaller table then what it started with, at/with very\n> > little increase in the total size/space needed to perform the vacuum ...\n> > \n> > if we reduced vacuum such that it compressed at the field level vs tuple,\n> > we could move a few tuples to the end of the table (crash safe) and then\n> > move N+1 to position 1 minus that extra field. If we mark the column as\n> > being deleted, then if the system crashes part way through, it should be\n> > possible to continue after the system is brought up, no?\n> \n> If it crashes in the middle, some rows have the column removed, and some\n> do not.\n\nhrmm .. mvcc uses a timestamp, no? is there no way of using that\ntimestamp to determine which columns have/haven't been cleaned up\nfollowing a crash? maybe some way of marking a table as being in a 'drop\ncolumn' mode, so that when it gets brought back up again, it is scan'd for\nany tuples older then that date? \n\n",
"msg_date": "Mon, 9 Oct 2000 18:36:05 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n>> It is not. The only downside is 2x disk space to make new versions of\n>> the tuple.\n\n> huh? vacuum moves/cleans up tuples, as well as compresses them, so that\n> the end result is a smaller table then what it started with, at/with very\n> little increase in the total size/space needed to perform the vacuum ...\n\nHuh? right back at you ;-). Vacuum is very careful to make sure that\nit always has two copies of any tuple it moves. The reason it's not 2x\ndisk space is that it only moves tuples to fill free space in existing\npages of the file. So the moved tuples take up space-that-was-free as\nwell as the space they were originally in. But this has nothing\nwhatever to do with the requirements of ALTER DROP COLUMN --- to be\nsafe, that must have two copies of every tuple, free space or no free\nspace.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 09 Oct 2000 17:43:50 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE DROP COLUMN "
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> hrmm .. mvcc uses a timestamp, no? is there no way of using that\n> timestamp to determine which columns have/haven't been cleaned up\n> following a crash? maybe some way of marking a table as being in a 'drop\n> column' mode, so that when it gets brought back up again, it is scan'd for\n> any tuples older then that date? \n\nWAL would provide the framework to do something like that, but I still\nsay it'd be a bad idea. What you're describing is\nirrevocable-once-it-starts DROP COLUMN; there is no way to roll it back.\nWe're trying to get rid of statements that act that way, not add more.\n\nI am not convinced that a 2x penalty for DROP COLUMN is such a huge\nproblem that we should give up all the normal safety features of SQL\nin order to avoid it. Seems to me that DROP COLUMN is only a big issue\nduring DB development, when you're usually working with relatively small\namounts of test data anyway.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 09 Oct 2000 18:09:17 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE DROP COLUMN "
},
{
"msg_contents": "On Mon, 9 Oct 2000, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > hrmm .. mvcc uses a timestamp, no? is there no way of using that\n> > timestamp to determine which columns have/haven't been cleaned up\n> > following a crash? maybe some way of marking a table as being in a 'drop\n> > column' mode, so that when it gets brought back up again, it is scan'd for\n> > any tuples older then that date? \n> \n> WAL would provide the framework to do something like that, but I still\n> say it'd be a bad idea. What you're describing is\n> irrevocable-once-it-starts DROP COLUMN; there is no way to roll it back.\n> We're trying to get rid of statements that act that way, not add more.\n\nHrmmmm ... this one I can't really argue, or, at least, can't think of\nanything right now :( \n\n> I am not convinced that a 2x penalty for DROP COLUMN is such a huge\n> problem that we should give up all the normal safety features of SQL\n> in order to avoid it. Seems to me that DROP COLUMN is only a big\n> issue during DB development, when you're usually working with\n> relatively small amounts of test data anyway.\n\nActually, I could see DROP COLUMN being useful in a few other places\n... recently, I spent several hours re-structuring a clients database that\nhad been built by someone else who didn't know what 'relational' means in\nRDBMS ... or how about an application developer that decides to\nrestructure their schema's in a new release and provides an 'upgrade.sql'\nscript that is designed to do this? \n\nA good example might be the UDMSearch stuff, where you have tables that\nare quite large, but they decide that they want to remove the 'base URL'\ncomponent' of one table and put it into another table? a nice update\nscript could go something like (pseudo like):\n\nADD COLUMN base_url int;\nINSERT INTO new_table SELECT base_url_text FROM table;\nDROP COLUMN base_url_text;\n\nThat would make for a very painful upgrade process if I have to go through\nthe trouble of upgrading my hardware to add more space ...\n\n",
"msg_date": "Mon, 9 Oct 2000 19:55:39 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE DROP COLUMN "
},
{
"msg_contents": "> The Hermit Hacker <[email protected]> writes:\n> > hrmm .. mvcc uses a timestamp, no? is there no way of using that\n> > timestamp to determine which columns have/haven't been cleaned up\n> > following a crash? maybe some way of marking a table as being in a 'drop\n> > column' mode, so that when it gets brought back up again, it is scan'd for\n> > any tuples older then that date? \n> \n> WAL would provide the framework to do something like that, but I still\n> say it'd be a bad idea. What you're describing is\n> irrevocable-once-it-starts DROP COLUMN; there is no way to roll it back.\n> We're trying to get rid of statements that act that way, not add more.\n> \n> I am not convinced that a 2x penalty for DROP COLUMN is such a huge\n> problem that we should give up all the normal safety features of SQL\n> in order to avoid it. Seems to me that DROP COLUMN is only a big issue\n> during DB development, when you're usually working with relatively small\n> amounts of test data anyway.\n> \n\nBingo!\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 9 Oct 2000 19:22:13 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "On Mon, 9 Oct 2000, Bruce Momjian wrote:\n\n> > The Hermit Hacker <[email protected]> writes:\n> > > hrmm .. mvcc uses a timestamp, no? is there no way of using that\n> > > timestamp to determine which columns have/haven't been cleaned up\n> > > following a crash? maybe some way of marking a table as being in a 'drop\n> > > column' mode, so that when it gets brought back up again, it is scan'd for\n> > > any tuples older then that date? \n> > \n> > WAL would provide the framework to do something like that, but I still\n> > say it'd be a bad idea. What you're describing is\n> > irrevocable-once-it-starts DROP COLUMN; there is no way to roll it back.\n> > We're trying to get rid of statements that act that way, not add more.\n> > \n> > I am not convinced that a 2x penalty for DROP COLUMN is such a huge\n> > problem that we should give up all the normal safety features of SQL\n> > in order to avoid it. Seems to me that DROP COLUMN is only a big issue\n> > during DB development, when you're usually working with relatively small\n> > amounts of test data anyway.\n> > \n> \n> Bingo!\n\nyou are jumping on your 'I agree/Bingo' much much too fast :) \n\n\n",
"msg_date": "Mon, 9 Oct 2000 20:38:30 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "> On Mon, 9 Oct 2000, Bruce Momjian wrote:\n> \n> > > The Hermit Hacker <[email protected]> writes:\n> > > > hrmm .. mvcc uses a timestamp, no? is there no way of using that\n> > > > timestamp to determine which columns have/haven't been cleaned up\n> > > > following a crash? maybe some way of marking a table as being in a 'drop\n> > > > column' mode, so that when it gets brought back up again, it is scan'd for\n> > > > any tuples older then that date? \n> > > \n> > > WAL would provide the framework to do something like that, but I still\n> > > say it'd be a bad idea. What you're describing is\n> > > irrevocable-once-it-starts DROP COLUMN; there is no way to roll it back.\n> > > We're trying to get rid of statements that act that way, not add more.\n> > > \n> > > I am not convinced that a 2x penalty for DROP COLUMN is such a huge\n> > > problem that we should give up all the normal safety features of SQL\n> > > in order to avoid it. Seems to me that DROP COLUMN is only a big issue\n> > > during DB development, when you're usually working with relatively small\n> > > amounts of test data anyway.\n> > > \n> > \n> > Bingo!\n> \n> you are jumping on your 'I agree/Bingo' much much too fast :) \n\nYou know this DROP COLUMN is a hot button for me.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 9 Oct 2000 19:46:06 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "On Mon, 9 Oct 2000, Bruce Momjian wrote:\n\n> > On Mon, 9 Oct 2000, Bruce Momjian wrote:\n> > \n> > > > The Hermit Hacker <[email protected]> writes:\n> > > > > hrmm .. mvcc uses a timestamp, no? is there no way of using that\n> > > > > timestamp to determine which columns have/haven't been cleaned up\n> > > > > following a crash? maybe some way of marking a table as being in a 'drop\n> > > > > column' mode, so that when it gets brought back up again, it is scan'd for\n> > > > > any tuples older then that date? \n> > > > \n> > > > WAL would provide the framework to do something like that, but I still\n> > > > say it'd be a bad idea. What you're describing is\n> > > > irrevocable-once-it-starts DROP COLUMN; there is no way to roll it back.\n> > > > We're trying to get rid of statements that act that way, not add more.\n> > > > \n> > > > I am not convinced that a 2x penalty for DROP COLUMN is such a huge\n> > > > problem that we should give up all the normal safety features of SQL\n> > > > in order to avoid it. Seems to me that DROP COLUMN is only a big issue\n> > > > during DB development, when you're usually working with relatively small\n> > > > amounts of test data anyway.\n> > > > \n> > > \n> > > Bingo!\n> > \n> > you are jumping on your 'I agree/Bingo' much much too fast :) \n> \n> You know this DROP COLUMN is a hot button for me.\n\nYa, but in one email, you appear to agree with me ... then Tom posts a\ngood point and you jump over to that side ... at least pick a side? :) I\ntoo wish to see it implemented, I just don't want to have to double my\ndisk space if at some point I decide to upgrade an application and find\nout that they decided to change their schema(s) :(\n\n\n",
"msg_date": "Mon, 9 Oct 2000 21:02:46 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "> > > > > I am not convinced that a 2x penalty for DROP COLUMN is such a huge\n> > > > > problem that we should give up all the normal safety features of SQL\n> > > > > in order to avoid it. Seems to me that DROP COLUMN is only a big issue\n> > > > > during DB development, when you're usually working with relatively small\n> > > > > amounts of test data anyway.\n> > > > > \n> > > > \n> > > > Bingo!\n> > > \n> > > you are jumping on your 'I agree/Bingo' much much too fast :) \n> > \n> > You know this DROP COLUMN is a hot button for me.\n> \n> Ya, but in one email, you appear to agree with me ... then Tom posts a\n> good point and you jump over to that side ... at least pick a side? :) I\n> too wish to see it implemented, I just don't want to have to double my\n> disk space if at some point I decide to upgrade an application and find\n> out that they decided to change their schema(s) :(\n\nSorry, I liked the vacuum idea, but 2x disk usage, not 100 at a time.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 9 Oct 2000 20:05:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE DROP COLUMN"
},
{
"msg_contents": "At 07:55 PM 10/9/00 -0300, The Hermit Hacker wrote:\n\n>> I am not convinced that a 2x penalty for DROP COLUMN is such a huge\n>> problem that we should give up all the normal safety features of SQL\n>> in order to avoid it. Seems to me that DROP COLUMN is only a big\n>> issue during DB development, when you're usually working with\n>> relatively small amounts of test data anyway.\n>\n>Actually, I could see DROP COLUMN being useful in a few other places\n>... recently, I spent several hours re-structuring a clients database that\n>had been built by someone else who didn't know what 'relational' means in\n>RDBMS ... or how about an application developer that decides to\n>restructure their schema's in a new release and provides an 'upgrade.sql'\n>script that is designed to do this? \n\nThis last example is one reason DROP COLUMN would be a great help to\nthe OpenACS development effort.\n\nHowever, upgrades (new releases) are fairly infrequent, and many users of\ncurrent versions won't bother unless they've run into toolkit bugs\n(same goes for updating PG). Those who do know that doing an upgrade\nwill require planning, testing on a system that's not running their\n\"live\" website, and some amount of downtime.\n\nSo I don't think a 2x penalty is a huge problem.\n\n>That would make for a very painful upgrade process if I have to go through\n>the trouble of upgrading my hardware to add more space ...\n\nFor many folks, if eating 2x the size of a single table runs their system out\nof disk space, clearly they should've upgraded long, long ago. An OpenACS\nsite has hundreds of tables, I can't imagine running my disk space so tight\nthat I couldn't double the size of one of them long enough to do a DROP\nCOLUMN.\n\nObviously, some folks doing other things will have single tables that are\nhuge,\nbut after all they can always do what they do now - not drop columns.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 09 Oct 2000 17:30:20 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE DROP COLUMN "
},
{
"msg_contents": "The Hermit Hacker wrote:\n> \n> On Mon, 9 Oct 2000, Bruce Momjian wrote:\n> \n> Ya, but in one email, you appear to agree with me ... then Tom posts a\n> good point and you jump over to that side ... at least pick a side? :) I\n> too wish to see it implemented, I just don't want to have to double my\n> disk space if at some point I decide to upgrade an application and find\n> out that they decided to change their schema(s) :(\n\nAs Don already pointed out, if you don't have enough room to double your \ntable size you must be running an one-table, append-only application where \nyou can only do a very limited set of queries. \n\nselect * from that_table order by some_column_without_an_index; is definitely \nout as it takes many times the space of a that_table anyway.\n\nThere _may_ be some cases where 2x is unacceptable, but without radically \nchanging tables on-disk structure there is no way to avoid it and still be \nable to rollback or even crash cleanly ;)\n\n-----------------\nHannu\n",
"msg_date": "Tue, 10 Oct 2000 14:02:03 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE DROP COLUMN"
}
] |
[
{
"msg_contents": "How does postgresql perform queries on one table using more than one\nindex? For example, assuming the following:\n\ncreate table t1 ( f1 int, f2 int);\ncreate index t1_f1 on t1 (f1);\ncreate index t1_f2 on t1 (f2);\nselect * from t1 where f1=123 and f2=456;\n\nBy default, both indices will be btree's making them sorted. Therefore,\ndoes postgresql retrieve all matching records from t1_f1 and t1_f2 into\nintermediate tables and then performs somekind of merge sort before\nretrieving the final results from t1? If so or if not, are intermediate\nfiles created for this kind of operation or does the postgresql support\nqueries to multiple fields directly in its indexing system (perhaps aided\nby \"analyze\")? Or, does this kind of operation rely much on memory?\n\nI have tried making heads or tails out of the source code, but postgresql\nis far more daunting than I had expected. Nevertheless, for future\nreference, how could I find answers to questions about query management by\npostgresql?\n\nMany thanks for your excellent ordbms,\nMarc Tardif\n\n",
"msg_date": "Thu, 17 Feb 2000 23:27:50 +0000 (GMT)",
"msg_from": "Marc Tardif <[email protected]>",
"msg_from_op": true,
"msg_subject": "queries on 2+ indices"
},
{
"msg_contents": "Marc Tardif <[email protected]> writes:\n> How does postgresql perform queries on one table using more than one\n> index?\n\nIt doesn't. Simple enough, eh?\n\n For example, assuming the following:\n\n> create table t1 ( f1 int, f2 int);\n> create index t1_f1 on t1 (f1);\n> create index t1_f2 on t1 (f2);\n> select * from t1 where f1=123 and f2=456;\n\nThe optimizer will attempt to guess which index is more selective\n(will return fewer tuples for its part of the WHERE clause). That\nindex would be used for the indexscan, and the rest of the WHERE\nclause would be applied as a \"qpqual\", ie actually evaluated as\nan expression against each tuple found by the index.\n\nAs you note, there's not any really efficient way to make use of\nindependent indexes to evaluate an AND condition like this one.\nWhile Postgres' approach is pretty simplistic, I'm not sure that\na more-complicated approach would actually be any faster.\n\nIf you have a multi-column index, eg\n\ncreate index t1_f1_f2 on t1 (f1, f2);\n\nthen the system can and will use both clauses of the WHERE with\nthat single index. But again, it's not entirely clear that that's\nall that much faster than just using the more-selective clause\nin a smaller index. Furthermore, a multi-column index is more\nspecialized than single-column indexes because it is useful for\nonly a narrower range of queries; so you have to consider the extra\nwork done at insert/update to manage the extra index, and decide\nif it's really a win overall for your application.\n\n> how could I find answers to questions about query management by\n> postgresql?\n\nAsking questions on the mailing lists isn't a bad way to start.\nSeeing what EXPLAIN says about how queries will be executed is\nanother nice learning tool.\n\nThere is some high-level implementation info in the SGML documentation,\nand more scattered in various README files, but you won't really\nunderstand a lot until you start burrowing into the source code.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 18 Feb 2000 01:00:49 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] queries on 2+ indices "
},
{
"msg_contents": "I'm not sure I understand what is \"qpqual\" in your explanation. Once the\nfirst indexscan is performed, is a temporary table created (in-file or\nin-memory) containing the relevant tuples? If not, how can the remaining\npart of the WHERE clause be evaluated against the previously selected\ntuples during the first indexscan? Or, is the remaining part of the WHERE\nclause re-evaluated again and again for each of the found tuples in the\nfirst indexscan?\n\nOn Fri, 18 Feb 2000, Tom Lane wrote:\n\n> Marc Tardif <[email protected]> writes:\n> \n> > For example, assuming the following:\n> >\n> > create table t1 ( f1 int, f2 int);\n> > create index t1_f1 on t1 (f1);\n> > create index t1_f2 on t1 (f2);\n> > select * from t1 where f1=123 and f2=456;\n> \n> The optimizer will attempt to guess which index is more selective\n> (will return fewer tuples for its part of the WHERE clause). That\n> index would be used for the indexscan, and the rest of the WHERE\n> clause would be applied as a \"qpqual\", ie actually evaluated as\n> an expression against each tuple found by the index.\n> \n> As you note, there's not any really efficient way to make use of\n> independent indexes to evaluate an AND condition like this one.\n> While Postgres' approach is pretty simplistic, I'm not sure that\n> a more-complicated approach would actually be any faster.\n> \n\n",
"msg_date": "Fri, 18 Feb 2000 11:40:40 +0000 (GMT)",
"msg_from": "Marc Tardif <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] queries on 2+ indices "
},
{
"msg_contents": "Marc Tardif <[email protected]> writes:\n> I'm not sure I understand what is \"qpqual\" in your explanation. Once the\n> first indexscan is performed, is a temporary table created (in-file or\n> in-memory) containing the relevant tuples?\n\nNo, it's all done on-the-fly as each tuple is scanned.\n\n> is the remaining part of the WHERE\n> clause re-evaluated again and again for each of the found tuples in the\n> first indexscan?\n\nThat's what I said.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 18 Feb 2000 19:00:18 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] queries on 2+ indices "
}
] |
[
{
"msg_contents": "There are two questions in the pgsql-admin mailing list which are very often\nasked, but nobody seems to have a reasonable answer to these.\nSo I decided to post them here and I hope anyone can help me (and many\nothers) !\n\n\n - Can a PostgreSQL user change his own password without having\n \"usesuper\" set to 't' in pg_shadow ?\n\n I think this should be possible with a PostgreSQL C function (or\n PL/Tcl) which alters the \"passwd\" attribute of the pg_shadow table and\n after that copies the contents of this table to the pg_pwd ASCII file\n (and generates an empty pg_pwd.reload).\n But I have no idea how to manipulate and copy tables within PostgreSQL\n C functions.\n\n Does anybody know if there is already such a function or any other way\n to achieve the above mentioned functionality ?\n\n\n - Can the Postgres superuser prevent a user from creating tables in any\n database the user likes to ?\n\n It would be good to have a way to restrict the use of the \"CREATE\n TABLE\" SQL statement so that a user can only create tables in\n databases he is explicitly allowed to.\n\n Does anyone have an idea how to solve this problem ?\n\n\nIf these two features are not yet implemented I would suggest to think about\nimplementing them, because I (and many other people) would consider them\nas important, especially in practical use.\n\nPlease let me know !\n\n\nThank you very much\nRP. Dumont\n\n\n\n\n\n",
"msg_date": "Fri, 18 Feb 2000 00:40:23 +0100",
"msg_from": "\"Herr Dumont\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "password change / table creation"
}
] |
[
{
"msg_contents": "RedHat 6.1, Postgresql 6.5.2 (tried on 6.5.3 - same result).\n\n------------------------------- Code:\n\n#include <stdio.h>\n#include <pgsql/libpq-fe.h>\n#include <pgsql/libpq/libpq-fs.h>\n\nint main() {\n PGconn *conn;\n PGresult *res;\n conn = PQsetdb(NULL, NULL, NULL, NULL, \"ctlg\");\n if(PQstatus(conn) == CONNECTION_BAD) {\n printf(\"Cannot connect\\n\");\n exit(1);\n }\n Oid LOid;\n LOid = lo_creat(conn, INV_READ | INV_WRITE);\n if(LOid == 0) {\n printf(\"Cannot create\\n\");\n exit(1);\n }\n printf(\"Created with Oid=%d\\n\", LOid);\n int LOfd;\n LOfd = lo_open(conn, LOid, INV_READ | INV_WRITE);\n printf(\"Opened with LOfd = %d, %s\\n\", LOfd, PQerrorMessage(conn));\n}\n\n----------------------------------- Result:\n\nCreated with Oid=31169\nOpened with LOfd = -1, ERROR: lo_lseek: invalid large obj descriptor\n(0)\n\n----------------------------------- Debug:\n\nFindExec: found \"/usr/bin/postgres\" using argv[0]\n/usr/bin/postmaster: BackendStartup: pid 1012 user root db ctlg socket 4\n\nFindExec: found \"/usr/bin/postgres\" using argv[0]\nstarted: host=localhost user=root database=ctlg\nInitPostgres\nStartTransactionCommand\nProcessQuery\nCommitTransactionCommand\nStartTransactionCommand\nCommitTransactionCommand\nStartTransactionCommand\nCommitTransactionCommand\nStartTransactionCommand\nERROR: lo_lseek: invalid large obj descriptor (0)\nAbortCurrentTransaction\npq_recvbuf: unexpected EOF on client connection\nproc_exit(0) [#0]\nshmem_exit(0) [#0]\nexit(0)\n/usr/bin/postmaster: reaping dead processes...\n/usr/bin/postmaster: CleanupProc: pid 1012 exited with status 0\npmdie 2\nproc_exit(0) [#0]\nshmem_exit(0) [#0]\nexit(0)\n\n------------------------------------------\n\n",
"msg_date": "Fri, 18 Feb 2000 11:20:48 +0300",
"msg_from": "root <[email protected]>",
"msg_from_op": true,
"msg_subject": "Large object problem in 6.5.2 & 3"
}
] |
[
{
"msg_contents": "Hello, people!\n\nI'm having a problem with postgres, and I wanna know if anyone out there \nmight give me some help...\n\nMy problem is this:\n I have a table. When I run a select over it, the result comes OK. BUT, \nif I create a view with the same select, when I run a select on the view, \nif the table has any data my computer explodes on my face! I have made \nother views, and they work fine, but they dont have subqueries with \nreference to its parent query.\n\nHave any of you ever faced this problem, or created a view like mine that \nworked? how do I overcome my troubles??\n\nHere follow my select, my table and all things I did (not much) to \ngenerate the crash.\n\nIf you think you might help me and want more info about my system or \npostgres, mail me.\n\n-------------------------------------------------------------------------\n-------\n[user@server dir]$ psql mydb\nWelcome to POSTGRESQL interactive sql monitor:\n Please read the file COPYRIGHT for copyright terms of PSTGRESQL\n[PostgreSQL 6.5.3 on i686-pc-linux-gnu, compiled by gcc pgcc-2.91.66]\n\n type \\? for help on slash commands\n type \\q to quit\n type \\g or terminate with semicolon to execute query\n You are currently connected to the database: mydb\n\nmydb=> CREATE TABLE connection (\n connection_id INT4 primary key,\n connection_owner INT4, -- Foreign Key for users table\n connection_start TIMESTAMP,\n connection_end RELTIME\n);\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index \n'connection_pkey' for table 'connection'\nCREATE\nmydb=> CREATE VIEW connection_last AS\n SELECT * FROM connection as con_out\n WHERE connection_start IN ( SELECT max(connection_start) FROM \nconnection as con_in\n WHERE con_in.connection_owner = \ncon_out.connection_owner\n GROUP BY connection_owner);\nCREATE\nmydb=> SELECT * FROM connection_last;\nconnection_id|connection_owner|connection_start|connection_end\n-------------+----------------+----------------+--------------\n(0 rows)\n\nmydb=> INSERT INTO connection VALUES (3,'2','22-01-01','122');\nINSERT 172197 1\nmydb=> select * from connection;\nconnection_id|connection_owner|connection_start |connection_end\n-------------+----------------+----------------------+---------------\n 3| 2|2001-01-22 00:00:00-02|@ 2 mins 2 secs\n(1 row)\n\nmydb=> SELECT * FROM connection as con_out\n WHERE connection_start IN ( SELECT max(connection_start) FROM \nconnection as con_in\n WHERE con_in.connection_owner = \ncon_out.connection_owner\n GROUP BY connection_owner);\nconnection_id|connection_owner|connection_start |connection_end\n-------------+----------------+----------------------+---------------\n 3| 2|2001-01-22 00:00:00-02|@ 2 mins 2 secs\n(1 row)\n\nmydb=> select * from connection_last;\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nWe have lost the connection to the backend, so further processing is \nimpossible.\n Terminating.\n[user@server dir]$ \n-------------------------------------------------------------------------\n-----------\n_____________________________________________________________\nGUNS N' ROSES.\nEles est�o de volta no novo CD Live Era 87-93. Fundamental para entender\na hist�ria do rock dos anos 90. Fundamental comprar logo. S� R$ 20,90 no Submarino.\nPROMO��O MOUSE VERMELHO\nhttp://www.submarino.com.br/default.asp?franq=100037\n\n_____________________________________________________________\n\n\n",
"msg_date": "Fri, 18 Feb 2000 13:33:23 -0300 (EST)",
"msg_from": "\"Mauricio da Silva Barrios\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Is it a bug?"
},
{
"msg_contents": "\"Mauricio da Silva Barrios\" <[email protected]> writes:\n> mydb=> CREATE VIEW connection_last AS\n> SELECT * FROM connection as con_out\n> WHERE connection_start IN ( SELECT max(connection_start) FROM \n> connection as con_in\n> WHERE con_in.connection_owner = \n> con_out.connection_owner\n> GROUP BY connection_owner);\n> mydb=> select * from connection_last;\n> pqReadData() -- backend closed the channel unexpectedly.\n\nHmm. Seems to work fine in current sources:\n\nregression=# select * from connection_last;\n connection_id | connection_owner | connection_start | connection_end\n---------------+------------------+------------------------+----------------\n 3 | 2 | 2001-01-22 00:00:00-05 | 00:02:02\n(1 row)\n\nI seem to recall that the rule rewriter had some problems dealing with\naggregate functions inside sub-selects in 6.5.*, and that's probably\nwhat's causing your problem.\n\n7.0 is scheduled to go beta next week, so I'd suggest picking up\na beta copy ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 18 Feb 2000 19:14:51 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is it a bug? "
}
] |
[
{
"msg_contents": "Hello, people!\n\nI'm having a problem with postgres, and I wanna know if anyone out there \nmight give me some help...\n\nMy problem is this:\n I have a table. When I run a select over it, the result comes OK. BUT, \nif I create a view with the same select, when I run a select on the view, \nif the table has any data my computer explodes on my face! I have made \nother views, and they work fine, but they dont have subqueries with \nreference to its parent query.\n\nHave any of you ever faced this problem, or created a view like mine that \nworked? how do I overcome my troubles??\n\nHere follow my select, my table and all things I did (not much) to \ngenerate the crash.\n\nIf you think you might help me and want more info about my system or \npostgres, mail me.\n\n-------------------------------------------------------------------------\n-------\n[user@server dir]$ psql mydb\nWelcome to POSTGRESQL interactive sql monitor:\n Please read the file COPYRIGHT for copyright terms of PSTGRESQL\n[PostgreSQL 6.5.3 on i686-pc-linux-gnu, compiled by gcc pgcc-2.91.66]\n\n type \\? for help on slash commands\n type \\q to quit\n type \\g or terminate with semicolon to execute query\n You are currently connected to the database: mydb\n\nmydb=> CREATE TABLE connection (\n connection_id INT4 primary key,\n connection_owner INT4, -- Foreign Key for users table\n connection_start TIMESTAMP,\n connection_end RELTIME\n);\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index \n'connection_pkey' for table 'connection'\nCREATE\nmydb=> CREATE VIEW connection_last AS\n SELECT * FROM connection as con_out\n WHERE connection_start IN ( SELECT max(connection_start) FROM \nconnection as con_in\n WHERE con_in.connection_owner = \ncon_out.connection_owner\n GROUP BY connection_owner);\nCREATE\nmydb=> SELECT * FROM connection_last;\nconnection_id|connection_owner|connection_start|connection_end\n-------------+----------------+----------------+--------------\n(0 rows)\n\nmydb=> INSERT INTO connection VALUES (3,'2','22-01-01','122');\nINSERT 172197 1\nmydb=> select * from connection;\nconnection_id|connection_owner|connection_start |connection_end\n-------------+----------------+----------------------+---------------\n 3| 2|2001-01-22 00:00:00-02|@ 2 mins 2 secs\n(1 row)\n\nmydb=> SELECT * FROM connection as con_out\n WHERE connection_start IN ( SELECT max(connection_start) FROM \nconnection as con_in\n WHERE con_in.connection_owner = \ncon_out.connection_owner\n GROUP BY connection_owner);\nconnection_id|connection_owner|connection_start |connection_end\n-------------+----------------+----------------------+---------------\n 3| 2|2001-01-22 00:00:00-02|@ 2 mins 2 secs\n(1 row)\n\nmydb=> select * from connection_last;\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nWe have lost the connection to the backend, so further processing is \nimpossible.\n Terminating.\n[user@server dir]$ \n-------------------------------------------------------------------------\n-----------\n_____________________________________________________________\nGUNS N' ROSES.\nEles est�o de volta no novo CD Live Era 87-93. Fundamental para entender\na hist�ria do rock dos anos 90. Fundamental comprar logo. S� R$ 20,90 no Submarino.\nPROMO��O MOUSE VERMELHO\nhttp://www.submarino.com.br/default.asp?franq=100037\n\n_____________________________________________________________\n\n\n",
"msg_date": "Fri, 18 Feb 2000 13:33:51 -0300 (EST)",
"msg_from": "\"Mauricio da Silva Barrios\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Is it a bug?"
}
] |
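For anyone hitting the same crash, an untested sketch of a possible workaround is to write the same "latest row per owner" view with a correlated scalar subquery and = instead of IN ... GROUP BY; whether this actually sidesteps the 6.5.3 backend crash is not verified:

    CREATE VIEW connection_last AS
        SELECT * FROM connection AS con_out
        WHERE connection_start = (SELECT max(connection_start)
                                  FROM connection AS con_in
                                  WHERE con_in.connection_owner =
                                        con_out.connection_owner);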
[
{
"msg_contents": "Hello everyone, \n\nDr. P. Ciaccia, A. Ghidini and I (R. Cornacchia), have recently\ndeveloped, and implemented on PostgreSQL, a class of TOP Queries. \n\nHere I would like to briefly present our main results.\n\nThe proposed sintax is the following:\n\nSELECT\nFROM\nWHERE\nSTOP AFTER <N>\n FOR EACH <stop-grouping attribute list>\n RANK BY <ranking specification>\nORDER BY \n\nwhere both FOR EACH and RANK BY (and STOP AFTER clause as a whole) are\noptional.\n\nHere is an example:\n\n----\n\"Retrieve the 10 highest paid employees for each department.\n Order the results first on department name and then on employee name\"\n\nSELECT Dept.name, Emp.name, Emp.salary\nFROM Emp, Dept\nWHERE Emp.dno = Dept.dno\nSTOP AFTER 10\n FOR EACH Dept.dno\n RANK BY Emp.salary DESC\nORDER BY Dept.name, Emp.name\n----\n\nWe called such a query \"Generalized Top Query\".\n\nThe semantics introduced by the example is derived from the one\nproposed by Carey and Kossmann (\"On Saying 'Enougth Already' in\nSQL\", 1997). The main points of our extension are: \n\n1) You can obtain the <N> Top rows as dictated by the RANK BY\nspecification and then produce the results according to the \nORDER BY specification. This means much more flexibility.\n\n2) You can obtain the Top N rows for each of the \"groups\" which are \nindividuated by the FOR EACH specification \n(from here \"Generalized\" top query).\n\n\nPlease consider that our semantics completely includes the current \nLIMIT clause capabilities, offering at the same time some additional \nones.\n\nFor what concerns PostgreSQL, one of our first issues were to keep\nbasically unchanged\nthe current framework. \n\nHere is a brief list of major changes we operated:\n\n- We provided 2 new physical operators:\n ScanStop: stops the stream to <N> rows \n SortStop: performs an ad-hoc sort on the stream, \n retaining only the <N> top rows. \n\n- The optimizer can operate a push-down in the path-tree of \n those two operators. It means we reduce the \n stream cardinality as soon as possible, leading to a great\n improvement on the performances of the subsequent operators.\n\n- A larger number of optimizable operators force the optimizer to\n generate more\tplans to handle,\n so we extended the operators properties and the pruning rules.\n\n- We extended the cost model as well, introducing estimates for the\n cost of producing the FIRST N tuples.\n\n- The rules involved in the Stop operators placement are mainly\n based upon referential integrity constraints. In this way we \n provided a *temporary* solution to this Postgres lack, storing\n the informations on constraints in two new system catalogs.\n\n- The FOR EACH generalization leads to natural generalization\n of all the above concepts.\n\n- The evaluation of GROUP BY clause can be performed before and after\n the STOP AFTER, and both makes sense. In order to optimize the Stop\n After operator, we choose to evaluate it before the group by \n (and the order by).\n\n\n* Let us summarize the current state of our work. * \n\n- Our extension is highly optimized and can lead to performance\n improvements of several orders of magnitude in comparison with the\n current \"LIMIT approach\"\n- It has a low impact on the original PostgreSQL code\n- It does not affect the usual processing of classical queries\n- It is soon expected to work with views. 
\n- It works with subqueries\n- It works with cursors\n- It efficently makes use of indices\n- It is updated to a 6.6 snapshot of November 1999 (we are waiting \n for a more stable release)\n\n--------------------------------------------------------------------\n\nMore details on this subject are going to be presented in a\nforthcoming paper, available if interested. \n\nWe would be glad to receive your comments on this work. \n\nBest regards, \n\nR. Cornacchia ([email protected]) Computer Science, University of\nBologna\n\nA. Ghidini ([email protected]) Computer Science, University of Bologna \n\nDr. Paolo Ciaccia ([email protected]) DEIS CSITE-CNR, University of\nBologna \n\n\n\n",
"msg_date": "Sat, 19 Feb 2000 02:23:38 +0100 (MET)",
"msg_from": "Roberto Cornacchia <[email protected]>",
"msg_from_op": true,
"msg_subject": "Generalized Top Queries on PostgreSQL"
}
] |
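To make the decoupling of RANK BY and ORDER BY concrete, here is one more example in the proposed syntax (a sketch of the authors' extension, not stock PostgreSQL; the table and column names are invented for illustration):

    SELECT name, score
    FROM players
    STOP AFTER 3
        RANK BY score DESC
    ORDER BY name;

    -- Keeps only the three highest scores, but presents them
    -- alphabetically.  With the existing LIMIT clause the
    -- presentation order is tied to the ranking ORDER BY,
    -- short of re-sorting on the client side.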
[
{
"msg_contents": "I've been trying to figure out how postgresql handles multiple open file\ndescriptors. First, I have managed to find out postgresql keeps a pool of\navailable descriptors for general processing tasks like sorting for\nexample. Second, I have tried to find what kind of data structure was used\nfor open descriptors to tables and indices, but couldn't find where. Could\nsomeone please let me know what kind of data structure(s) is/are used for\nopen file descriptors and where this is located in the code?\n\nThe reason I'm so curious about such a specific part of the code is that\nthis problem has often occured in my own source code. In the past, I have\nused a linked list and contemplated using a hash table to manage multiple\nopen file descriptors. I'm therefore interested to find out what real\nproduction systems use for this kind of problem.\n\nRegards,\nMarc Tardif\n\n",
"msg_date": "Sat, 19 Feb 2000 01:26:11 +0000 (GMT)",
"msg_from": "Marc Tardif <[email protected]>",
"msg_from_op": true,
"msg_subject": "handling multiple file descriptors"
},
{
"msg_contents": "Marc Tardif <[email protected]> writes:\n> I've been trying to figure out how postgresql handles multiple open file\n> descriptors. First, I have managed to find out postgresql keeps a pool of\n> available descriptors for general processing tasks like sorting for\n> example. Second, I have tried to find what kind of data structure was used\n> for open descriptors to tables and indices, but couldn't find where. Could\n> someone please let me know what kind of data structure(s) is/are used for\n> open file descriptors and where this is located in the code?\n\nSee src/backend/storage/file/fd.c. You might also find buffile.c,\nin the same directory, of interest.\n\nThese modules are not simply concerned with managing kernel FDs, but\nalso with releasing resources during transaction abort. Postgres'\nmodel of error recovery is that elog(ERROR) longjmps back to the main\nserver loop, so routines that were aborted out of don't get to close\nfiles, free memory, or otherwise release resources. fd.c is responsible\nfor cleaning up open FDs and temporary files after that happens.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 19 Feb 2000 02:29:07 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] handling multiple file descriptors "
}
] |
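To illustrate the general technique fd.c implements, here is a simplified, self-contained sketch of the "virtual file descriptor" idea in C (an illustration only, not PostgreSQL's actual data structure; the names, the array layout, and the tiny limits are invented, and slot registration is omitted):

    /* Each slot remembers its pathname, so the kernel fd can be
     * closed under descriptor pressure and reopened on demand. */
    #include <fcntl.h>
    #include <unistd.h>

    #define NSLOTS       8
    #define KERNEL_LIMIT 2          /* pretend kernel fds are this scarce */

    static struct {
        const char *path;           /* remembered for transparent reopen */
        int         fd;             /* kernel fd, or -1 while closed */
        long        last_used;      /* logical clock for LRU choice */
    } slot[NSLOTS];

    static long lru_clock;
    static int  open_fds;

    static void evict_lru(void)
    {
        int i, lru = -1;

        for (i = 0; i < NSLOTS; i++)
            if (slot[i].fd >= 0 &&
                (lru < 0 || slot[i].last_used < slot[lru].last_used))
                lru = i;
        if (lru >= 0)
        {
            close(slot[lru].fd);    /* give the kernel fd back ... */
            slot[lru].fd = -1;      /* ... but keep the slot for reopening */
            open_fds--;
        }
    }

    /* Get a usable kernel fd for slot i, reopening if it was evicted. */
    int vfd_fd(int i)
    {
        if (slot[i].fd < 0)
        {
            if (open_fds >= KERNEL_LIMIT)
                evict_lru();
            slot[i].fd = open(slot[i].path, O_RDONLY);
            open_fds++;
        }
        slot[i].last_used = ++lru_clock;
        return slot[i].fd;
    }

The real fd.c keeps its open descriptors on a doubly linked LRU ring rather than scanning a clock field, but the recycle-and-reopen idea is the same.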
[
{
"msg_contents": "I have added new backslash command \\eset and \\eshow to psql.\n(only enabled if --enable-multibyte specified)\nModified files are help.c and command.c. \n\n-------------------------------------------------------------------\no \\eset <encoding>\n\nchange the client encoding to <encoding>. This is actually\nPQsetClientEncoding + pset.encoding (psql internal data) change. Now\nuser can change the client side encoding on the fly, that is not\npossible before 7.0.\n\no \\eshow\n\nshow the client encoding.\n-------------------------------------------------------------------\n--\nTatsuo Ishii\n",
"msg_date": "Sat, 19 Feb 2000 14:14:02 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "new backslah command of psql"
},
{
"msg_contents": "On 2000-02-19, Tatsuo Ishii mentioned:\n\n> I have added new backslash command \\eset and \\eshow to psql.\n> (only enabled if --enable-multibyte specified)\n> Modified files are help.c and command.c. \n\nNext time, make sure to update the documentation as well.\n\n> o \\eset <encoding>\n> \n> change the client encoding to <encoding>.\n\n> o \\eshow\n> \n> show the client encoding.\n\nI took the liberty to change that to \\encoding <x> sets the encoding and\n\\encoding without arguments shows it. Also you can do \\echo :ENCODING.\nThat fits in better with the rest.\n\nI have a question for you, though. Right now, when I have a non-multibyte\nbackend and a multibyte psql I get this when I start up psql:\n\npsql: ERROR: MultiByte support must be enabled to use this function\n\nThat means I can't use psql on non-multibyte servers in that case.\n(Probably true for other libpq applications, too.) I don't think that's\nacceptable. Is there anything you can do there, such as the backend\nignoring whatever function is being called?\n\nI believe you are going a little too far with ifdef'ing out MULTIBYTE. For\ninstance, it should be perfectly fine to call pg_char_to_encoding, even if\nthere's no possibility of using the encoding. Even when I don't configure\nfor multibyte support I should be able to use all (or at least most) of\nthe functions and get SQL_ASCII or 0 or some such back.\n\nI will be interested to work with you and Thomas (and who knows else) on\nthe national character and related issues for the next release. Some of\nthis stuff needs a serious look.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Sun, 20 Feb 2000 03:34:38 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] new backslah command of psql"
},
{
"msg_contents": "> On 2000-02-19, Tatsuo Ishii mentioned:\n> \n> > I have added new backslash command \\eset and \\eshow to psql.\n> > (only enabled if --enable-multibyte specified)\n> > Modified files are help.c and command.c. \n> \n> Next time, make sure to update the documentation as well.\n\nOk.\n\n> > o \\eset <encoding>\n> > \n> > change the client encoding to <encoding>.\n> \n> > o \\eshow\n> > \n> > show the client encoding.\n> \n> I took the liberty to change that to \\encoding <x> sets the encoding and\n> \\encoding without arguments shows it.\n\nOk.\n\n>Also you can do \\echo :ENCODING.\n> That fits in better with the rest.\n\nOh, I didn't know that.\n\n> I have a question for you, though. Right now, when I have a non-multibyte\n> backend and a multibyte psql I get this when I start up psql:\n> \n> psql: ERROR: MultiByte support must be enabled to use this function\n> \n> That means I can't use psql on non-multibyte servers in that case.\n\n> (Probably true for other libpq applications, too.) I don't think that's\n> acceptable. Is there anything you can do there, such as the backend\n> ignoring whatever function is being called?\n>\n> I believe you are going a little too far with ifdef'ing out MULTIBYTE. For\n> instance, it should be perfectly fine to call pg_char_to_encoding, even if\n> there's no possibility of using the encoding. Even when I don't configure\n> for multibyte support I should be able to use all (or at least most) of\n> the functions and get SQL_ASCII or 0 or some such back.\n\nI can hardly imagine the case where multibyte-enabled/non-multibyte\ninstallations are mixed together. Anyway, we could enable part of\nmultibyte functions even if it not configured. But is it worth to do\nthat?\n \nI personally think that MULTIBYTE should always be enabled since it is\n\"upper compatible\" to non-MB installations and no significant\nperformance penalty is observed (I am not sure about what other core\ndevelopers are thinking, though).\n\nMoreover, we are going to implement the national character etc. in the\nnear future and the current multibyte implementations will be\ndeprecated soon.\n\n> I will be interested to work with you and Thomas (and who knows else) on\n> the national character and related issues for the next release. Some of\n> this stuff needs a serious look.\n\nYes, especially to introduce CREATE CHARACTER SET, current MB stuffs\nmust be completely rewritten.\n--\nTatsuo Ishii\n",
"msg_date": "Sun, 20 Feb 2000 13:15:54 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] new backslah command of psql"
},
{
"msg_contents": "[Charset ISO-8859-1 unsupported, filtering to ASCII...]\n> On 2000-02-19, Tatsuo Ishii mentioned:\n> \n> > I have added new backslash command \\eset and \\eshow to psql.\n> > (only enabled if --enable-multibyte specified)\n> > Modified files are help.c and command.c. \n\nDo we have to have a \\eset command? Can't we just use \\set that we\nalready have?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 19 Feb 2000 23:58:36 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] new backslah command of psql"
},
{
"msg_contents": "[spelling mistake in Subject corrected]\n> Do we have to have a \\eset command? Can't we just use \\set that we\n> already have?\n\nNot sure. Seems \\set is for just setting a variable. To change the\nclient encoding, we need to do more.\n--\nTatsuo Ishii\n\n",
"msg_date": "Sun, 20 Feb 2000 17:25:10 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] new backslash command of psql"
},
{
"msg_contents": "> [spelling mistake in Subject corrected]\n> > Do we have to have a \\eset command? Can't we just use \\set that we\n> > already have?\n> \n> Not sure. Seems \\set is for just setting a variable. To change the\n> client encoding, we need to do more.\n\nWe already have some special variable meanings like \\set PROMPT1 which\ncontrols the psql prompt. Seems an encoding variable can be made too.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 20 Feb 2000 16:50:51 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] new backslash command of psql"
},
{
"msg_contents": "On 2000-02-20, Bruce Momjian mentioned:\n\n> We already have some special variable meanings like \\set PROMPT1 which\n> controls the psql prompt. Seems an encoding variable can be made too.\n\nSetting the encoding actually has to *do* something though. The prompt\nwill just be stored and read out next time it's needed. I guess one could\ninclude hooks into the set variable command, but I figured multibyte as we\nknow it is going away soon anyway ...\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Mon, 21 Feb 2000 20:48:08 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] new backslash command of psql"
},
{
"msg_contents": "[Charset ISO-8859-1 unsupported, filtering to ASCII...]\n> On 2000-02-20, Bruce Momjian mentioned:\n> \n> > We already have some special variable meanings like \\set PROMPT1 which\n> > controls the psql prompt. Seems an encoding variable can be made too.\n> \n> Setting the encoding actually has to *do* something though. The prompt\n> will just be stored and read out next time it's needed. I guess one could\n> include hooks into the set variable command, but I figured multibyte as we\n> know it is going away soon anyway ...\n> \n\nOh, I though it didn't need hooks. Never mind.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 21 Feb 2000 14:58:59 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] new backslash command of psql"
}
] |
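As a side note for application writers, what the new command does can also be driven directly from libpq; a minimal sketch (multibyte-enabled server assumed, database name invented, most error handling omitted):

    #include <stdio.h>
    #include "libpq-fe.h"

    int main(void)
    {
        PGconn *conn = PQconnectdb("dbname=mydb");

        if (PQstatus(conn) != CONNECTION_OK)
            return 1;
        /* PQsetClientEncoding() returns 0 on success, -1 on failure */
        if (PQsetClientEncoding(conn, "EUC_JP") != 0)
            fprintf(stderr, "cannot set encoding: %s", PQerrorMessage(conn));
        /* PQclientEncoding() reports the current encoding as an id */
        printf("client encoding id: %d\n", PQclientEncoding(conn));
        PQfinish(conn);
        return 0;
    }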
[
{
"msg_contents": "OK, I've put in fixes to get Jan up and running on column foreign\nkeys. The current fix forces NOT NULL arguments to be near the\nbeginning of a column constraint list, and enforces the SQL92\nrequirement that the DEFAULT clause occur nearly first in a column\nconstraint.\n\nAs Jan probably already knows, the shift/reduce conflicts all happened\nas a result of NOT NULL and NOT DEFERRABLE clauses; removing either\neliminated the conflicts.\n\nI poked at it for *hours*, and have not yet stumbled on the correct\nlayout to give full flexibility while allowing the new constraint\nattributes. Jan was thinking that he needed some token lookahead to do\nthis, but I'll be suprised if that is required to solve this for the\nSQL92 case: it would be the first and only instance of syntax which\ncan not be solved by our yacc parser and istm that the spec would try\nto stay away from that. The successful technique for fixing this will\nlikely involve unrolling more clauses to allow yacc to juggle more\nclauses simultaneously before forcing a shift/reduce operation.\n\nI'm leaving town through next weekend (back the 28th) and can pick\nthis up for more work then.\n\nbtw, regression tests pass except for the rules test with known\nformatting differences.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Sat, 19 Feb 2000 08:34:11 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "gram.y foreign keys"
},
{
"msg_contents": "Let me know if you want to discussed this over the phone for ideas.\n\n\n> OK, I've put in fixes to get Jan up and running on column foreign\n> keys. The current fix forces NOT NULL arguments to be near the\n> beginning of a column constraint list, and enforces the SQL92\n> requirement that the DEFAULT clause occur nearly first in a column\n> constraint.\n> \n> As Jan probably already knows, the shift/reduce conflicts all happened\n> as a result of NOT NULL and NOT DEFERRABLE clauses; removing either\n> eliminated the conflicts.\n> \n> I poked at it for *hours*, and have not yet stumbled on the correct\n> layout to give full flexibility while allowing the new constraint\n> attributes. Jan was thinking that he needed some token lookahead to do\n> this, but I'll be suprised if that is required to solve this for the\n> SQL92 case: it would be the first and only instance of syntax which\n> can not be solved by our yacc parser and istm that the spec would try\n> to stay away from that. The successful technique for fixing this will\n> likely involve unrolling more clauses to allow yacc to juggle more\n> clauses simultaneously before forcing a shift/reduce operation.\n> \n> I'm leaving town through next weekend (back the 28th) and can pick\n> this up for more work then.\n> \n> btw, regression tests pass except for the rules test with known\n> formatting differences.\n> \n> - Thomas\n> \n> -- \n> Thomas Lockhart\t\t\t\[email protected]\n> South Pasadena, California\n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 19 Feb 2000 04:07:28 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] gram.y foreign keys"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> I poked at it for *hours*, and have not yet stumbled on the correct\n> layout to give full flexibility while allowing the new constraint\n> attributes. Jan was thinking that he needed some token lookahead to do\n> this, but I'll be suprised if that is required to solve this for the\n> SQL92 case: it would be the first and only instance of syntax which\n> can not be solved by our yacc parser and istm that the spec would try\n> to stay away from that. The successful technique for fixing this will\n> likely involve unrolling more clauses to allow yacc to juggle more\n> clauses simultaneously before forcing a shift/reduce operation.\n\nThe argument for adding a token lookahead wasn't that it is impossible\nto do it at the grammar level; it was that it looked a lot simpler,\nmore understandable, and more robust/maintainable to do it as a token\nfilter than by grammar-unrolling.\n\nIf you've spent hours on it and can't find any better solution than\nrestricting the order of column constraint clauses, I'd say that kind\nof proves the point, no?\n\nThe other way we had discussed of attacking it was to postpone some\nprocessing to analyze.c (see thread around 10-Dec).\n\nAnyway, IMNSHO we can't release with an artificial restriction on\ncolumn constraint clause order; it's way too likely to break existing\ncode, even if it is within the letter of the SQL spec. This'll do for\nbeta testing but we need something better before 7.0 release.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 19 Feb 2000 11:52:48 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] gram.y foreign keys "
}
] |
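For readers trying to visualize the conflict, here is a minimal, hypothetical grammar fragment (invented for illustration, not the real gram.y productions) that reproduces the same shift/reduce ambiguity on the NOT token:

    colquals    : colquals colqual
                | /* EMPTY */
                ;
    colqual     : NOT NULL_P
                | UNIQUE
                ;
    colcon      : colquals attrs
                ;
    attrs       : attrs NOT DEFERRABLE
                | /* EMPTY */
                ;

With one token of lookahead, a parser that has just finished a colqual and sees NOT cannot tell whether to shift (the start of another NOT NULL item) or to reduce the empty attrs rule (the start of NOT DEFERRABLE). "Unrolling" here means rewriting the productions so that both alternatives stay live in a single parser state until the second token arrives and disambiguates.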
[
{
"msg_contents": "Looks nice. Should we put your code in contrib/ or put the URL on our\nweb site?\n\n\n> -----BEGIN PGP SIGNED MESSAGE-----\n> \n> Hello Bruce:\n> \n> I'm the main author of UESQLC, an embedded SQL compiler that, (now) can generate\n> code for ODBC, Oracle OCI and PostgreSQL LIBPQ interfaces. It parses SQL92, and\n> generates the code for the target selected (one of the three main targets\n> supported) using a SGML based target code description. I have attached an\n> example of C++ with embedded SQL (UESQL). It is under the GPL and this is the\n> URL:\n> \n> http://uesqlc.dedalo-ing.com/\n> \n> I hope that this program could help you.\n> \n> Greets.\n> Rafa.\n> - -- \n> +----------\n> | Rafael Jesus Alcantara Perez. P.O. Box 1199, 29080 Malaga, SPAIN.\n> | Email: mailto:[email protected]\n> | PGP public key: http://www.dedalo-ing.com/~rafa/public-key.asc\n> +---------------------\n> \"For every complex problem there is a solution that is concise, clear, simple, and wrong.\"\n> (H. L. Mencken)\n> \n> -----BEGIN PGP SIGNATURE-----\n> Version: 2.6.3i\n> Charset: noconv\n> \n> iQEVAwUBOK59o9qA/MQ7nrK9AQFlAAf8D1KP0xUOWV5uOOG671QBhJsyimO+mevC\n> Dw1m/7+EgfOnlgYtIKtB/AQIy1vayVFASnP9fD/udKTXYWWYXFaEGUScHwJpJZj0\n> 2TgAZdhjGwaUPnjpizQM6By8bs0bI7s0ZgL8SQw38k0YOZPC+4xCg7UvQsDieR+5\n> 5lDZgbgF4Mdls79R6bSBUHZp0lLZkdvsL5V8OsstFY4CI+BXyo1C1RtYpipN7w+N\n> dlLCblw7rGz6ohS7BtUU8GkJzGrznDaHz9dmm/7IVRwa8ovUnBVxlRlZCKY5SnGJ\n> 6N7xj9HT1QiNsNqpzE8KZFDrUsI290K4Gc55GKhGg8VCes1Z+FIjBg==\n> =rDh5\n> -----END PGP SIGNATURE-----\nContent-Description: UESQL in C++\n\n[Attachment, skipping...]\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 19 Feb 2000 07:01:55 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: UESQLC"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Looks nice. Should we put your code in contrib/ or put the URL on our\n> web site?\n\nThe GPL license would pose a problem for including it in our\ndistribution, I think (GPL vs BSD and all that) --- but no reason\nnot to link to it from our web pages...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 19 Feb 2000 11:55:22 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: UESQLC "
},
{
"msg_contents": "On Sat, Feb 19, 2000 at 11:55:22AM -0500, Tom Lane wrote:\n> The GPL license would pose a problem for including it in our\n> distribution, I think (GPL vs BSD and all that) --- but no reason\n> not to link to it from our web pages...\n\nI accidently deleted the original mail before reading it. Could someone\nplease give me the link. What exactly is UESQLC?\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Sun, 20 Feb 2000 11:03:52 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: UESQLC"
},
{
"msg_contents": "\nOn Sat, 19 Feb 2000, Tom Lane wrote:\n\n> Bruce Momjian <[email protected]> writes:\n> > Looks nice. Should we put your code in contrib/ or put the URL on our\n> > web site?\n> \n> The GPL license would pose a problem for including it in our\n> distribution, I think (GPL vs BSD and all that) --- but no reason\n> not to link to it from our web pages...\n> \n\nIn the current contrib are any matters with GPL\n(from Massimo Dal Zotto).\n\n\t\t\t\t\t\tKarel\n\n",
"msg_date": "Mon, 21 Feb 2000 10:54:31 +0100 (CET)",
"msg_from": "Karel Zak - Zakkr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: UESQLC "
},
{
"msg_contents": "On Sat, Feb 19, 2000 at 12:17:16PM +0100, Rafael Jesus Alcantara Perez wrote:\n> I'm the main author of UESQLC, an embedded SQL compiler that, (now) can g=\n> ...\n> example of C++ with embedded SQL (UESQL). It is under the GPL and this is=\n\nAs you might imagine I'm interested in this. After all I maintain ECPG the\nembedded SQL preprocessor for C distributed with PostgreSQL. So I tried\nlooking into your source, but I cannot get mpcl to compile:\n\ndia2dfaml.cc: In function \u0016oid _CheckIntegrity()':\ndia2dfaml.cc:180: using `typename' outside of template\ndia2dfaml.cc: In function \u0016oid\n_ParseUmlClass(mpcl::TRegularExpressionMatcher &, const string &)':\ndia2dfaml.cc:421: using `typename' outside of template\ndia2dfaml.cc: In function \u0016oid _ResolveStates()':\ndia2dfaml.cc:467: using `typename' outside of template\ndia2dfaml.cc:468: using `typename' outside of template\ndia2dfaml.cc: In function \u0016oid _WriteActions()':\ndia2dfaml.cc:502: using `typename' outside of template\ndia2dfaml.cc: In function \u0016oid _WriteDfaml(const string &)':\ndia2dfaml.cc:520: using `typename' outside of template\ndia2dfaml.cc: In function \u0016oid _WriteTransitions(const string &)':\ndia2dfaml.cc:592: using `typename' outside of template\nmake[3]: *** [dia2dfaml.o] Error 1\n\nI hvae not found the time to dig into it myself.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Tue, 22 Feb 2000 11:10:52 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UESQLC"
}
] |
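For context, the input such preprocessors consume looks roughly like this minimal ECPG-style fragment (a sketch only; the database name is invented for illustration):

    EXEC SQL INCLUDE sqlca;

    int main(void)
    {
        EXEC SQL BEGIN DECLARE SECTION;
        int ntables;
        EXEC SQL END DECLARE SECTION;

        EXEC SQL CONNECT TO mydb;
        /* the host variable :ntables receives the query result */
        EXEC SQL SELECT count(*) INTO :ntables FROM pg_class;
        EXEC SQL DISCONNECT;
        return ntables > 0 ? 0 : 1;
    }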
[
{
"msg_contents": "I tried today for the first time to compile plperl, and didn't have\nmuch success. After fixing a couple of simple problems, I was left\nwith\n\ncc -c -D_HPUX_SOURCE -Aa -I/usr/local/include -I/opt/perl5/lib/5.00503/PA-RISC2.0/CORE +z -I../../../src/interfaces/libpq -I../../../src/include -I../../../src/backend plperl.c\ncpp: \"perl.h\", line 136: warning 2001: Redefinition of macro VOIDUSED.\ncpp: \"perl.h\", line 1474: warning 2001: Redefinition of macro DEBUG.\ncc: \"../../../src/include/utils/int8.h\", line 34: error 1681: Must use +e or -Ae for long long in ANSI mode.\nmake: *** [plperl.o] Error 1\n\nThis is with a plain-vanilla installation of perl 5.005_03 on HPUX 10.20.\nPerl's configure script chooses HP's cc in strict-ANSI (-Aa) mode,\nand I let it have its head on the issue. I could work around it by\nreinstalling Perl using gcc and/or forcing -Ae (not-so-strict ANSI mode)\nin Perl's installation CFLAGS, but if I'm running into this problem with\nthe standard setup then so will a lot of other people on HPUX. I don't\nthink we can say \"you have to have a nonstandard Perl installation to\nuse this\".\n\nBut short of that I don't see a clean answer. We select -Ae in the\nhpux_cc template, but I usually don't use the hpux_cc template ---\nI prefer hpux_gcc for development. (In fact, the first problem I had to\nfix was that plperl's makefile tried to use CFLAGS taken from postgres's\nconfiguration with CC taken from perl's. HP's cc does not like gcc-\nspecific compiler switches, nor vice versa.) So there's noplace for\nplperl to cleanly pull -Ae from.\n\nThe only thing I can think of at the moment is to do something like\n\nif (platform-is-HPUX-and-CC-is-cc)\nCFLAGS+= -Ae\nendif\n\nin plperl's Makefile.PL, but that sure strikes me as awfully ugly.\nAnyone have a better answer?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 19 Feb 2000 14:21:18 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Nasty portability glitch in plperl"
}
] |
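For what it's worth, the ugly conditional might look something like this in Makefile.PL (a sketch only, and exactly the kind of fragile platform/compiler test the message above is wary of):

    # Hypothetical guard: on HP-UX with HP's cc (not gcc), ask for
    # not-so-strict ANSI mode so 'long long' in int8.h compiles.
    use Config;

    my $cflags = $Config{ccflags};
    if ($^O eq 'hpux' && $Config{cc} !~ /gcc/) {
        $cflags .= ' -Ae' unless $cflags =~ /-Ae\b/;
    }
    # (then fold $cflags into the generated makefile's compiler flags)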
[
{
"msg_contents": "Jeroen van Vianen <[email protected]> writes:\n>> Does this work with a non-bison parser? It looks mighty\n>> bison-dependent to me...\n\n> I'm not sure, but it probably is flex dependent (but Postgres always needed \n> flex anyway). I'm not aware of any yacc / byacc / bison dependencies. Don't \n> know if anybody has been successful building Postgres with another parser \n> generator.\n\nUm, you're right of course --- those are lexer not parser datastructures\nyou're poking into. Sorry for my confusion.\n\nWe do in fact work with non-bison parser generators, or did last time\nI tried it (around 6.5 release). I would not like us to stop working\nwith non-bison yaccs, since bison's output depends on alloca() which\nis not available everywhere.\n\nI'm not sure about the situation with lexers. We have been saying for\na long time that flex was required, but since we got rid of the\nscanner's use of trailing context ('/' rules) I think there is a better\nchance that it would work with vanilla lex. Anyone want to try that\nwith current sources?\n\n> BTW, as we ship flex's output lex.yy.c (as scan.c) and bison's output\n> (gram.c) in the distribution, any user would be able to compile the\n> sources, but if they want to start hacking the .l or .y files, they'll\n> need appropriate tools.\n\nRight. I am not aware of any portability problems with flex's output\nas there are with bison's, so it may be that the concern is moot.\nWe may just be able to say \"use the prebuilt scan.c or get flex; we\ndon't care about supporting vendor lexes anymore\".\n\nI do see a potential problem with this patch that's not related to\nportability questions; it is that you're assuming that the lexer's\nfurthest penetration into the source text is a good place to point\nat for parser errors. That may not be true always. In particular,\nI've been advocating solving some other problems by inserting a\none-token lookahead buffer between the parser and the lexer. If that\nhappens then you'd be off by (at least) one token in some cases.\n\nI think the way that this sort of thing is customarily handled in\n\"real\" compilers is that each token carries along an indication of\njust where it was found in the source, and then error messages can\nfinger the right place without making assumptions about synchronization\nbetween different phases of the scanning/parsing process. That might\nbe more work than we can justify for SQL queries; not sure.\n\nBTW, I think that the immediate problem of generating a good error\nmessage for unterminated comments and literals could be solved in other\nways. This patch or something like it might be cool anyway, but you\nshould bear in mind that printing out a query and then a marker that's\nsupposed to line up with something in the query doesn't always work\nall that well. Consider a query that's several dozen lines long,\nsuch as a large table definition. If we had more control over the\nuser interface and could highlight the offending token directly,\nI'd be more excited about doing something like this. (Actually, you\ncould partially address that problem by only printing one line's worth\nof query text leading up to the error marker point. It would still be\ntricky to get it right in the presence of newlines, tabs, etc.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 20 Feb 2000 20:03:40 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] Patch for more readable parse error messages "
},
{
"msg_contents": "Tom Lane wrote:\n\n> We do in fact work with non-bison parser generators, or did last time\n> I tried it (around 6.5 release). I would not like us to stop working\n> with non-bison yaccs, since bison's output depends on alloca() which\n> is not available everywhere.\n\nI think GNU alloca should work on any platform because it's written in a\nportable way.\n",
"msg_date": "Mon, 21 Feb 2000 14:40:54 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PATCHES] Patch for more readable parse error\n\tmessages"
},
{
"msg_contents": "At 20:03 20-02-00 -0500, Tom Lane wrote:\n>I do see a potential problem with this patch that's not related to\n>portability questions; it is that you're assuming that the lexer's\n>furthest penetration into the source text is a good place to point\n>at for parser errors. That may not be true always. In particular,\n>I've been advocating solving some other problems by inserting a\n>one-token lookahead buffer between the parser and the lexer. If that\n>happens then you'd be off by (at least) one token in some cases.\n\nThat's true, but the '*' indicator might at least indicate the approximate \nlocation of the error. I'm not aware of many (programming) languages that \nare able to indicate the error at the correct location all the time, anyway.\n\n>I think the way that this sort of thing is customarily handled in\n>\"real\" compilers is that each token carries along an indication of\n>just where it was found in the source, and then error messages can\n>finger the right place without making assumptions about synchronization\n>between different phases of the scanning/parsing process. That might\n>be more work than we can justify for SQL queries; not sure.\n\nTrue, but requires a lot more work.\n\n>BTW, I think that the immediate problem of generating a good error\n>message for unterminated comments and literals could be solved in other\n>ways. This patch or something like it might be cool anyway, but you\n>should bear in mind that printing out a query and then a marker that's\n>supposed to line up with something in the query doesn't always work\n>all that well. Consider a query that's several dozen lines long,\n>such as a large table definition. If we had more control over the\n>user interface and could highlight the offending token directly,\n>I'd be more excited about doing something like this. (Actually, you\n>could partially address that problem by only printing one line's worth\n>of query text leading up to the error marker point. It would still be\n>tricky to get it right in the presence of newlines, tabs, etc.)\n\nI try to make a good guess at where the location of the error is, but am \nhesitant to only print a few tokens near the error locations, as you won't \nbe able to know where the error was found in complex queries or table \ndefinitions. Please try with more complex queries and tell me what you think.\n\n\nJeroen\n\n",
"msg_date": "Mon, 21 Feb 2000 10:07:11 +0100",
"msg_from": "Jeroen van Vianen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Patch for more readable parse error messages "
},
{
"msg_contents": "On 2000-02-20, Tom Lane mentioned:\n\n> I would not like us to stop working\n> with non-bison yaccs, since bison's output depends on alloca() which\n> is not available everywhere.\n\nCouldn't alloca(x) be defined to palloc(x) where missing? The performance\nwill be worse, but it ought to work.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Tue, 22 Feb 2000 00:57:23 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PATCHES] Patch for more readable parse error\n\tmessages"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On 2000-02-20, Tom Lane mentioned:\n>> I would not like us to stop working\n>> with non-bison yaccs, since bison's output depends on alloca() which\n>> is not available everywhere.\n\n> Couldn't alloca(x) be defined to palloc(x) where missing?\n\nProbably, but I wasn't looking for a workaround; that was just one\nquick illustration of a reason not to want to use bison (one that's\nbitten me personally, so I knew it offhand). We should try not to\nbecome dependent on bison when there are near-equivalent tools, just\non general principles of maintaining portability. For an analogy,\nI believe most of the developers use gcc, but it would be a real bad\nidea for us to abandon support for other compilers.\n\nFor the same sort of reasons I'd prefer that our scanner worked\nwith vanilla lex, not just flex. I'm not sure how far away we are\nfrom that; it may be an unrealistic goal. But if it is within reach\nthen we shouldn't give it up lightly.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 21 Feb 2000 22:56:34 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [PATCHES] Patch for more readable parse error\n\tmessages"
},
{
"msg_contents": "> Peter Eisentraut <[email protected]> writes:\n> > On 2000-02-20, Tom Lane mentioned:\n> >> I would not like us to stop working\n> >> with non-bison yaccs, since bison's output depends on alloca() which\n> >> is not available everywhere.\n> \n> > Couldn't alloca(x) be defined to palloc(x) where missing?\n> \n> Probably, but I wasn't looking for a workaround; that was just one\n> quick illustration of a reason not to want to use bison (one that's\n> bitten me personally, so I knew it offhand). We should try not to\n> become dependent on bison when there are near-equivalent tools, just\n> on general principles of maintaining portability. For an analogy,\n> I believe most of the developers use gcc, but it would be a real bad\n> idea for us to abandon support for other compilers.\n\nBut I don't see non-bison solutions for finding the location of errors. \nIs it possible? Could we enable the feature just for bison?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 21 Feb 2000 23:08:26 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PATCHES] Patch for more readable parse error\n\tmessages"
},
{
"msg_contents": "At 10:56 PM 2/21/00 -0500, Tom Lane wrote:\n>Probably, but I wasn't looking for a workaround; that was just one\n>quick illustration of a reason not to want to use bison (one that's\n>bitten me personally, so I knew it offhand). We should try not to\n>become dependent on bison when there are near-equivalent tools, just\n>on general principles of maintaining portability. For an analogy,\n>I believe most of the developers use gcc, but it would be a real bad\n>idea for us to abandon support for other compilers.\n>\n>For the same sort of reasons I'd prefer that our scanner worked\n>with vanilla lex, not just flex. I'm not sure how far away we are\n>from that; it may be an unrealistic goal. But if it is within reach\n>then we shouldn't give it up lightly.\n\nI agree entirely with the above. The more portable the tool, the larger\nthe potential user base. Unless the goal is to bundle-up Postgres with\na pre-defined set of software, i.e. GNU in this case (despite the fact\nthat I don't see Postgres on their site as part of their list of open-source\nsoftware, and I think I looked twice), go for the cover-the-earth approach.\n\nSQL syntax isn't particularly difficult. On the other hand, I realize there's\na legacy to support. Still, making portions of the product dependent on one\ntool or another is an issue that merits close scrutiny. Shouldn't be done \nexcept under compelling reasons.\n\nI mean, presuming a reasonably modern C, C tools, and large-scale \noperating-system environment makes sense (no reason to run native on a palm\npilot,\nat this point). But unecessary dependence on particular tools when not\nnecessary doesn't make much sense.\n\nJust IMO, of course.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 21 Feb 2000 21:05:51 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PATCHES] Patch for more readable parse\n\terror messages"
},
{
"msg_contents": "At 00:56 22-02-00 +0100, Peter Eisentraut wrote:\n>On 2000-02-20, Jeroen van Vianen mentioned:\n>\n> > The format of the error messages is changed to:\n> >\n> > jeroen=# create abc ( a int4, b int4 );\n> > ERROR: parser: parse error at or near \"abc\":\n> > create abc ( a int4, b int4 )\n> > *\n>\n>I believe this is the wrong approach because it's highly psql specific. If\n>you use PHP or JDBC or something not character cell based you will get\n>misleading output.\n>\n>You might want to start thinking about putting a little more information\n>into an ERROR than just a text string, such as error codes or\n>supplementary information like this. psql could then choose to print a\n>star, other interfaces might set the cursor to the specified place, etc.\n>Much more flexible.\n\nGood idea. As far as I understand things, libpq uses special datastructures \nto access the error code and message and it's up to the application (psql, \nand others) to do what it wants to do with it (let's say print the error). \nThese structures might be enhanced with an error location, but this might \nbe breaking things. And my question is how to do this.\n\nNote that this location part is only filled now when yyerror() throws an \nerror, but other parts of the backend might use a similar approach. OTOH it \nmight be nice then to have every token know its location in the query \nstring (as Don suggested), so you might end up with error messages like:\n\nmydb-> select * from t1, t2 where ...\nERROR: table t2 not found:\nselect * from t1, t2 where ...\n ^\n\n(which may be nice, or not).\n\nWhat I see now is something like this (for psql):\n\npsql sends a query\npsql reads response\n if response is error\n get error location and find context in which error occurred\n print error message, with error location and context\n otherwise\n do what it used to do\n\nand for the other interfaces nothing changes.\n\nThis is something I might be able to implement for 7.1.\n\nWhat do you think?\n\n\nJeroen\n\n",
"msg_date": "Tue, 22 Feb 2000 10:55:13 +0100",
"msg_from": "Jeroen van Vianen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Patch for more readable parse error messages"
},
{
"msg_contents": "On Mon, 21 Feb 2000, Tom Lane wrote:\n\n> For the same sort of reasons I'd prefer that our scanner worked\n> with vanilla lex, not just flex. I'm not sure how far away we are\n> from that; it may be an unrealistic goal. But if it is within reach\n> then we shouldn't give it up lightly.\n\nI concur. Somewhere in between vanilla lex and flex is also POSIX lex,\nwhich does support exclusive start conditions but no <<EOF>>.\n\nAnyone for getting rid of GNU make?\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Tue, 22 Feb 2000 13:18:25 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PATCHES] Patch for more readable parse error\n\tmessages"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> ... Somewhere in between vanilla lex and flex is also POSIX lex,\n> which does support exclusive start conditions but no <<EOF>>.\n\nI noticed that in the flex manual. Does it help us any? That is,\nare there a useful number of lexes out there that do the full POSIX\nspec? If flex is our only real choice for exclusive start conditions\nthen it's pointless to avoid <<EOF>>.\n\n> Anyone for getting rid of GNU make?\n\nNo ;-). GNU make has enough important features that there is no\nnear-equivalent non-GNU make. VPATH, for example. One thing I hope\nwe will be able to do sometime soon is build in an object directory\ntree separate from the source tree... can't realistically do that\nwith any non-GNU make that I've heard of.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 22 Feb 2000 10:50:03 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [PATCHES] Patch for more readable parse error\n\tmessages"
},
{
"msg_contents": "Jeroen van Vianen <[email protected]> writes:\n> What I see now is something like this (for psql):\n\n> psql sends a query\n> psql reads response\n> if response is error\n> get error location and find context in which error occurred\n> print error message, with error location and context\n> otherwise\n> do what it used to do\n\n> and for the other interfaces nothing changes.\n\n> This is something I might be able to implement for 7.1.\n\nThis looks much better to me than doing it in the backend. What still\nneeds a little thought is how to send back the error location from\nbackend to client app.\n\nI'd be inclined to say that the location info should be imbedded as\ntext in the existing textual error message, rather than trying to add\na separate message with a machine-readable location value. The first\nway is much less likely to create compatibility problems with old client\napps. One way to do it is to say that if the last line of the error\nmessage looks like\n\nError-location: nnn\n\nthen libpq should recognize that, strip the line out of the saved\ntextual error message, and make the location value available through\na new API call.\n\nThe reason I suggest a label is that we could further extend this\nprotocol to handle some other things that people have been griping\nabout for a long time: providing identifying error code numbers that\nclient code could rely on instead of trying to match against the error\ntext, and separating out the info about which routine generated the\nerror (which is mighty handy for backend debugging but is useless\ninfo for Joe Average user). Someday the message being sent back\nmight look less like\n\nERROR: relation_info: Relation 12345 not found\n\nand more like\n\nERROR: Failed to find relation OID 12345 in system catalogs\nError-code: 4242\nReporting-routine: relation_info, plancat.c line 543\n\nof which only the first line is really meant for the user.\n\nOf course, making that happen will be a lot of work, and I'm not\nasking you to volunteer for it. But what you do now should fit\nin with further development of the error handling stuff...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 22 Feb 2000 11:12:14 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [PATCHES] Patch for more readable parse error\n\tmessages"
},
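If the protocol sketched above were adopted, the client-side recognition could be as small as this hypothetical helper (assuming the proposed "Error-location:" label format; this is not an existing libpq call):

    #include <stdlib.h>
    #include <string.h>

    /* Pull a proposed "Error-location: nnn" trailer out of an error
     * message string; returns -1 when no trailer is present. */
    int error_location(const char *msg)
    {
        const char *tag = "Error-location:";
        const char *p = strstr(msg, tag);

        return p ? atoi(p + strlen(tag)) : -1;
    }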
{
"msg_contents": "On 2000-02-22, Tom Lane mentioned:\n\n> > Anyone for getting rid of GNU make?\n> \n> No ;-). GNU make has enough important features that there is no\n> near-equivalent non-GNU make. VPATH, for example.\n\nThere are other makes that support this too. While I love GNU make, too,\nall the talk about allowing vanilla lex, etc. is pointless while GNU make\nis required. Users don't see lex at all, they do see make.\n\nOTOH, it is very hard for me to get an overview these days what's actually\nout there in terms of other make's, other lex's, other yacc's, other\ncompilers. You should have an edge there (HPUX and all). Most\ninstallations of commercial Unix vendors I get to nowadays use gcc, gmake,\nflex as system tools. Yesterday I read that Sun builds Java proper with\nGNU make!\n\nThe best way of going about this seems to take one of the perpetrators\n(make file, gram.y, etc.) and try to port it to some given non-GNU tool\nand take a look at the consequences. For example, if we get PostgreSQL to\ncompile with FreeBSD's make without crippling everything, that would be a\nwin for the user base. This may in fact be the first experiment.\n\n> One thing I hope we will be able to do sometime soon is build in an\n> object directory tree separate from the source tree... can't\n> realistically do that with any non-GNU make that I've heard of.\n\nI'm planning to work on that for 7.1. But here's an interesting tidbit:\nAutomake does support this feature but in its manual it claims that it\ndoes not use any GNU make specific features. And in fact, VPATH exists in\nboth System V's and 4.3 BSD's make.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Wed, 23 Feb 2000 02:20:12 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PATCHES] Patch for more readable parse error\n\tmessages"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On 2000-02-22, Tom Lane mentioned:\n>>>> Anyone for getting rid of GNU make?\n>> \n>> No ;-). GNU make has enough important features that there is no\n>> near-equivalent non-GNU make. VPATH, for example.\n\n> There are other makes that support this too. While I love GNU make, too,\n> all the talk about allowing vanilla lex, etc. is pointless while GNU make\n> is required. Users don't see lex at all, they do see make.\n\nHuh? Assuming someone will have program X installed is not the same as\nassuming they will have program Y installed. In this particular case,\na more exact way of putting it is that assuming program X is installed\nis not the same as assuming that program Y's prebuilt-on-another-machine\noutput is usable on this platform.\n\n> OTOH, it is very hard for me to get an overview these days what's actually\n> out there in terms of other make's, other lex's, other yacc's, other\n> compilers.\n\nNot much. The real problem here is \"what set of tool features do you\nassume you have, and what's it costing you in portability?\" GNU make\nprovides a very rich feature set that's widely portable, although you\ndo have to port the particular implementation. If you don't want to\nassume GNU make but just a generic make, there's a big gap in features\nbefore you drop down to what's actually portable to a wide class of\nvendor-provided makes. VPATH, for example, does exist in *some*\nvendor makes, but as a practical matter if you use it then you'd better\ntell people \"my program requires GNU make\". It's not worth the trouble\nto keep track of the exceptions.\n\nI will be the first to admit this is all a matter of judgment calls\nrather than certainties. As far as I can see, it's not worth our\ntrouble to try to operate with non-GNU makes; it is worth the trouble\nto work with non-GNU yaccs, because we're not really using any bison-\nspecific features; it's looking like we should forget about non-GNU\nlexes, but I'm not quite convinced yet. You're free to hold different\nopinions of course. I've been around for a few years in the portable-\nsoftware game, so I tend to think I know where the minefields are, but\nperhaps my hard experiences are out of date.\n\n> The best way of going about this seems to take one of the perpetrators\n> (make file, gram.y, etc.) and try to port it to some given non-GNU tool\n> and take a look at the consequences.\n\nBut that only tells you about the one tool; in fact, only about the one\nversion of the one tool that you test. In practice, useful knowledge\nin this area comes from the school of hard knocks: ship an open-source\nprogram and see what complaints you get. I'd rather rely on experience\npreviously gained than learn those lessons again...\n\n>> One thing I hope we will be able to do sometime soon is build in an\n>> object directory tree separate from the source tree... can't\n>> realistically do that with any non-GNU make that I've heard of.\n\n> I'm planning to work on that for 7.1. But here's an interesting tidbit:\n> Automake does support this feature but in its manual it claims that it\n> does not use any GNU make specific features.\n\nYeah? Do they claim not to need VPATH to do it? I suppose it might\nbe possible, if they are willing to write sufficiently ugly and\nnon-hand-maintainable makefiles. 
Not sure that's a good tradeoff\nthough.\n\n> And in fact, VPATH exists in both System V's and 4.3 BSD's make.\n\nYou're still confusing two datapoints with the wide world...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 23 Feb 2000 00:48:09 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [PATCHES] Patch for more readable parse error\n\tmessages"
},
{
"msg_contents": "On Wed, 23 Feb 2000, Tom Lane wrote:\n\n> > And in fact, VPATH exists in both System V's and 4.3 BSD's make.\n> \n> You're still confusing two datapoints with the wide world...\n\nI challenge everyone to show me a make without VPATH. In fact, show me two\nmakes without a feature that you can't live without, and I shall forever\nhold my peace. It's certainly easier to say \"let's support yacc, because\nwe actually don't use any non-yacc features\" than saying it for make. But\nit's not the idea to say \"we need GNU make because it has all these\nfeatures\" when 93% of these features in fact exist in all other reasonable\nmakes as well. It's not the end of the world but it's something that\nshouldn't be ignored.\n\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Wed, 23 Feb 2000 14:20:44 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "GNU make (Re: [HACKERS] Re: [PATCHES] Patch for more readable\n\tparse error messages)"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On Wed, 23 Feb 2000, Tom Lane wrote:\n>>>> And in fact, VPATH exists in both System V's and 4.3 BSD's make.\n>> \n>> You're still confusing two datapoints with the wide world...\n\n> I challenge everyone to show me a make without VPATH. In fact, show me two\n> makes without a feature that you can't live without, and I shall forever\n> hold my peace.\n\nOut of the four systems I have easy access to: HPUX 10, HPUX 9, Linux\n(some fairly old RedHat version), and SunOS 4.1.4, two have makes\nwithout VPATH ... and Linux doesn't really count since it's using gmake\nanyway.\n\nNow you can argue that HPUX 9 and SunOS 4.1.4 are dinosaurs that should\nbe put out of their misery, and I wouldn't disagree --- but reality is\nthat a lot of people are running older systems and don't have the time\nor interest to upgrade 'em. \"Portability\" doesn't mean \"portability to\nthe newest and most standards-conformant systems\", it means portability\nto what's actually out there.\n\n> it's not the idea to say \"we need GNU make because it has all these\n> features\" when 93% of these features in fact exist in all other reasonable\n> makes as well.\n\nIf I thought we were anywhere near that close to being able to use old\nmakes, I'd be arguing for removing the GNU-make dependency too. But\nI don't think it's going to be practical...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 23 Feb 2000 11:30:10 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: GNU make (Re: [HACKERS] Re: [PATCHES] Patch for more readable\n\tparse error messages)"
},
{
"msg_contents": "At 11:12 22-02-00 -0500, Tom Lane wrote:\n>Jeroen van Vianen <[email protected]> writes:\n> > What I see now is something like this (for psql):\n>\n> > psql sends a query\n> > psql reads response\n> > if response is error\n> > get error location and find context in which error \n> occurred\n> > print error message, with error location and context\n> > otherwise\n> > do what it used to do\n>\n> > and for the other interfaces nothing changes.\n>\n> > This is something I might be able to implement for 7.1.\n>\n>This looks much better to me than doing it in the backend. What still\n>needs a little thought is how to send back the error location from\n>backend to client app.\n>\n>I'd be inclined to say that the location info should be imbedded as\n>text in the existing textual error message, rather than trying to add\n>a separate message with a machine-readable location value. The first\n>way is much less likely to create compatibility problems with old client\n>apps. One way to do it is to say that if the last line of the error\n>message looks like\n>\n>Error-location: nnn\n>\n>then libpq should recognize that, strip the line out of the saved\n>textual error message, and make the location value available through\n>a new API call.\n\nIsn't it possible to get this kind of information from a call to a new API \nstruct errorinfo * PQerrorInfo(conn) where the struct contains info about \nthe error message, location and code, rather than a call to \nPQerrorMessage(conn) ?\n\n>The reason I suggest a label is that we could further extend this\n>protocol to handle some other things that people have been griping\n>about for a long time: providing identifying error code numbers that\n>client code could rely on instead of trying to match against the error\n>text, and separating out the info about which routine generated the\n>error (which is mighty handy for backend debugging but is useless\n>info for Joe Average user). Someday the message being sent back\n>might look less like\n>\n>ERROR: relation_info: Relation 12345 not found\n>\n>and more like\n>\n>ERROR: Failed to find relation OID 12345 in system catalogs\n>Error-code: 4242\n>Reporting-routine: relation_info, plancat.c line 543\n>\n>of which only the first line is really meant for the user.\n\nThis might even allow the client app to write out a customized error \nmessage, instead of 'foreign key ... violation' write 'You cannot delete \nany ... when there are still ...', based upon error codes.\n\n>Of course, making that happen will be a lot of work, and I'm not\n>asking you to volunteer for it. But what you do now should fit\n>in with further development of the error handling stuff...\n\n\n\nJeroen\n\n",
"msg_date": "Thu, 24 Feb 2000 09:35:19 +0100",
"msg_from": "Jeroen van Vianen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PATCHES] Patch for more readable parse\n\terror messages"
},
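To make Jeroen's suggestion concrete, one possible C shape for the proposed call might look like the sketch below. This is purely hypothetical -- no such call exists in libpq -- and every field name is an assumption drawn from the message above:

    /* Hypothetical -- not an existing libpq API. */
    typedef struct errorinfo
    {
        int   code;       /* machine-readable error code, e.g. 4242 */
        int   location;   /* character offset of a parse error, or -1 */
        char *message;    /* the human-readable error text */
        char *routine;    /* reporting backend routine, for debugging */
    } errorinfo;

    /* Would return NULL if the last command succeeded. */
    const errorinfo *PQerrorInfo(const PGconn *conn);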
{
"msg_contents": "On Wed, 23 Feb 2000, Tom Lane wrote:\n\n> Peter Eisentraut <[email protected]> writes:\n> > I challenge everyone to show me a make without VPATH. In fact, show me two\n> > makes without a feature that you can't live without, and I shall forever\n> > hold my peace.\n> \n> Out of the four systems I have easy access to: HPUX 10, HPUX 9, Linux\n> (some fairly old RedHat version), and SunOS 4.1.4, two have makes\n> without VPATH\n\nYou win. ;)\n\nI surveyed several machines as well (Solaris, IRIX, FreeBSD, HPUX) which\nall had this feature. I feel better now with actual data points, I hope\nthat's fair enough.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Thu, 24 Feb 2000 13:51:40 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GNU make (Re: [HACKERS] Re: [PATCHES] Patch for more readable\n\tparse error messages)"
},
{
"msg_contents": "On 2000-02-24, Jeroen van Vianen mentioned:\n\n> At 11:12 22-02-00 -0500, Tom Lane wrote:\n\n> >I'd be inclined to say that the location info should be imbedded as\n> >text in the existing textual error message, rather than trying to add\n> >a separate message with a machine-readable location value.\n\n> Isn't it possible to get this kind of information from a call to a new API \n> struct errorinfo * PQerrorInfo(conn) where the struct contains info about \n> the error message, location and code, rather than a call to \n> PQerrorMessage(conn) ?\n\nIMHO, the use of error messages in PostgreSQL has a big conceptual\nproblem. It's only too tempting to write elog(ERROR, \"I don't know what to\ndo now.\") anywhere and any time. This is very convenient for the\ndevelopers but not very nice for client applications that want to\nrecognize, categorize, and recover from errors. There isn't even a clean\nseparation of perfectly normal user-level errors (\"referential integrity\nviolation\") and internal errors (bugs) (\"can't attach node 718 to\nT_ParseNodeFoo\"). Sure, there's FATAL, but it's not always appropriate.\n\nChapter 22 of SQL92 defines error codes (\"SQLSTATE\") for (presumably)\nevery condition that could come up. It has classes and subclasses and\nits code space is extensible. It would be very nice if we could classify\nerror messages in the backend according to that list and, say, do an\n\n\terror(PGE_TRIGGERED_DATA_CHANGE_VIOLATION);\n\ninstead. The frontend could then call PQsqlstate(connection) to get this\ncode, or it could call something equivalent to strerror that would convert\nthis code to a string (potentially language-dependent even). If someone\nwants to communicate an internal yet non-fatal error, there would be a\nspecial code reserved for it, telling the client application that it might\nas well forget about it. Legacy applications could still call\nPQerrorMessage which would interally call the above two.\n\nA necessary extension to the above would be a way to pass along supportive \ndata. The tricky part will be to figure out a syntax that is not too\ncumbersome, not too restrictive, and encourages help by the compiler. For\nexample,\n\n\terror(PG_PARSE_ERROR, 2345)\n\terror(PG_PARSE_ERROR(2345))\n\terror(PG_PARSE_ERROR, errorIntData(2345))\n\terror(PG_INTERNAL, errorStrData(\"I'm way lost\"))\n\nor something hopefully much better. If error() is made a macro, then we\ncould include file and line number and have some libpq accessor function\nfor them. Somehow, the client should also be able to access the \"2345\"\ndirectly.\n\nIn any case, I believe that the actual error message string should be\nassembled in the front-end. I'm not too fond of the idea of letting\nclients parse out the interesting parts of an error out of a blob of text.\n\nComments? Anyone interested? This would be very dear to my heart so I'd be\nvery willing to spend a lot of time on it.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Fri, 25 Feb 2000 00:38:47 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PATCHES] Patch for more readable parse error\n\tmessages"
},
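One minimal way Peter's proposal could be spelled in C is sketched below. The enum values, the error() signature, and PQsqlstate() are all assumptions taken from the message above, not existing PostgreSQL code:

    /* Backend side: a central list of error codes (hypothetical). */
    typedef enum PGErrorCode
    {
        PGE_SUCCESS = 0,
        PGE_PARSE_ERROR,
        PGE_TRIGGERED_DATA_CHANGE_VIOLATION,
        PGE_INTERNAL                /* reserved for internal errors */
    } PGErrorCode;

    /* Replaces bare elog(); supportive data rides in printf-style args. */
    extern void error(PGErrorCode code, const char *fmt, ...);

    /* Frontend side: fetch the code for the last command (hypothetical). */
    extern const char *PQsqlstate(const PGconn *conn);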
{
"msg_contents": "> In any case, I believe that the actual error message string should be\n> assembled in the front-end. I'm not too fond of the idea of letting\n> clients parse out the interesting parts of an error out of a blob of text.\n> \n> Comments? Anyone interested? This would be very dear to my heart so I'd be\n> very willing to spend a lot of time on it.\n\nVadim strongly believes in error mesage numbers. We certainly should do\nbetter, if only to print a code before the error code or something.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 24 Feb 2000 18:50:24 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PATCHES] Patch for more readable parse error\n\tmessages"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> IMHO, the use of error messages in PostgreSQL has a big conceptual\n> problem. It's only too tempting to write elog(ERROR, \"I don't know what to\n> do now.\") anywhere and any time. This is very convenient for the\n> developers but not very nice for client applications that want to\n> recognize, categorize, and recover from errors.\n\nThe vast majority of the one-off error messages are internal consistency\nchecks. It seems to me that a workable compromise is to insist on\nstandardized error codes/texts for reporting user mistakes, but to\ncontinue to allow spur-of-the-moment messages for internal errors.\nMost or all internal errors would have the same classification anyway\nfrom the point of view of an application trying to decide what to do,\nso they could all share one or a few \"error ID numbers\".\n\n> A necessary extension to the above would be a way to pass along supportive \n> data. The tricky part will be to figure out a syntax that is not too\n> cumbersome, not too restrictive, and encourages help by the compiler.\n\nA printf/elog-like syntax should still work --- the message catalog that\nPGE_TRIGGERED_DATA_CHANGE_VIOLATION indexes into would contain strings\nthat still have %-escapes, but that shouldn't make life any more\ndifficult for internationalization. And we do have the opportunity\nto check mistakes with gcc, if we stick to the standard printf escapes.\n\nOr do we? Hmm ... not if the error message text isn't available at\nthe call site ... Here's a thought: suppose that error code macros like\nPGE_TRIGGERED_DATA_CHANGE_VIOLATION normally expand to an error code\nnumber, which eventually gets used as an index into a localizable table\nof error format strings; but we have the option to run with header files\nthat define all these macros as the actual error message literal\nstrings. Then gcc could check for parameter mismatch in that case.\nFor development work that might even be the normal thing, and only\nin production scenarios would you introduce the extra level of\nindirection to get to an error message string.\n\n> In any case, I believe that the actual error message string should be\n> assembled in the front-end.\n\nThat will not work, because the set of possible error messages will\nundoubtedly change with every backend release, and we do *not* want\nto get into a situation where out-of-date clients mean you get no\nerror message (or worse, a wrong error message). It will be better\nto have the message table on the backend side. As long as the backend\nships an identifying code number along with the message text, I think\nthat will satisfy the needs of applications to avoid reverse-parsing\nerror messages.\n\nOther than that, I agree with everything you say ;-)\n\n> Comments? Anyone interested? This would be very dear to my heart so I'd be\n> very willing to spend a lot of time on it.\n\nIt will take a lot of time to clean this up, but I think everyone agrees\nwe need to do it. It's just been a matter of someone taking on the job.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 24 Feb 2000 19:17:36 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [PATCHES] Patch for more readable parse error\n\tmessages"
},
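The two-mode macro trick Tom describes could look roughly like this; the build flag, macro name, message text, and code number below are all illustrative assumptions:

    #ifdef DEV_ERROR_STRINGS
    /* development build: the macro is the literal format string,
     * so gcc's printf-style argument checking works */
    #define PGE_RELATION_NOT_FOUND \
        "Failed to find relation OID %u in system catalogs"
    #else
    /* production build: the macro is an index into a localizable
     * message table; elog() would need a variant that does the lookup */
    #define PGE_RELATION_NOT_FOUND  4242
    #endif

    /* either way, the call site reads the same:
     *     elog(ERROR, PGE_RELATION_NOT_FOUND, relid);
     */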
{
"msg_contents": "At 06:50 PM 2/24/00 -0500, Bruce Momjian wrote:\n>> In any case, I believe that the actual error message string should be\n>> assembled in the front-end. I'm not too fond of the idea of letting\n>> clients parse out the interesting parts of an error out of a blob of text.\n>> \n>> Comments? Anyone interested? This would be very dear to my heart so I'd be\n>> very willing to spend a lot of time on it.\n>\n>Vadim strongly believes in error mesage numbers. We certainly should do\n>better, if only to print a code before the error code or something.\n\nI do, too. Anyone else with a language implementation background is likely\nto share that bias.\n\nFor starters ... you can at least imagine doing things like provide error\nmessages in languages other than English. Actually...Vadim could probably\nforce the issue by commiting a version with all the error messages in\nRussian! Hmmm...wonder if he's thought of that? :)\n\nAnd for applications it often makes a lot more sense to just get a \ndefined code.\n\nWhen I improved on the AOLserver driver for Postgres, one of my goals\nwas to have it survive the closing of a backend. This gets less \ncrucial with each bug fix, but, heck ... the backend still pees its\npants and crashes occasionally, let's face it. In this case, the\ndriver wants to reestablish the connection to the backend (because\nit's being managed as part of a persistent pool of connections by\nthe web server) but return an error.\n\nAfterwards, all other backends close themselves and pass back a\ndelightfully wordy message that one should retry their query because\nit didn't really crash, but rather is closing just in case shared\nmemory has been corrupted by the very naughty backend that really\ndid crash. In this case, the driver wants to reconnect and \nretry the query, and if it succeeds return normally, with the\nweb server none the wiser.\n\n(works great, BTW)\n\nThere's no documented way to distinguish between the two kinds of\nbackend closures that I could find. Interpreting the string in\ngeneral seems to be how one is expected to probe to determine exactly\nwhat has happened, not only in this case but with other errors, too.\n\nThis sucks, IMO.\n\nIt turns out there's a trivial way to distinguish these two particular\ncases I mention, without resorting to looking at the actual error message,\nbut I think it illustrates the general kludginess of returning strings\nwith no error code.\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Thu, 24 Feb 2000 16:20:43 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PATCHES] Patch for more readable parse\n\terror messages"
},
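The reconnect-and-retry behavior Don describes can be approximated with standard libpq calls; this is only a sketch of the idea (the wrapper name is made up), and note that it still cannot tell the two kinds of backend closure apart without inspecting the message text -- which is exactly his complaint:

    #include <libpq-fe.h>

    /* Sketch: retry a query once across a lost backend connection. */
    PGresult *
    exec_with_retry(PGconn *conn, const char *query)
    {
        PGresult *res = PQexec(conn, query);

        if (PQstatus(conn) == CONNECTION_BAD)
        {
            if (res)
                PQclear(res);
            PQreset(conn);              /* try to reestablish the backend */
            if (PQstatus(conn) != CONNECTION_OK)
                return NULL;            /* give up; caller reports error */
            res = PQexec(conn, query);  /* one retry only */
        }
        return res;
    }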
{
"msg_contents": "At 07:17 PM 2/24/00 -0500, Tom Lane wrote:\n\n>The vast majority of the one-off error messages are internal consistency\n>checks. It seems to me that a workable compromise is to insist on\n>standardized error codes/texts for reporting user mistakes, but to\n>continue to allow spur-of-the-moment messages for internal errors.\n>Most or all internal errors would have the same classification anyway\n>from the point of view of an application trying to decide what to do,\n>so they could all share one or a few \"error ID numbers\".\n\nI have no problem with this. Why not just prepend them with an \"internal\"\nerror code? Clients can't really do much other than gasp \"omigosh!\" when\nconfronted with an internal error anyway...\n\n>Or do we? Hmm ... not if the error message text isn't available at\n>the call site ... Here's a thought: suppose that error code macros like\n>PGE_TRIGGERED_DATA_CHANGE_VIOLATION normally expand to an error code\n>number, which eventually gets used as an index into a localizable table\n>of error format strings; but we have the option to run with header files\n>that define all these macros as the actual error message literal\n>strings. Then gcc could check for parameter mismatch in that case.\n>For development work that might even be the normal thing, and only\n>in production scenarios would you introduce the extra level of\n>indirection to get to an error message string.\n\nSomething like this sounds like a fine.\n\n>> In any case, I believe that the actual error message string should be\n>> assembled in the front-end.\n>\n>That will not work, because the set of possible error messages will\n>undoubtedly change with every backend release, and we do *not* want\n>to get into a situation where out-of-date clients mean you get no\n>error message (or worse, a wrong error message). It will be better\n>to have the message table on the backend side.\n\nYes, this is where it belongs. An application gets an error number,\nthen asks for a message to go with it if it wants one. Or, the\nerror's returned as an error code and message, either way. \n\n> As long as the backend\n>ships an identifying code number along with the message text, I think\n>that will satisfy the needs of applications to avoid reverse-parsing\n>error messages.\n\nYep.\n\n>\n>Other than that, I agree with everything you say ;-)\n>\n>> Comments? Anyone interested? This would be very dear to my heart so I'd be\n>> very willing to spend a lot of time on it.\n>\n>It will take a lot of time to clean this up, but I think everyone agrees\n>we need to do it. It's just been a matter of someone taking on the job.\n\nGo, Peter!\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Thu, 24 Feb 2000 16:33:41 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PATCHES] Patch for more readable parse\n\terror messages"
},
{
"msg_contents": "Here is more information about it.\n\n> Jeroen van Vianen <[email protected]> writes:\n> >> Does this work with a non-bison parser? It looks mighty\n> >> bison-dependent to me...\n> \n> > I'm not sure, but it probably is flex dependent (but Postgres always needed \n> > flex anyway). I'm not aware of any yacc / byacc / bison dependencies. Don't \n> > know if anybody has been successful building Postgres with another parser \n> > generator.\n> \n> Um, you're right of course --- those are lexer not parser datastructures\n> you're poking into. Sorry for my confusion.\n> \n> We do in fact work with non-bison parser generators, or did last time\n> I tried it (around 6.5 release). I would not like us to stop working\n> with non-bison yaccs, since bison's output depends on alloca() which\n> is not available everywhere.\n> \n> I'm not sure about the situation with lexers. We have been saying for\n> a long time that flex was required, but since we got rid of the\n> scanner's use of trailing context ('/' rules) I think there is a better\n> chance that it would work with vanilla lex. Anyone want to try that\n> with current sources?\n> \n> > BTW, as we ship flex's output lex.yy.c (as scan.c) and bison's output\n> > (gram.c) in the distribution, any user would be able to compile the\n> > sources, but if they want to start hacking the .l or .y files, they'll\n> > need appropriate tools.\n> \n> Right. I am not aware of any portability problems with flex's output\n> as there are with bison's, so it may be that the concern is moot.\n> We may just be able to say \"use the prebuilt scan.c or get flex; we\n> don't care about supporting vendor lexes anymore\".\n> \n> I do see a potential problem with this patch that's not related to\n> portability questions; it is that you're assuming that the lexer's\n> furthest penetration into the source text is a good place to point\n> at for parser errors. That may not be true always. In particular,\n> I've been advocating solving some other problems by inserting a\n> one-token lookahead buffer between the parser and the lexer. If that\n> happens then you'd be off by (at least) one token in some cases.\n> \n> I think the way that this sort of thing is customarily handled in\n> \"real\" compilers is that each token carries along an indication of\n> just where it was found in the source, and then error messages can\n> finger the right place without making assumptions about synchronization\n> between different phases of the scanning/parsing process. That might\n> be more work than we can justify for SQL queries; not sure.\n> \n> BTW, I think that the immediate problem of generating a good error\n> message for unterminated comments and literals could be solved in other\n> ways. This patch or something like it might be cool anyway, but you\n> should bear in mind that printing out a query and then a marker that's\n> supposed to line up with something in the query doesn't always work\n> all that well. Consider a query that's several dozen lines long,\n> such as a large table definition. If we had more control over the\n> user interface and could highlight the offending token directly,\n> I'd be more excited about doing something like this. (Actually, you\n> could partially address that problem by only printing one line's worth\n> of query text leading up to the error marker point. 
It would still be\n> tricky to get it right in the presence of newlines, tabs, etc.)\n> \n> \t\t\tregards, tom lane\n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 9 Jun 2000 08:40:21 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Patch for more readable parse error messages"
}
]
[
{
"msg_contents": "Hi,\n\nThe following phenomenon was reported to pgsql-jp(ML in Japan).\n\nrest=# select -1234567890.1234567;\nERROR: Unable to convert left operator '-' from type 'unknown'\n\n-1234567890.1234567 is treated as - '1234567890.1234567'\nas the following comment in scan.l says.\n\n /* we no longer allow unary minus in numbers.\n * instead we pass it separately to parser. there it gets\n * coerced via doNegate() -- Leon aug 20 1999\n */\n\nHowever doNegate() does nothing for SCONST('1234567890.1234567').\nI don't understand where or how to combine '-' and numeric SCONST.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Mon, 21 Feb 2000 16:06:07 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Numeric with '-'"
},
{
"msg_contents": "A strange thing I noticed with this is that\n\n\"select -234567890.1234567;\" works and\n\"select -1234567890.123456;\" also works while\n\"select -1234567890.1234567;\" does not. That\nextra character just seems to push things over\nthe edge.\n\nIt almost seems like there is some sort of length\nrestriction somewhere in the parser.\n\nOn Mon, Feb 21, 2000 at 04:06:07PM +0900, Hiroshi Inoue wrote:\n> Hi,\n> \n> The following phenomenon was reported to pgsql-jp(ML in Japan).\n> \n> rest=# select -1234567890.1234567;\n> ERROR: Unable to convert left operator '-' from type 'unknown'\n> \n> -1234567890.1234567 is treated as - '1234567890.1234567'\n> as the following comment in scan.l says.\n> \n> /* we no longer allow unary minus in numbers.\n> * instead we pass it separately to parser. there it gets\n> * coerced via doNegate() -- Leon aug 20 1999\n> */\n> \n> However doNegate() does nothing for SCONST('1234567890.1234567').\n> I don't understand where or how to combine '-' and numeric SCONST.\n> \n> Regards.\n> \n> Hiroshi Inoue\n> [email protected]\n> \n> ************\n\n-- \nThe world's most ambitious and comprehensive PC game database project.\n\n http://www.mobygames.com\n",
"msg_date": "Mon, 21 Feb 2000 01:54:37 -0600",
"msg_from": "Brian Hirt <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Numeric with '-'"
},
{
"msg_contents": "> -----Original Message-----\n> From: Brian Hirt [mailto:[email protected]]\n> \n> A strange thing I noticed with this is that\n> \n> \"select -234567890.1234567;\" works and\n> \"select -1234567890.123456;\" also works while\n> \"select -1234567890.1234567;\" does not. That\n> extra character just seems to push things over\n> the edge.\n> \n> It almost seems like there is some sort of length\n> restriction somewhere in the parser.\n>\n\nCurrently numeric constants are FLOAT8 constants if the\nthe precision <= 17 otherwise string constants.\n\nRegards.\n\nHiroshi Inoue\[email protected] \n \n\n",
"msg_date": "Mon, 21 Feb 2000 17:53:20 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Numeric with '-'"
},
{
"msg_contents": "Brian Hirt <[email protected]> writes:\n> \"select -1234567890.123456;\" also works while\n> \"select -1234567890.1234567;\" does not. That\n> extra character just seems to push things over\n> the edge.\n\n> It almost seems like there is some sort of length\n> restriction somewhere in the parser.\n\nIndeed there is, and you'll find it at src/backend/parser/scan.l\nline 355 (in current sources). The lexer backs off from \"float\nconstant\" to \"unspecified string constant\" in order to avoid losing\nprecision from conversion to float. Which is fine, except that\nwithout any cue that the constant is numeric, the parser is unable\nto figure out what to do with the '-' operator.\n\nI've been ranting about this in a recent pghackers thread ;-).\nThe lexer shouldn't have to commit to a conversion to float8\nin order to report that a token looks like a numeric literal.\n\nThe resulting error message\nERROR: Unable to convert left operator '-' from type 'unknown'\nisn't exactly up to a high standard of clarity either; what it\nreally means is \"unable to choose a unique left operator '-'\nfor type 'unknown'\", and it ought to suggest adding an explicit\ncast. I'll see what I can do about that. But the right way to\nfix the fundamental problem is still under debate.\n\nIn the meantime you can provide the parser a clue with an\nexplicit cast:\n\nplay=> select -1234567890.1234567::numeric;\n ?column?\n-----------------\n-1234567890.12346\n(1 row)\n\nThis still seems a little broken though, since it looks like the\nconstant's precision is getting truncated to 15 digits; presumably\nthere's a coercion to float happening in there somewhere, but I\ndon't understand where at the moment...\n\nA few minutes later: yes I do: there's no unary minus operator\ndefined for type numeric, so the parser does the best it can\nby applying float8um instead. Jan?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 21 Feb 2000 04:00:51 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Numeric with '-' "
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> The following phenomenon was reported to pgsql-jp(ML in Japan).\n\n> rest=# select -1234567890.1234567;\n> ERROR: Unable to convert left operator '-' from type 'unknown'\n\nI've committed fixes that make the parser treat numeric literals\nthe same no matter how many digits they have. With current sources,\n\nregression=# select -1234567890.1234567;\n ?column?\n-------------------\n -1234567890.12346\n(1 row)\n\nwhich is probably still not what you want, because the default\ntype for a non-integer literal is float8 in the absence of any\ncontext to clue the system otherwise, so you lose precision.\nYou can do\n\nregression=# select -1234567890.12345678900::numeric;\n ?column?\n-------------------------\n -1234567890.12345678900\n(1 row)\n\nbut in reality that's only working because of the way that doNegate\nworks on literals; since there is no unary minus operator for NUMERIC,\na minus on a non-constant value is going to be coerced to float8:\n\nregression=# select -val from num_data;\n ?column?\n------------------\n 0\n 0\n 34338492.215397\n -4.31\n -7799461.4119\n -16397.038491\n -93901.57763026\n 83028485\n -74881\n 24926804.0450474\n(10 rows)\n\nwhereas this works right:\n\nregression=# select 0-val from num_data;\n ?column?\n---------------------\n 0.0000000000\n 0.0000000000\n 34338492.2153970470\n -4.3100000000\n -7799461.4119000000\n -16397.0384910000\n -93901.5776302600\n 83028485.0000000000\n -74881.0000000000\n 24926804.0450474200\n(10 rows)\n\nSomebody ought to write a NUMERIC unary minus...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 21 Feb 2000 14:52:02 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Numeric with '-' "
},
{
"msg_contents": "On 2000-02-21, Tom Lane mentioned:\n\n> I've been ranting about this in a recent pghackers thread ;-).\n> The lexer shouldn't have to commit to a conversion to float8\n> in order to report that a token looks like a numeric literal.\n\nHas the ranting resulted in any idea yet? ISTM that keeping a non-integer\nnumber as a string all the way to the executor shouldn't hurt too much.\nAfter all, according to SQL 123.45 *is* a NUMERIC literal! By making it a\nfloat we're making our users liable to breaking all kinds of fiscal\nregulations in some places. (Ask Jan.)\n\n> The resulting error message\n> ERROR: Unable to convert left operator '-' from type 'unknown'\n> isn't exactly up to a high standard of clarity either;\n\nSpeaking of 'unknown', this is my favourite brain-damaged query of all\ntimes:\n\npeter=> select 'a' like 'a';\nERROR: Unable to identify an operator '~~' for types 'unknown' and 'unknown'\n You will have to retype this query using an explicit cast\n\nIs there a good reason that a character literal is unknown? I'm sure the\nreasons lie somewhere in the extensible type system, but if I wanted it to\nbe something else explicitly then I would have written DATE 'yesterday'.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Tue, 22 Feb 2000 00:57:34 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Numeric with '-' "
},
{
"msg_contents": "At 12:57 AM 2/22/00 +0100, Peter Eisentraut wrote:\n\n>Has the ranting resulted in any idea yet? ISTM that keeping a non-integer\n>number as a string all the way to the executor shouldn't hurt too much.\n>After all, according to SQL 123.45 *is* a NUMERIC literal! By making it a\n>float we're making our users liable to breaking all kinds of fiscal\n>regulations in some places. (Ask Jan.)\n\nCertainly there was a time in the past, at least, where cross-compilers\nfrequently did something along these lines, if they were designed\nto support a variety of target architectures. Not so common now in the\ncompiler world since typically host and target both support IEEE\nstandard floating point operations, but 'twas so back in the days before\nthe standard existed and before hardware implementations proliferated.\nIt wouldn't impact the performance of query parsing and analysis noticably.\n\nYou have to take care when (for instance) folding operations on\nconstants - I suspect that somewhere in the 50K lines of the SQL92\ndraft or the 83K lines of the SQL3 draft precise rules for such \nthings are laid down. Though probably in an incomprehensible fashion!\n\n>Speaking of 'unknown', this is my favourite brain-damaged query of all\n>times:\n>\n>peter=> select 'a' like 'a';\n>ERROR: Unable to identify an operator '~~' for types 'unknown' and 'unknown'\n> You will have to retype this query using an explicit cast\n\nThat *is* very cool! :) Postgres is an amazing beast at times!\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 21 Feb 2000 16:28:53 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Numeric with '-' "
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> > The following phenomenon was reported to pgsql-jp(ML in Japan).\n> \n> > rest=# select -1234567890.1234567;\n> > ERROR: Unable to convert left operator '-' from type 'unknown'\n> \n> I've committed fixes that make the parser treat numeric literals\n> the same no matter how many digits they have. With current sources,\n> \n> regression=# select -1234567890.1234567;\n> ?column?\n> -------------------\n> -1234567890.12346\n> (1 row)\n> \n> which is probably still not what you want,\n\nHmm,this may be worse than before.\nINSERT/UPDATE statements would lose precision without\ntelling any error/warnings.\n\n> because the default\n> type for a non-integer literal is float8 in the absence of any\n> context to clue the system otherwise, so you lose precision.\n> You can do\n>\n\nShouldn't decimal constants be distinguished from real constants ?\nFor example, decimal --> NCONST -> T_Numreic Value -> \nConst node of type NUMERICOID .... \n\nComments ?\n \nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Tue, 22 Feb 2000 10:13:57 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Numeric with '-' "
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> Hmm,this may be worse than before.\n> INSERT/UPDATE statements would lose precision without\n> telling any error/warnings.\n\nThey didn't give any such warning before, either. I doubt I've\nmade anything worse.\n\n> Shouldn't decimal constants be distinguished from real constants ?\n\nWhy? I don't see any particularly good reason for distinguishing\n1234567890.1234567890 from 1.2345678901234567890e9. (numeric_in\ndoes accept both these days, BTW.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 21 Feb 2000 22:34:56 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Numeric with '-' "
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> > Hmm,this may be worse than before.\n> > INSERT/UPDATE statements would lose precision without\n> > telling any error/warnings.\n> \n> They didn't give any such warning before, either. I doubt I've\n> made anything worse.\n>\n\nBefore your change\nINSERT into t (numdata) values (-1234567890.1234567);\ncaused an error\nERROR: Unable to convert left operator '-' from type 'unknown'.\nbut currently inserts a constant -1234567890.12346.\nand\nINSERT into t (numdata) values (1234567890.1234567);\ninserted a numeric constant 1234567890.1234567 precisely\nbut currently inserts a constant 1234567890.12346.\n\n> > Shouldn't decimal constants be distinguished from real constants ?\n> \n> Why? I don't see any particularly good reason for distinguishing\n> 1234567890.1234567890 from 1.2345678901234567890e9. (numeric_in\n> does accept both these days, BTW.)\n>\n\nAccording to a book about SQL92 which I have,SQL92 seems to\nrecommend it.\n\nHiroshi Inoue\[email protected] \n",
"msg_date": "Tue, 22 Feb 2000 12:57:41 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Numeric with '-' "
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On 2000-02-21, Tom Lane mentioned:\n>> I've been ranting about this in a recent pghackers thread ;-).\n>> The lexer shouldn't have to commit to a conversion to float8\n>> in order to report that a token looks like a numeric literal.\n\n> Has the ranting resulted in any idea yet? ISTM that keeping a non-integer\n> number as a string all the way to the executor shouldn't hurt too much.\n\nWell, actually it's sufficient to keep it as a string until the type\nanalyzer has figured out what data type it's supposed to be; then you\ncan feed it to that type's typinput conversion routine. After that\nit's not the parser's problem anymore ;-).\n\nI committed changes to do exactly that this morning. Thomas had been\nsaying that integer literals should be kept as strings too, but I don't\nbelieve that and didn't do it.\n\n> peter=> select 'a' like 'a';\n> ERROR: Unable to identify an operator '~~' for types 'unknown' and 'unknown'\n> You will have to retype this query using an explicit cast\n\n> Is there a good reason that a character literal is unknown? I'm sure the\n> reasons lie somewhere in the extensible type system, but if I wanted it to\n> be something else explicitly then I would have written DATE 'yesterday'.\n\nRemember that constants of random types like \"line segment\" have to\nstart out as character literals (unless you want to try to pass them\nthrough the lexer and parser undamaged without quotes). So untyped\ncharacter literal has to be a pretty generic thing. It might be a good\nidea for the type analyzer to try again with the assumption that the\nliteral is supposed to be type text, if it fails to find an\ninterpretation without that assumption --- but I think this is a\nticklish change that could have unwanted consequences. It'd need\nsome close examination.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 21 Feb 2000 23:05:49 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Numeric with '-' "
},
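The deferred-conversion idea Tom describes can be pictured roughly as below. The helper is illustrative only -- the real parser entry points differ -- but it shows the literal riding along as text until analysis settles on a type, at which point the chosen type's input function (e.g. numeric_in or float8in) is applied:

    /* Illustrative only: carry the literal as text, convert later. */
    Const *
    make_untyped_const(const char *token)
    {
        Const *con = makeNode(Const);

        con->consttype   = UNKNOWNOID;  /* no commitment yet */
        con->constvalue  = PointerGetDatum(pstrdup(token));
        con->constisnull = false;
        return con;
    }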
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n>> They didn't give any such warning before, either. I doubt I've\n>> made anything worse.\n\n> Before your change\n> INSERT into t (numdata) values (-1234567890.1234567);\n> caused an error\n> ERROR: Unable to convert left operator '-' from type 'unknown'.\n> but currently inserts a constant -1234567890.12346.\n\nYipes, you are right. I think that that sort of construct should\nresult in the value not getting converted at all until the parser\nknows that it must be converted to the destination column's type.\nLet me see if I can find out what's going wrong. If this doesn't\nseem to be fixable, I may have to back off the patch...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 21 Feb 2000 23:53:09 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Numeric with '-' "
},
{
"msg_contents": "On 2000-02-21, Tom Lane mentioned:\n\n> > Is there a good reason that a character literal is unknown? I'm sure the\n> > reasons lie somewhere in the extensible type system, but if I wanted it to\n> > be something else explicitly then I would have written DATE 'yesterday'.\n> \n> Remember that constants of random types like \"line segment\" have to\n> start out as character literals\n\nA constant of type line segment looks like this:\nLSEG 'whatever'\nThis is an obvious extension of the standard. (Also note that this is\n*not* a cast.)\n\nThe semantics of SQL throughout are that if I write something of the form\nquote-characters-quote, it's a character literal. No questions asked. Now\nif I pass a character literal to a datetimeish function, it's on obvious\ncast. If I pass it to a geometry function, it's an obvious cast. If I pass\nit to a generic function, it's a character string.\n\nIt seems that for the benefit of a small crowd -- those actually using\ngeometric types and being too lazy to type their literals in the above\nmanner -- we are creating all sorts of problems for two much larger\ncrowds: those trying to use their databases in an normal manner with\nstrings and numbers, and those trying develop for this database that never\nknow what type a literal is, when it should be obvious. I am definitely\nfor a close examination of this one.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Wed, 23 Feb 2000 02:21:04 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Numeric with '-' "
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n>> Remember that constants of random types like \"line segment\" have to\n>> start out as character literals\n\n> A constant of type line segment looks like this:\n> LSEG 'whatever'\n> This is an obvious extension of the standard. (Also note that this is\n> *not* a cast.)\n\nYes it is. On what grounds would you assert that it isn't? Certainly\nnot on the basis of what comes out of gram.y; all three of these\nproduce exactly the same parsetree:\n\tLSEG 'whatever'\n\t'whatever'::LSEG\n\tCAST('whatever' AS LSEG)\n\n> It seems that for the benefit of a small crowd -- those actually using\n> geometric types and being too lazy to type their literals in the above\n> manner -- we are creating all sorts of problems for two much larger\n> crowds\n\nAu contraire. The real issue here is how to decide which numeric type\nto use for an undecorated but numeric-looking literal token. I don't\nthink that's a non-mainstream problem, and I definitely don't think\nthat telling the odd-datatype crowd to take a hike will help fix it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 23 Feb 2000 00:12:31 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Numeric with '-' "
},
{
"msg_contents": "On Wed, 23 Feb 2000, Tom Lane wrote:\n\n> Au contraire. The real issue here is how to decide which numeric type\n> to use for an undecorated but numeric-looking literal token. I don't\n\nYou lost me. How does that relate to the character types? You are not\nsuggesting that '123.456' should be considered a number? It seems pretty\nclear to me that anything of the form [0-9]+ is an integer, something with\nan 'e' in it is a float, and something with only digits and decimal points\nis numeric. If passing around an 'numeric' object is too expensive, keep\nit as a string for a while longer. As you did.\n\n> think that's a non-mainstream problem, and I definitely don't think\n> that telling the odd-datatype crowd to take a hike will help fix it.\n\nIt remains to be shown how big that \"hike\", if at all existent, would be\n...\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Wed, 23 Feb 2000 14:00:16 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Numeric with '-' "
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> ... It seems pretty\n> clear to me that anything of the form [0-9]+ is an integer, something with\n> an 'e' in it is a float, and something with only digits and decimal points\n> is numeric.\n\nSo 123456789012345678901234567890 is an integer? Not on the machines\nI use. Nor do I want to restrict 1234567890.1234567890e20 or 1e500\nto be considered always and only floats; the first will drop precision\nand the second will overflow, whereas they are both perfectly useful\nas numeric.\n\nWhat I'd originally hoped was that we could postpone determining the\ntype of a numeric literal until we saw where it was going to be used,\nas in Hiroshi's INSERT into t (numdata) values (-1234567890.1234567);\nexample. Unfortunately that doesn't work in some other fairly\nobvious cases, like SELECT 1.2 + 3.4; you just plain don't have any\nother cues except the sizes and precisions of the constants to resolve\nthe type here.\n\nSo the original code was right, I think, to the extent that it looked\nat the precision and size of the constant to select a default type\nfor the constant. But it wasn't right to lose the numeric-ness of the\nconstant altogether when it doesn't fit in a double. What I'm testing\nnow is code that generates either INT4, FLOAT8, or NUMERIC depending\non precision and size --- but never UNKNOWN, which is what you'd get\nbefore with more than 17 digits.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 23 Feb 2000 11:18:43 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Numeric with '-' "
},
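The default-type rule Tom describes -- INT4, FLOAT8, or NUMERIC depending on precision and size, but never UNKNOWN -- amounts to something like the sketch below. The helper names fits_in_int32() and count_digits(), and the exact 17-digit cutoff, are assumptions for illustration:

    /* Sketch of choosing a default type for an undecorated literal. */
    Oid
    default_type_for_literal(const char *tok, bool is_integral)
    {
        if (is_integral && fits_in_int32(tok))  /* hypothetical helper */
            return INT4OID;
        if (count_digits(tok) <= 17)            /* round-trips in a double */
            return FLOAT8OID;
        return NUMERICOID;                      /* keep the full precision */
    }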
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Tom Lane\n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> >> They didn't give any such warning before, either. I doubt I've\n> >> made anything worse.\n> \n> > Before your change\n> > INSERT into t (numdata) values (-1234567890.1234567);\n> > caused an error\n> > ERROR: Unable to convert left operator '-' from type 'unknown'.\n> > but currently inserts a constant -1234567890.12346.\n> \n> Yipes, you are right. I think that that sort of construct should\n> result in the value not getting converted at all until the parser\n> knows that it must be converted to the destination column's type.\n> Let me see if I can find out what's going wrong. If this doesn't\n> seem to be fixable, I may have to back off the patch...\n>\n\nThis seems to be fixed.\nThanks a lot.\n\nHowever there still remains the following case.\nselect * from num_data where val = 1.1;\nERROR: Unable to identify an operator '=' for types 'numeric' and 'float8'\n You will have to retype this query using an explicit cast\n\nSQL standard seems to say 1.1 is a numeric constant and\nit's not good to treat a numeric value as an aproximate value.\nFor example,what do you think about the following.\n\nselect 11111111111111 * 1.1;\n ?column? \n------------------\n 12222222222222.1\n(1 row)\n\nselect 111111111111111 * 1.1;\n ?column? \n-----------------\n 122222222222222\n(1 row)\n\nselect 100000000 + .000001;\n ?column? \n------------------\n 100000000.000001\n(1 row)\n\nselect 100000000 + .0000001;\n ?column? \n-----------\n 100000000\n(1 row)\n\nselect 100000000.0000001;\n ?column? \n-------------------\n 100000000.0000001\n(1 row)\n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Sun, 27 Feb 2000 08:09:00 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Numeric with '-' "
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> However there still remains the following case.\n> select * from num_data where val = 1.1;\n> ERROR: Unable to identify an operator '=' for types 'numeric' and 'float8'\n> You will have to retype this query using an explicit cast\n\nYeah. I'm not sure that that can be fixed without a major redesign of\nthe type-conversion hierarchy, which is not something I care to try\nduring beta ;-).\n\nIn fact, it's arguable that the system is doing the right thing by\nforcing the user to specify whether he wants a NUMERIC or FLOAT8\ncomparison to be used. There are other examples where we *must*\nrefuse to decide. For example:\n\nregression=# create table ff (f1 char(8), f2 varchar(20));\nCREATE\nregression=# select * from ff where f1 = f2;\nERROR: Unable to identify an operator '=' for types 'bpchar' and 'varchar'\n You will have to retype this query using an explicit cast\n\nThis is absolutely the right thing, because bpchar and varchar do not\nhave the same comparison semantics (trailing blanks are significant in\none case and not in the other), so the user has to tell us which he\nwants.\n\n> SQL standard seems to say 1.1 is a numeric constant and\n> it's not good to treat a numeric value as an aproximate value.\n> For example,what do you think about the following.\n\nThat argument is untenable. NUMERIC has limitations just as bad as\nFLOAT's; they're merely different. For example:\n\nregression=# select 1.0/300000.0;\n ?column?\n----------------------\n 3.33333333333333e-06\n(1 row)\n\nregression=# select 1.0::numeric / 300000.0::numeric;\n ?column?\n--------------\n 0.0000033333\n(1 row)\n\nNotice the completely unacceptable loss of precision ;-) in the second\ncase.\n\nWhen you look at simple cases like \"var = constant\" it seems easy to\nsay that the system should just do the right thing, but in more complex\ncases it's not always easy to know what the right thing is.\n\nI think what you are proposing is to change the system's default\nassumption about decimal constants from float8 to numeric. I think\nthat's a very risky change that is likely to break existing applications\n(and if we throw in automatic conversions, it'll break 'em silently).\nI'm not eager to do that.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 26 Feb 2000 18:46:32 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Numeric with '-' "
}
]
[
{
"msg_contents": "\nNot having heard anything otherwise, assume we can go Beta today, as\nplanned?\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 21 Feb 2000 09:45:35 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Beta for 4:30AST ... ?"
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> Not having heard anything otherwise, assume we can go Beta today, as\n> planned?\n\nMight as well --- we have open issues, but there'll always be open\nissues. I don't think anyone is still hoping to shoehorn new features\ninto 7.0, just bug fixes. (Wait, does a unary-minus operator for\nnumeric count as a new feature ;-) ?)\n\nI do suggest that we had better commit the current state of the rules\nregress test output as the expected output, so that we don't have a\nlot of confused beta-testers. An actual fix will have to wait for\nThomas to return, but I don't think we want to put off going to beta\njust because he's on vacation.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 21 Feb 2000 10:41:53 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ? "
},
{
"msg_contents": "At 10:41 AM 2/21/00 -0500, Tom Lane wrote:\n>The Hermit Hacker <[email protected]> writes:\n>> Not having heard anything otherwise, assume we can go Beta today, as\n>> planned?\n>\n>Might as well --- we have open issues, but there'll always be open\n>issues. I don't think anyone is still hoping to shoehorn new features\n>into 7.0, just bug fixes. (Wait, does a unary-minus operator for\n>numeric count as a new feature ;-) ?)\n\nI've been using a snapshot taken a couple of days ago, which includes\nthe new datetime stuff and outer join syntax stuff. I've loaded \nseveral thousand lines of data model which has been ported from Oracle\ninto it, and have been testing our port of the Ars Digita web toolkit\nextensively on it. This consists of literally thousands of queries\nagainst the aforementioned data model, which includes nearly 500 foreign\nkeys including some \"on delete/update cascade\" and \"set null\" clauses.\n\nIt is all working GREAT. Better than 6.5 (in which case the referential\nactions have to be removed anyway, of course), in fact. I've had no\nproblems with the new datetime stuff, which was of particular concern\nto me because the toolkit is full of queries to grab information from\nthe last week, since your last visit to the website, to update robot\ndetection tables, to send e-mail alerts to folks who request them daily\nor weekly (similar to majordomo digests), to update the database's view\nof the site's static content, to build reports on usage of the site,\netc etc. Certain bug fixes really make the port work cleaner, the\n\"group by\" fix (to return no rows if no groups exist) in particular.\n\nEven pg_dump works, though I had to modify a couple of views in order\nto get them reload correctly. If I sound like I was a bit nervous of\npg_dump it has to do with those nearly 500 foreign keys I mentioned.\n\nAnyway, from my POV it sure feels like it should be a very solid beta.\nI've not run across anything causing me to want to switch back to 6.5.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 21 Feb 2000 09:28:23 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ? "
},
{
"msg_contents": "> It is all working GREAT. Better than 6.5 (in which case the referential\n> actions have to be removed anyway, of course), in fact. I've had no\n> problems with the new datetime stuff, which was of particular concern\n> to me because the toolkit is full of queries to grab information from\n> the last week, since your last visit to the website, to update robot\n> detection tables, to send e-mail alerts to folks who request them daily\n> or weekly (similar to majordomo digests), to update the database's view\n> of the site's static content, to build reports on usage of the site,\n> etc etc. Certain bug fixes really make the port work cleaner, the\n> \"group by\" fix (to return no rows if no groups exist) in particular.\n> \n> Even pg_dump works, though I had to modify a couple of views in order\n> to get them reload correctly. If I sound like I was a bit nervous of\n> pg_dump it has to do with those nearly 500 foreign keys I mentioned.\n> \n> Anyway, from my POV it sure feels like it should be a very solid beta.\n> I've not run across anything causing me to want to switch back to 6.5.\n\nYou can thank our may testers, and Tom Lane, who is fixing things like\ncrazy. Tom has fixed more PostgreSQL bugs than anyone else in the\nhistory of our development.\n\nThough bug fixing is not a glorious job, without reliability, we are\nuseless.\n\nThanks, Tom.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 21 Feb 2000 12:57:44 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ?"
},
{
"msg_contents": "At 12:57 PM 2/21/00 -0500, Bruce Momjian wrote:\n\n>You can thank our may testers, and Tom Lane, who is fixing things like\n>crazy. Tom has fixed more PostgreSQL bugs than anyone else in the\n>history of our development.\n>\n>Though bug fixing is not a glorious job, without reliability, we are\n>useless.\n>\n>Thanks, Tom.\n\nYes, I've noticed that Tom takes on bugs like a pitbull takes on\nmail carriers, and I do appreciate it.\n\nI'm not allergic to fixing bugs, and as I learn more about PG \nhope to dig into doing so with gusto.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 21 Feb 2000 10:08:56 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ?"
},
{
"msg_contents": "Don Baccus wrote:\n> \n> Even pg_dump works, though I had to modify a couple of views in order\n> to get them reload correctly. \n\nDon, could you elaborate on what you had to do to make your views\nreload correctly?\n\nCheers,\nEd Loehr\n",
"msg_date": "Mon, 21 Feb 2000 12:11:16 -0600",
"msg_from": "Ed Loehr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ?"
},
{
"msg_contents": "At 12:11 PM 2/21/00 -0600, Ed Loehr wrote:\n>Don Baccus wrote:\n>> \n>> Even pg_dump works, though I had to modify a couple of views in order\n>> to get them reload correctly. \n>\n>Don, could you elaborate on what you had to do to make your views\n>reload correctly?\n\nGood timing - I was about to post on this subject anyway.\n\nI was able to fix my views by changing:\n\ncreate view foo as select * from bar;\n\nto:\n\n...select * from bar bar;\n\nIn other words, an explicit declaration of the range table name (is\nthat the right term?P my mind's numb from porting queries all weekend)\nleads to a rule that will reload.\n\nI figured this out because there are some fairly complex views in\nthis datamodel, which use explicit names to avoid ambiguous column\nreferences.\n\nThe standard actually says that a from clause like \"from bar\" \nimplicitly declares \"bar\" for you, i.e. is exactly equivalent\nto \"from bar bar\". If Postgres name scoping - which I know is\nnot standard-compliant in the JOIN syntax case - is close enough\nso that a transformation of \"from bar\" to \"from bar bar\" could\nbe done in the parser without breaking existing code, then a\nlot more views could be successfully be dumped and reloaded.\n\nWould all views dump/reload, or are there other problems I don't know\nabout? I'm not in a position to judge, I've been too deeply embedded\nin getting this toolkit ready for release (our first will be Wednesday)\nto worry about the general case. However, I do know that doing the\ntransformation by hand in the datamodel source has fixed the problem\nfor me.\n\nDoes anyone know if there are other problems?\n\nEven if there are, a simple transformation such as I describe would\nhelp - IF it didn't break existing code. If it would break existing\ncode, then it is due to non-compliance with the standard so perhaps\nwouldn't be such a terrible thing, either. I'm not really in a\nposition to judge.\n\nWhat do folks think? \n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 21 Feb 2000 10:27:15 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ?"
},
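A minimal sketch of the workaround Don describes, using a hypothetical table bar; the only change is spelling out the range table name explicitly in the FROM clause:

    create table bar (a int4, b int4);

    -- this form produced a view rule that failed to reload from a pg_dump:
    create view foo as select * from bar;

    -- naming the range table explicitly made the dumped rule reloadable:
    create view foo2 as select * from bar bar;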
{
"msg_contents": "> At 12:57 PM 2/21/00 -0500, Bruce Momjian wrote:\n> \n> >You can thank our may testers, and Tom Lane, who is fixing things like\n> >crazy. Tom has fixed more PostgreSQL bugs than anyone else in the\n> >history of our development.\n> >\n> >Though bug fixing is not a glorious job, without reliability, we are\n> >useless.\n> >\n> >Thanks, Tom.\n> \n> Yes, I've noticed that Tom takes on bugs like a pitbull takes on\n> mail carriers, and I do appreciate it.\n\nYes, and we badly needed someone like that before Tom came along. I\nused to do it, but in a very pitiful way. Mostly I just added them to\nthe TODO list.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 21 Feb 2000 13:59:07 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ?"
},
{
"msg_contents": "Tom Lane <[email protected]> writes:\n> The Hermit Hacker <[email protected]> writes:\n>> Not having heard anything otherwise, assume we can go Beta today, as\n>> planned?\n\n> Might as well --- we have open issues, but there'll always be open\n> issues. I don't think anyone is still hoping to shoehorn new features\n> into 7.0, just bug fixes.\n\nOK, I'm done shoehorning in last-minute bug fixes, too. You may fire\nwhen ready, as far as I'm concerned.\n\nI did commit current rules output per my previous note. Regression\ntests pass 100% clean on my primary box; haven't tried other systems\nyet.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 21 Feb 2000 14:06:13 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ? "
},
{
"msg_contents": "> At 12:11 PM 2/21/00 -0600, Ed Loehr wrote:\n> >Don Baccus wrote:\n> >> \n> >> Even pg_dump works, though I had to modify a couple of views in order\n> >> to get them reload correctly. \n> >\n> >Don, could you elaborate on what you had to do to make your views\n> >reload correctly?\n> \n> Good timing - I was about to post on this subject anyway.\n> \n> I was able to fix my views by changing:\n> \n> create view foo as select * from bar;\n> \n> to:\n> \n> ...select * from bar bar;\n> \n> In other words, an explicit declaration of the range table name (is\n\nYes, right name.\n\nI am totally confused why \"from bar bar\" is different from \"bar\".\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 21 Feb 2000 14:07:26 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ?"
},
{
"msg_contents": "> Tom Lane <[email protected]> writes:\n> > The Hermit Hacker <[email protected]> writes:\n> >> Not having heard anything otherwise, assume we can go Beta today, as\n> >> planned?\n> \n> > Might as well --- we have open issues, but there'll always be open\n> > issues. I don't think anyone is still hoping to shoehorn new features\n> > into 7.0, just bug fixes.\n> \n> OK, I'm done shoehorning in last-minute bug fixes, too. You may fire\n> when ready, as far as I'm concerned.\n\nLast call for partially implemented changes that you want to get in\nbefore feature freeze... :-)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 21 Feb 2000 14:19:37 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ?"
},
{
"msg_contents": "Don Baccus <[email protected]> writes:\n> I was able to fix my views by changing:\n> create view foo as select * from bar;\n> to:\n> ...select * from bar bar;\n\nHmm, I think I see it.\n\ncreate view foo as select * from int8_tbl;\n\n$ pg_dump -t foo regression\n\\connect - postgres\nCREATE TABLE \"foo\" (\n \"q1\" int8,\n \"q2\" int8\n);\nCREATE RULE \"_RETfoo\" AS ON SELECT TO foo DO INSTEAD SELECT int8_tbl.q1,\nint8_tbl.q2 FROM int8_tbl (q1, q2);\n\nIIRC, Thomas explained that the ANSI syntax says you *must* supply a\ntable alias if you are going to supply any column aliases in FROM.\nThe regurgitated rule violates that.\n\nI guess this is another manifestation of the issue about the system\nshoving in column \"aliases\" that the user never typed. pg_dump is\nprobably repeating what the backend told it. Think we'll have to\nleave it unfixed till Thomas gets back.\n\nIt's also a reminder that the regress tests don't exercise pg_dump :-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 21 Feb 2000 14:27:08 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ? "
},
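To illustrate the ANSI rule Tom cites, a small sketch against a hypothetical table t: a column alias list in FROM is legal only when a table alias accompanies it.

    -- illegal per ANSI (the shape of the regurgitated rule above):
    SELECT x FROM t (x);

    -- legal: the table alias s carries the column alias list:
    SELECT x FROM t s (x);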
{
"msg_contents": "At 02:07 PM 2/21/00 -0500, Bruce Momjian wrote:\n\n>\n>I am totally confused why \"from bar bar\" is different from \"bar\".\n\nIn the rule created for the view, the from clause gets generated\nlike this:\n\n\"from foo (list of columns), ...\"\n\nor - if an explicit range table name is given\n\n\"from foo foo (list of columns), ...\"\n\nThe parser doesn't like the first form, is googoo-eyed over\nthe second and takes it without error. I'm too busy to look at Date\nor the SQL standard at the moment, but the list of columns is a non-standard\nPG-ism anyway, isn't it? Something lingering from pre-SQL days?\n\nIs the list of columns even needed? Is this some inheritance-related\nthing?\n\nAs I mentioned in my earlier note, I was too swamped by my porting\neffort to dig into this at all, and between work, the web toolkit,\nand work on http://birdnotes.net won't have time to explore this\nin the next couple of weeks. I did take enough time to see that\nthe rule is built on the parse tree for the underlying select which\nis why the hack of adding the range table name explicitly while\nparsing if it's not mentioned came to mind.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 21 Feb 2000 11:28:04 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ?"
},
{
"msg_contents": "At 02:27 PM 2/21/00 -0500, Tom Lane wrote:\n>Don Baccus <[email protected]> writes:\n\n>IIRC, Thomas explained that the ANSI syntax says you *must* supply a\n>table alias if you are going to supply any column aliases in FROM.\n>The regurgitated rule violates that.\n\nAhhh...column aliases...these ARE standard SQL, then! I'll be...\n\nI need to spend a couple of days studying Date thorougly someday, rather\nthan just cherry-picking when specific questions come to mind.\n\n>I guess this is another manifestation of the issue about the system\n>shoving in column \"aliases\" that the user never typed. \n\nYes.\n\n> pg_dump is probably repeating what the backend told it.\n\nMy fifteen minute sprint through the code led me to believe this\nis true. \n\n> Think we'll have to\n>leave it unfixed till Thomas gets back.\n\nThat would be plenty of time to get it in for the real 7.0 release.\n\nIf indeed PG would survive the insertion of the table name as a\ntable alias when none is given - the standard semantics, in other\nwords - it would be very simple to do. I'm just a little queasy\nabout possible side-effects.\n\n>It's also a reminder that the regress tests don't exercise pg_dump :-(\n\nOhhh...that's not good.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 21 Feb 2000 11:38:24 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ? "
},
{
"msg_contents": "> At 02:07 PM 2/21/00 -0500, Bruce Momjian wrote:\n> \n> >\n> >I am totally confused why \"from bar bar\" is different from \"bar\".\n> \n> In the rule created for the view, the from clause gets generated\n> like this:\n> \n> \"from foo (list of columns), ...\"\n> \n> or - if an explicit range table name is given\n> \n> \"from foo foo (list of columns), ...\"\n\nGot it:\n\n\ttest=> select * from pg_class pg_class (relname);\n\nWow, that is some strange syntax, and I didn't know we even allowed\nthat.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 21 Feb 2000 14:39:08 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ?"
},
{
"msg_contents": "> You can thank our may testers, and Tom Lane, who is fixing things like\n> crazy. Tom has fixed more PostgreSQL bugs than anyone else in the\n> history of our development.\n\nIt's very impressive. I've noticed many times that someone mentions a bug, and\nsometimes hours later Tom has cornered the problem. Maybe one or two questions\nabout how to go about it, and then the hole is plugged.\n",
"msg_date": "Mon, 21 Feb 2000 20:58:44 +0100",
"msg_from": "Kaare Rasmussen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ?"
},
{
"msg_contents": "Having been away for some time I'm very anxious to see that there's a 7.0\nrelease coming very soon. I extracted the TODO list from the CVS (latest update\nFebruary 9). The only really really big issue as I see it is referential\nintegrity. This is big, I admit but why going to 7.0 for this? Or is it because\nit's long overdue (MSVC and stuff)?\n\nThere are other things that must have taken a lot of work, only it's not\nmainstream the same way referential integrity is (PL/Perl and more). I had\nhoped to find outer joins, but look forward to the next release if it will be\nthere.\n\nMaybe I missed something in the TODO list or in the fixed list, but I couldn't\nfind VIEWs with UNIONs, which I understand would be solved by a rewrite of the\nrules system.\n",
"msg_date": "Mon, 21 Feb 2000 21:01:26 +0100",
"msg_from": "Kaare Rasmussen <[email protected]>",
"msg_from_op": false,
"msg_subject": "TODO list / why 7.0 ?"
},
{
"msg_contents": "Don Baccus <[email protected]> writes:\n>> Think we'll have to\n>> leave it unfixed till Thomas gets back.\n\n> That would be plenty of time to get it in for the real 7.0 release.\n\nI don't like shipping betas with broken pg_dump; that makes life\nunreasonably difficult for beta testers, if we have to force another\ninitdb before release. So I put in a quick hack solution: don't print\nthe column alias list at all unless there is a table alias. This makes\nthe rule's FROM clause conform to ANSI syntax. If you actually did\nwrite\n\tcreate view foo as SELECT alias FROM table table (alias);\nthen it will dump as\n\tcreate view foo as SELECT table.realcolname AS alias FROM table;\nbut there's no harm done. Better solution needed but I'll let Thomas\nprovide it.\n\nAnd now, it's 4:30 PM AST and we are outta here ... right Marc?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 21 Feb 2000 15:27:12 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ? "
},
{
"msg_contents": "At 03:27 PM 2/21/00 -0500, Tom Lane wrote:\n\n>I don't like shipping betas with broken pg_dump; that makes life\n>unreasonably difficult for beta testers, if we have to force another\n>initdb before release. So I put in a quick hack solution: don't print\n>the column alias list at all unless there is a table alias. This makes\n>the rule's FROM clause conform to ANSI syntax. If you actually did\n>write\n>\tcreate view foo as SELECT alias FROM table table (alias);\n>then it will dump as\n>\tcreate view foo as SELECT table.realcolname AS alias FROM table;\n>but there's no harm done. Better solution needed but I'll let Thomas\n>provide it.\n\nEXCELLENT!\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 21 Feb 2000 12:28:28 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ? "
},
{
"msg_contents": "\nLong Overdue ;)\n\n\nOn Mon, 21 Feb 2000, Kaare Rasmussen wrote:\n\n> Having been away for some time I'm very anxious to see that there's a 7.0\n> release coming very soon. I extracted the TODO list from the CVS (latest update\n> February 9). The only really really big issue as I see it is referential\n> integrity. This is big, I admit but why going to 7.0 for this? Or is it because\n> it's long overdue (MSVC and stuff)?\n> \n> There are other things that must have taken a lot of work, only it's not\n> mainstream the same way referential integrity is (PL/Perl and more). I had\n> hoped to find outer joins, but look forward to the next release if it will be\n> there.\n> \n> Maybe I missed something in the TODO list or in the fixed list, but I couldn't\n> find VIEWs with UNIONs, which I understand would be solved by a rewrite of the\n> rules system.\n> \n> ************\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 21 Feb 2000 16:40:59 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] TODO list / why 7.0 ?"
},
{
"msg_contents": "Kaare Rasmussen <[email protected]> writes:\n> This is big, I admit but why going to 7.0 for this? Or is it because\n> it's long overdue (MSVC and stuff)?\n\nA number of people thought 6.5 should have been called 7.0 because of\nMVCC. A number of other people thought that this release should be 6.6,\nand the next one (which should have outer joins and much better VIEWs\nthanks to a redesigned querytree representation) should be 7.0.\n\nI think it's kind of a compromise ;-).\n\nOTOH, if you look less at bullet points on a feature list and more at\nreliability and quality of implementation, there's plenty of material\nto argue that this indeed deserves to be 7.0. I think we have made\na quantum jump in our ability to understand and improve the Berkeley\ncode over the past year --- at least I have, maybe I shouldn't speak\nfor the other developers. There have been some pretty significant\nimprovements under-the-hood, and I think those are going to translate\ndirectly into a more reliable system.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 21 Feb 2000 17:21:15 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TODO list / why 7.0 ? "
},
{
"msg_contents": "\nWorking on the Release Announcement now ... Bruce, how accurate is the\ncurrent TODO list? If I go through it looking for all items marked as\n'-', I come up with the following list. Is it missing anything? I know\nnot *everything* has to be listed, so I'm more afraid of listing something\nthat shouldn't then not listing enough ...\n\nThe beta1.tar.gz snapshot has been created ... I'll put out an\nannouncement later tonight once I've heard back on this list, which also\ngives some of the mirror sites a chance to sync up, and Vince a chance to\nupdate the web site...\n\n=============================================\nRELIABILITY\n\nRESOURCES\n\n * -Disallow inherited columns with the same name as new columns\n * -Elog() does not free all its memory\n * -spinlock stuck problem when elog(FATAL) and elog(ERROR) inside bufmgr\n * -Recover or force failure when disk space is exhausted(Hiroshi)\n\nPARSER\n\n * -INSERT INTO ... SELECT with AS columns matching result columns problem\n * -Select a[1] FROM test fails, it needs test.a[1](Tom)\n * -Array index references without table name cause problems [array](Tom)\n * -INSERT ... SELECT ... GROUP BY groups by target columns not source\n columns(Tom)\n * -CREATE TABLE test (a char(5) DEFAULT text '', b int4) fails on\n INSERT(Tom)\n * -UNION with LIMIT fails\n * -CREATE TABLE x AS SELECT 1 UNION SELECT 2 fails\n * -CREATE TABLE test(col char(2) DEFAULT user) fails in length\n restriction\n * -mismatched types in CREATE TABLE ... DEFAULT causes problems [default]\n * -SELECT ... UNION ... ORDER BY fails when sort expr not in result list,\n ORDER BY is applied only to the first SELECT\n * -select * from pg_class where oid in (0,-1)\n * -prevent primary key that exceeds max index columns [primary]\n * -SELECT COUNT('asdf') FROM pg_class WHERE oid=12 crashes\n * -require SELECT DISTINCT target list to have all ORDER BY columns\n * -When using aggregates + GROUP BY, no rows in should yield no rows\n out(Tom)\n * -Allow HAVING to use comparisons that have no aggregates(Tom)\n * -Allow COUNT(DISTINCT col))(TOm)\n\nVIEWS\n\n * -Views with spaces in view name fail when referenced\n\nMISC\n\n * -User who can create databases can modify pg_database table(Peter E)\n * -Fix btree to give a useful elog when key > 1/2 (page - overhead)(Tom)\n * -pg_dump should preserve primary key information\n * -database names with spaces fail\n * -insert of 0.0 into DECIMAL(4,4) field fails(Tom)\n * -* Interlock to prevent DROP DATABASE on a database with running\n backendsInterlock to prevent DROP DATABASE on a database with running\n backends\n\nENHANCEMENTS\n\nURGENT\n\n * -Add referential integrity(Jan)[primary]\n * -Eliminate limits on query length\n * -Fix memory leak for aggregates(Tom)\n\nADMIN\n\n * -Better interface for adding to pg_group(Peter E)\n * -Generate postmaster pid file and remove flock/fcntl lock\n code[flock](Tatsuo)\n\nTYPES\n\n * -Add BIT, BIT VARYING\n * -Allow pg_descriptions when creating tables\n * -Allow pg_descriptions when creating types, columns, and functions\n * -Allow LOCALE to use indexes in regular expression searches(Tom)\n * -Allow array on int8[](Thomas)\n * -Add index on NUMERIC/DECIMAL type(Jan)\n * -Make Absolutetime/Relativetime int4 because time_t can be int8 on some\n ports\n * -Make type equivalency apply to aggregates\n\nINDEXES\n\n * -Permissions on indexes, prevent them(Peter E)\n * -Allow indexing of LIKE with localle character sets\n * -Allow indexing of more than eight columns\n\nCOMMANDS\n\n * -Add ALTER 
TABLE DROP/ALTER COLUMN feature(Peter E)\n * -Move LIKE index optimization handling to the optimizer(Tom)\n\nCLIENTS\n\n * -Allow flag to control COPY input/output of NULLs\n * -Allow psql \\copy to allow delimiters\n * -Add a function to return the last inserted oid, for use in psql\n scripts (Peter E)\n * -Allow psql to print nulls as distinct from \"\" [null]\n\nMISC\n\n * -Certain indexes will not shrink, i.e. oid indexes with many\n inserts(Vadim)\n * -Allow WHERE restriction on ctid(Hiroshi)\n * -Allow PQrequestCancel() to terminate when in waiting-for-lock state\n * -Allow subqueries in target list(Tom)\n * -Document/trigger/rule so changes to pgshadow recreate pgpwd\n [pg_shadow]\n * -Overhaul mdmgr/smgr to fix double unlinking and double opens, cleanup\n * -Add PL/Perl(Mark Hollomon)\n * -Add option for postgres user have a password by default(Peter E)\n\nPERFORMANCE\n\nFSYNC\n\n * -Prevent fsync in SELECT-only queries(Vadim)\n\nINDEXES\n\n * -Convert function(constant) into a constant for index use(Bernard\n Frankpitt)\n * -Make index creation use psort code, because it is now faster(Tom)\n * -Allow creation of sort temp tables > 1 Gig\n * -Create more system table indexes for faster cache lookups\n * -fix indexscan() so it does leak memory by not requiring caller to\n free(Tom)\n * -Improve btbinsrch() to handle equal keys better, remove\n btfirsteq()(Tom)\n * -Allow optimizer to prefer plans that match ORDER BY(Tom)\n\nCACHE\n\n * -elog() flushes cache, try invalidating just entries from current xact,\n perhaps using invalidation cache\n\nMISC\n\n * -Fix memory exhaustion when using many OR's [cnfify](Tom)\n * -Process const = const parts of OR clause in separate pass(Bernard\n Frankpitt)\n * -fix memory leak in cache code when non-existant table is referenced In\n WHERE tab1.x=3 AND tab1.x=tab2.y, add tab2.y=3\n * -pass atttypmod through parser in more cases [atttypmod]\n * -remove duplicate type in/out functions for disk and net\n\nSOURCE CODE\n\n * -Add needed includes and removed unneeded include files(Bruce)\n * -Make configure --enable-debug add -g on compile line\n * -Pre-generate lex and yacc output so not required for install\n\n==================================================\nOn Mon, 21 Feb 2000, Tom Lane wrote:\n\n> Don Baccus <[email protected]> writes:\n> >> Think we'll have to\n> >> leave it unfixed till Thomas gets back.\n> \n> > That would be plenty of time to get it in for the real 7.0 release.\n> \n> I don't like shipping betas with broken pg_dump; that makes life\n> unreasonably difficult for beta testers, if we have to force another\n> initdb before release. So I put in a quick hack solution: don't print\n> the column alias list at all unless there is a table alias. This makes\n> the rule's FROM clause conform to ANSI syntax. If you actually did\n> write\n> \tcreate view foo as SELECT alias FROM table table (alias);\n> then it will dump as\n> \tcreate view foo as SELECT table.realcolname AS alias FROM table;\n> but there's no harm done. Better solution needed but I'll let Thomas\n> provide it.\n> \n> And now, it's 4:30 PM AST and we are outta here ... right Marc?\n> \n> \t\t\tregards, tom lane\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 21 Feb 2000 19:57:56 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ? "
},
{
"msg_contents": "At 07:57 PM 2/21/00 -0400, The Hermit Hacker wrote:\n\n> * -Add referential integrity(Jan)[primary]\n\nThis is only PARTIALLY complete:\n\nMATCH FULL and MATCH <unspecified> foreign keys and their related\nreferential actions are complete. MATCH PARTIAL isn't in - I'll be\ndoing that for 7.1.\n\nIt doesn't check that the columns referenced in a foreign key\nform a primary key or are contrained by UNIQUE in the referenced\ntable. This will be checked in 7.1, not sure who will do it (who\never gets to it first, probably).\n\nThose are the two major user-visible loose ends with this feature.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 21 Feb 2000 16:34:48 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ? "
},
{
"msg_contents": "Should be accurate. I usually make a release list after the feature\nfreeze/beta starts.\n\n> \n> Working on the Release Announcement now ... Bruce, how accurate is the\n> current TODO list? If I go through it looking for all items marked as\n> '-', I come up with the following list. Is it missing anything? I know\n> not *everything* has to be listed, so I'm more afraid of listing something\n> that shouldn't then not listing enough ...\n> \n> The beta1.tar.gz snapshot has been created ... I'll put out an\n> announcement later tonight once I've heard back on this list, which also\n> gives some of the mirror sites a chance to sync up, and Vince a chance to\n> update the web site...\n> \n> =============================================\n> RELIABILITY\n> \n> RESOURCES\n> \n> * -Disallow inherited columns with the same name as new columns\n> * -Elog() does not free all its memory\n> * -spinlock stuck problem when elog(FATAL) and elog(ERROR) inside bufmgr\n> * -Recover or force failure when disk space is exhausted(Hiroshi)\n> \n> PARSER\n> \n> * -INSERT INTO ... SELECT with AS columns matching result columns problem\n> * -Select a[1] FROM test fails, it needs test.a[1](Tom)\n> * -Array index references without table name cause problems [array](Tom)\n> * -INSERT ... SELECT ... GROUP BY groups by target columns not source\n> columns(Tom)\n> * -CREATE TABLE test (a char(5) DEFAULT text '', b int4) fails on\n> INSERT(Tom)\n> * -UNION with LIMIT fails\n> * -CREATE TABLE x AS SELECT 1 UNION SELECT 2 fails\n> * -CREATE TABLE test(col char(2) DEFAULT user) fails in length\n> restriction\n> * -mismatched types in CREATE TABLE ... DEFAULT causes problems [default]\n> * -SELECT ... UNION ... ORDER BY fails when sort expr not in result list,\n> ORDER BY is applied only to the first SELECT\n> * -select * from pg_class where oid in (0,-1)\n> * -prevent primary key that exceeds max index columns [primary]\n> * -SELECT COUNT('asdf') FROM pg_class WHERE oid=12 crashes\n> * -require SELECT DISTINCT target list to have all ORDER BY columns\n> * -When using aggregates + GROUP BY, no rows in should yield no rows\n> out(Tom)\n> * -Allow HAVING to use comparisons that have no aggregates(Tom)\n> * -Allow COUNT(DISTINCT col))(TOm)\n> \n> VIEWS\n> \n> * -Views with spaces in view name fail when referenced\n> \n> MISC\n> \n> * -User who can create databases can modify pg_database table(Peter E)\n> * -Fix btree to give a useful elog when key > 1/2 (page - overhead)(Tom)\n> * -pg_dump should preserve primary key information\n> * -database names with spaces fail\n> * -insert of 0.0 into DECIMAL(4,4) field fails(Tom)\n> * -* Interlock to prevent DROP DATABASE on a database with running\n> backendsInterlock to prevent DROP DATABASE on a database with running\n> backends\n> \n> ENHANCEMENTS\n> \n> URGENT\n> \n> * -Add referential integrity(Jan)[primary]\n> * -Eliminate limits on query length\n> * -Fix memory leak for aggregates(Tom)\n> \n> ADMIN\n> \n> * -Better interface for adding to pg_group(Peter E)\n> * -Generate postmaster pid file and remove flock/fcntl lock\n> code[flock](Tatsuo)\n> \n> TYPES\n> \n> * -Add BIT, BIT VARYING\n> * -Allow pg_descriptions when creating tables\n> * -Allow pg_descriptions when creating types, columns, and functions\n> * -Allow LOCALE to use indexes in regular expression searches(Tom)\n> * -Allow array on int8[](Thomas)\n> * -Add index on NUMERIC/DECIMAL type(Jan)\n> * -Make Absolutetime/Relativetime int4 because time_t can be int8 on some\n> ports\n> * -Make type equivalency 
apply to aggregates\n> \n> INDEXES\n> \n> * -Permissions on indexes, prevent them(Peter E)\n> * -Allow indexing of LIKE with localle character sets\n> * -Allow indexing of more than eight columns\n> \n> COMMANDS\n> \n> * -Add ALTER TABLE DROP/ALTER COLUMN feature(Peter E)\n> * -Move LIKE index optimization handling to the optimizer(Tom)\n> \n> CLIENTS\n> \n> * -Allow flag to control COPY input/output of NULLs\n> * -Allow psql \\copy to allow delimiters\n> * -Add a function to return the last inserted oid, for use in psql\n> scripts (Peter E)\n> * -Allow psql to print nulls as distinct from \"\" [null]\n> \n> MISC\n> \n> * -Certain indexes will not shrink, i.e. oid indexes with many\n> inserts(Vadim)\n> * -Allow WHERE restriction on ctid(Hiroshi)\n> * -Allow PQrequestCancel() to terminate when in waiting-for-lock state\n> * -Allow subqueries in target list(Tom)\n> * -Document/trigger/rule so changes to pgshadow recreate pgpwd\n> [pg_shadow]\n> * -Overhaul mdmgr/smgr to fix double unlinking and double opens, cleanup\n> * -Add PL/Perl(Mark Hollomon)\n> * -Add option for postgres user have a password by default(Peter E)\n> \n> PERFORMANCE\n> \n> FSYNC\n> \n> * -Prevent fsync in SELECT-only queries(Vadim)\n> \n> INDEXES\n> \n> * -Convert function(constant) into a constant for index use(Bernard\n> Frankpitt)\n> * -Make index creation use psort code, because it is now faster(Tom)\n> * -Allow creation of sort temp tables > 1 Gig\n> * -Create more system table indexes for faster cache lookups\n> * -fix indexscan() so it does leak memory by not requiring caller to\n> free(Tom)\n> * -Improve btbinsrch() to handle equal keys better, remove\n> btfirsteq()(Tom)\n> * -Allow optimizer to prefer plans that match ORDER BY(Tom)\n> \n> CACHE\n> \n> * -elog() flushes cache, try invalidating just entries from current xact,\n> perhaps using invalidation cache\n> \n> MISC\n> \n> * -Fix memory exhaustion when using many OR's [cnfify](Tom)\n> * -Process const = const parts of OR clause in separate pass(Bernard\n> Frankpitt)\n> * -fix memory leak in cache code when non-existant table is referenced In\n> WHERE tab1.x=3 AND tab1.x=tab2.y, add tab2.y=3\n> * -pass atttypmod through parser in more cases [atttypmod]\n> * -remove duplicate type in/out functions for disk and net\n> \n> SOURCE CODE\n> \n> * -Add needed includes and removed unneeded include files(Bruce)\n> * -Make configure --enable-debug add -g on compile line\n> * -Pre-generate lex and yacc output so not required for install\n> \n> ==================================================\n> On Mon, 21 Feb 2000, Tom Lane wrote:\n> \n> > Don Baccus <[email protected]> writes:\n> > >> Think we'll have to\n> > >> leave it unfixed till Thomas gets back.\n> > \n> > > That would be plenty of time to get it in for the real 7.0 release.\n> > \n> > I don't like shipping betas with broken pg_dump; that makes life\n> > unreasonably difficult for beta testers, if we have to force another\n> > initdb before release. So I put in a quick hack solution: don't print\n> > the column alias list at all unless there is a table alias. This makes\n> > the rule's FROM clause conform to ANSI syntax. If you actually did\n> > write\n> > \tcreate view foo as SELECT alias FROM table table (alias);\n> > then it will dump as\n> > \tcreate view foo as SELECT table.realcolname AS alias FROM table;\n> > but there's no harm done. Better solution needed but I'll let Thomas\n> > provide it.\n> > \n> > And now, it's 4:30 PM AST and we are outta here ... 
right Marc?\n> > \n> > \t\t\tregards, tom lane\n> > \n> \n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 21 Feb 2000 19:53:39 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ?"
},
{
"msg_contents": "> At 07:57 PM 2/21/00 -0400, The Hermit Hacker wrote:\n> \n> > * -Add referential integrity(Jan)[primary]\n> \n> This is only PARTIALLY complete:\n> \n> MATCH FULL and MATCH <unspecified> foreign keys and their related\n> referential actions are complete. MATCH PARTIAL isn't in - I'll be\n> doing that for 7.1.\n\nAdded to TODO.\n\n> It doesn't check that the columns referenced in a foreign key\n> form a primary key or are contrained by UNIQUE in the referenced\n> table. This will be checked in 7.1, not sure who will do it (who\n> ever gets to it first, probably).\n\nAdded. \n\n* Foreign key does not check that columns referenced form a primary key \n or constrained by UNIQUE\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 21 Feb 2000 19:57:36 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ?"
},
{
"msg_contents": "> At 07:57 PM 2/21/00 -0400, The Hermit Hacker wrote:\n> \n> > * -Add referential integrity(Jan)[primary]\n> \n> This is only PARTIALLY complete:\n> \n> MATCH FULL and MATCH <unspecified> foreign keys and their related\n> referential actions are complete. MATCH PARTIAL isn't in - I'll be\n> doing that for 7.1.\n> \n> It doesn't check that the columns referenced in a foreign key\n> form a primary key or are contrained by UNIQUE in the referenced\n> table. This will be checked in 7.1, not sure who will do it (who\n> ever gets to it first, probably).\n> \n> Those are the two major user-visible loose ends with this feature.\n\nWhat about ALTER TABLE table DROP CONSTRAINT? I see this:\n\nalter table t1 drop constraint t1_fk cascade;\nERROR: ALTER TABLE / DROP CONSTRAINT is not implemented\n\nNote that we seem to have ALTER TABLE table ADD CONSTRAINT, though.\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 22 Feb 2000 13:45:39 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ? "
},
{
"msg_contents": "> > \n> > Those are the two major user-visible loose ends with this feature.\n> \n> What about ALTER TABLE table DROP CONSTRAINT? I see this:\n> \n> alter table t1 drop constraint t1_fk cascade;\n> ERROR: ALTER TABLE / DROP CONSTRAINT is not implemented\n> \n> Note that we seem to have ALTER TABLE table ADD CONSTRAINT, though.\n\nI thought that was going in. According to Marc, if it sufficiently\nwarned users, and required them to type it twice, it would do Peter's\nalter table code. Perhaps Peter decided to wait for 7.1?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 22 Feb 2000 00:09:11 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ?"
},
{
"msg_contents": "At 01:45 PM 2/22/00 +0900, Tatsuo Ishii wrote:\n\n>> Those are the two major user-visible loose ends with this feature.\n\n>What about ALTER TABLE table DROP CONSTRAINT? I see this:\n>\n>alter table t1 drop constraint t1_fk cascade;\n>ERROR: ALTER TABLE / DROP CONSTRAINT is not implemented\n>\n>Note that we seem to have ALTER TABLE table ADD CONSTRAINT, though.\n\n\"ALTER TABLE ... DROP CONSTRAINT\" I view as being a more general\nproblem not simply restricted to referential integrity. My comment\nwas meant to be strictly interpreted within the realm of RI. Obviously,\ngeneral dropping of columns and constraints needs to be solved, but these\naren't RI issues specifically.\n\nAnd, no, you don't have ALTER TABLE ... ADD CONSTRAINT. What you have\nis the ability to add foreign key constraints only. When this was\nadded, we (Stephan Szabo, myself, and Jan Wieck) discussed doing\ngeneral constraints, too, but Jan pointed out that we were all busy\nwith RI-specific stuff and that we should concentrate on those issue.\nA good call, IMO, as I was buried in trying to understand \"NO ACTION\"\nand \"MATCH <unspecified>\" at the same; Stephan was working on pg_dump;\nand Jan was really busy with his real job. I only had one weekend to\npour into implementing the proper semantics for the RI triggers, and\nas a result of our decision to concentrate on RI-specific issues was\nable to complete the necessary work for fully SQL92 compliant \"MATCH\n<unspecified>\" foreign keys.\n\nHowever, Stephan's ALTER TABLE ... work to allow you to add foreign\nkeys should be fairly easy to extend to general constraints, he and\nJan discussed this a couple of weeks ago.\n\n7.1 would seem to be the likely target for this.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 21 Feb 2000 21:15:05 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ? "
},
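A sketch of the foreign-key-only ADD CONSTRAINT support Don describes; the table and column names here are hypothetical:

    create table master (id int4 primary key);
    create table detail (master_id int4);

    -- add the foreign key after the fact; per the discussion in this thread,
    -- existing rows are checked before the constraint is accepted
    alter table detail add constraint detail_master_fk
        foreign key (master_id) references master (id);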
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> ERROR: ALTER TABLE / DROP CONSTRAINT is not implemented\n\n> I thought that was going in. According to Marc, if it sufficiently\n> warned users, and required them to type it twice, it would do Peter's\n> alter table code. Perhaps Peter decided to wait for 7.1?\n\nI thought the rest of us beat him up until he took it out ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 22 Feb 2000 00:31:34 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ? "
},
{
"msg_contents": "> \"ALTER TABLE ... DROP CONSTRAINT\" I view as being a more general\n> problem not simply restricted to referential integrity. My comment\n> was meant to be strictly interpreted within the realm of RI. Obviously,\n> general dropping of columns and constraints needs to be solved, but these\n> aren't RI issues specifically.\n\nThat's ok, as long as stated somewhere in TODO or whatever.\n\n> And, no, you don't have ALTER TABLE ... ADD CONSTRAINT. What you have\n> is the ability to add foreign key constraints only. When this was\n\nThis is more than ok:-) Since without ADD CONSTRAINTS, we could not\ndefine a circular referential integrity at all. Good job!\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 22 Feb 2000 14:33:59 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ? "
},
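Tatsuo's circular case is worth spelling out: neither table can declare its foreign key at CREATE time because the other table does not exist yet, so ADD CONSTRAINT is the only way in. A hypothetical sketch, assuming the deferrable options accepted in the table-constraint context:

    create table a (id int4 primary key, b_id int4);
    create table b (id int4 primary key, a_id int4);

    alter table a add constraint a_b_fk foreign key (b_id)
        references b (id) deferrable initially deferred;
    alter table b add constraint b_a_fk foreign key (a_id)
        references a (id) deferrable initially deferred;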
{
"msg_contents": "> A number of people thought 6.5 should have been called 7.0 because of\n> MVCC. A number of other people thought that this release should be 6.6,\n\nYou know, I actually woke in the middle of the night and said to myself, Why\ndid you call MVCC for MSVC. My only answer is that it was late, after a 16\nhours work day.\n\n> and the next one (which should have outer joins and much better VIEWs\n> thanks to a redesigned querytree representation) should be 7.0.\n\nCan't wait for this one. If you throw large objects in also, let's go straight\nto 8.0 :-)\n\n> OTOH, if you look less at bullet points on a feature list and more at\n> reliability and quality of implementation, there's plenty of material\n\nI didn't try to pick on the development or the state of PostgreSQL. I'm\nimpressed with the current speed of development and also the number of new\npeople joining in (you, Peter Eisentraut, maybe more) that relatively easy\nunderstands and becomes productive.\n",
"msg_date": "Tue, 22 Feb 2000 06:40:25 +0100",
"msg_from": "Kaare Rasmussen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TODO list / why 7.0 ?"
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> ERROR: ALTER TABLE / DROP CONSTRAINT is not implemented\n> \n> > I thought that was going in. According to Marc, if it sufficiently\n> > warned users, and required them to type it twice, it would do Peter's\n> > alter table code. Perhaps Peter decided to wait for 7.1?\n> \n> I thought the rest of us beat him up until he took it out ;-)\n\nYes, he was badly beaten up about it, but I felt that the code as is was\npretty good, considering how bad CLUSTER is. If people are told the\nlimitations, it could be a win.\n\nI felt that the more advanced features like not using 2x disk space were\nquite hard to implement, considering the other TODO items. Marc agreed\nand was going to e-mail him to tell him that with proper user warning,\nwe wanted the patch.\n\nDo people disagree?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 22 Feb 2000 00:43:37 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ?"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> I felt that the more advanced features like not using 2x disk space were\n> quite hard to implement, considering the other TODO items. Marc agreed\n> and was going to e-mail him to tell him that with proper user warning,\n> we wanted the patch.\n\n> Do people disagree?\n\nHmmm ... well ... I really don't want to restart that argument, but\nI thought the plurality of opinion was that we didn't want it until\na more complete implementation could be provided.\n\nCertainly I'm not enthused about shoehorning it in *after* we've\ngone to feature-freeze mode. If beta means anything around here,\nit means \"you missed the bus for adding features\".\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 22 Feb 2000 00:53:30 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ? "
},
{
"msg_contents": "Don Baccus wrote:\n> \n> At 07:57 PM 2/21/00 -0400, The Hermit Hacker wrote:\n> \n> > * -Add referential integrity(Jan)[primary]\n> \n> This is only PARTIALLY complete:\n> \n> MATCH FULL and MATCH <unspecified> foreign keys and their related\n> referential actions are complete. MATCH PARTIAL isn't in - I'll be\n> doing that for 7.1.\n\nCan anyone point me to a written description of the expected\nfunctionality (and maybe limitations) provided by this release of RI? \nI'm not asking for a definition of RI, but rather the status of\n*current* (7.0) pgsql RI functionality, i.e., what one should\nexpect...\n\nCheers,\nEd Loehr\n",
"msg_date": "Tue, 22 Feb 2000 00:34:01 -0600",
"msg_from": "Ed Loehr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ?"
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> Working on the Release Announcement now ...\n\n> * -SELECT ... UNION ... ORDER BY fails when sort expr not in result list,\n> ORDER BY is applied only to the first SELECT\n\nThis is still broken AFAIK. Not sure how it got marked as done.\n\n> * -Make type equivalency apply to aggregates\n\nIIRC, Peter should get the credit for that one.\n\n> * -Certain indexes will not shrink, i.e. oid indexes with many\n> inserts(Vadim)\n\nAFAIK that isn't done either.\n\n> * -Document/trigger/rule so changes to pgshadow recreate pgpwd\n> [pg_shadow]\n\nPeter's work also...\n\n> * -fix memory leak in cache code when non-existant table is referenced In\n> WHERE tab1.x=3 AND tab1.x=tab2.y, add tab2.y=3\n\nThis looks like 2 items got merged somehow. AFAIK only the first is\ndone.\n\n\nLooking at my own notes about completed changes, it sure seems like\nthere have been one heck of a lot of bugfixes and performance\nimprovements that don't correspond to anything on the official TODO\nlist. It might be worth making some opening remarks along that line\nrather than just presenting the checked-off items.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 22 Feb 2000 01:57:40 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ? "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Bruce Momjian <[email protected]> writes:\n> >> ERROR: ALTER TABLE / DROP CONSTRAINT is not implemented\n> \n> > I thought that was going in. According to Marc, if it sufficiently\n> > warned users, and required them to type it twice, it would do Peter's\n> > alter table code. Perhaps Peter decided to wait for 7.1?\n> \n> I thought the rest of us beat him up until he took it out ;-)\n> \n\nWas'nt that DROP COLUMN ? \n\nIs'nt DROP CONSTRAINT something completely different ?\n\nAFAIK constraints are not (should not;) be implemented as extra columns, \neven though they look like it in CREATE TABLE clause.\n\nSo removing them would actually mean deleting some rows from some system \ntable(s). And you don't even have to check the validity of existing data as \nhave to when doing ADD CONSTRAINT.\n\n----------------\nHannu\n",
"msg_date": "Tue, 22 Feb 2000 12:14:39 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ?"
},
{
"msg_contents": "\nOn Mon, 21 Feb 2000, Bruce Momjian wrote:\n\n> Should be accurate. I usually make a release list after the feature\n> freeze/beta starts.\n\n I'am not sure that TODO is accurate. The 7.0 has non-TODO (small) features \ntoo --> The Oracle compatible TO_CHAR() for date/time, numbers \n(int/float/numeric) formatting and (reverse) TO_DATE() / TO_TIMESTAMP() /\nTO_NUMBER() for string to number or data/time conversion. \n\nNumber part (TO_NUMBER() and TO_CHAR()) support locale and allow you convert\nnumber to locale-like number. \n\n\t\t\t\t\t\t\tKarel\n\nPS. for exact changes: \"diff -r --new-file 6.5.x 7.0\" :-))\n\n",
"msg_date": "Tue, 22 Feb 2000 11:26:11 +0100 (CET)",
"msg_from": "Karel Zak - Zakkr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ?"
},
{
"msg_contents": "On Tue, 22 Feb 2000, Bruce Momjian wrote:\n\n> > What about ALTER TABLE table DROP CONSTRAINT? I see this:\n\n> > Note that we seem to have ALTER TABLE table ADD CONSTRAINT, though.\n\n> Perhaps Peter decided to wait for 7.1?\n\nYes and no. I never had anything like this. I was afraid to get crossed up\nwith Jan. Anyway, to add/drop unique constraints create/drop the index. To\nadd/drop foreign keys, use create/drop constraint trigger(????). To\nadd/drop check contraints you're on your own. Not so bad all in all.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Tue, 22 Feb 2000 12:58:52 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ?"
},
{
"msg_contents": "On Mon, 21 Feb 2000, The Hermit Hacker wrote:\n\n> * -Add BIT, BIT VARYING\n\nThis is currently suffering from BIT rot in contrib. Not really usable.\nAnd we can't squeeze it in until the bootstrap scanner recognizes tokens\nwith spaces in it. (Does it?)\n\n> * -Add ALTER TABLE DROP/ALTER COLUMN feature(Peter E)\n\nSince there seems to be some confusion here: What currently exists all\ndone is ALTER TABLE ALTER COLUMN (which allows you to set and drop\ndefaults). What does not exist is DROP COLUMN and ADD/DROP CONTRAINT in\nits full glory.\n\n\nIf someone cares for accuracy, I also did these:\n\n> * -pg_dump should preserve primary key information\n> * -Allow flag to control COPY input/output of NULLs\n> * -Allow psql \\copy to allow delimiters\n> * -Allow psql to print nulls as distinct from \"\" [null]\n> * -Make configure --enable-debug add -g on compile line\n> * -Pre-generate lex and yacc output so not required for install\n> * -Make Absolutetime/Relativetime int4 because time_t can be int8 on some ports\n> * -Make type equivalency apply to aggregates\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Tue, 22 Feb 2000 13:07:29 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ? "
},
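For reference, the ALTER TABLE ALTER COLUMN support Peter mentions covers setting and dropping column defaults; a quick sketch on a hypothetical table t:

    alter table t alter column c set default 0;
    alter table t alter column c drop default;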
{
"msg_contents": "\n>Don Baccus wrote:\n>> \n>> At 07:57 PM 2/21/00 -0400, The Hermit Hacker wrote:\n>> \n>> > * -Add referential integrity(Jan)[primary]\n>> \n>> This is only PARTIALLY complete:\n>> \n>> MATCH FULL and MATCH <unspecified> foreign keys and their related\n>> referential actions are complete. MATCH PARTIAL isn't in - I'll be\n>> doing that for 7.1.\n>\n>Can anyone point me to a written description of the expected\n>functionality (and maybe limitations) provided by this release of RI? \n>I'm not asking for a definition of RI, but rather the status of\n>*current* (7.0) pgsql RI functionality, i.e., what one should\n>expect...\n\nWell, I have some documentation patches currently out for the people\nin the FK project, but we haven't gotten that completely finished,\nand there are a few subtle differences right now due to some stuff\nthat we weren't able to get finished, in the meantime, while we're\nworking on that, I believe the following should sum it up:\n\n* You can make both column and table constraints for foreign key\n constraints. Currently, column FK constraints may not \n specify NOT DEFERRABLE or INITIALLY (DEFERRED|IMMEDIATE)\n due to shift/reduce problems in the parser.\n\n* You can only specify MATCH FULL or use MATCH unspecified for \n the matching types. MATCH PARTIAL should be in 7.1.\n\n* If you do not specify the referenced columns, it will look for\n the primary key on the referenced table, but if you specify\n referenced columns, it will not guarantee that those columns\n actually are a foreign key or have a unique constraint upon \n them.\n\n* It does not enforce uniqueness of constraint names. (A big\n reason that I didn't also due an FK version of ALTER TABLE\n DROP CONSTRAINT) Theoretically the constraint names for\n unique, check and fk constraints must all be checked to\n guarantee uniqueness. Also, constraint names made by the\n system must also not conflict with existing constraint\n names, and probably should not fail, but instead have\n a way of forcing a unique name.\n\n* ALTER TABLE ADD CONSTRAINT will allow the adding of any\n foreign key constraint that would be legal in the \n table constraint context (hopefully). It also checks\n the current table data and refuses to add the constraint\n if the constraint would be immediately violated (again\n hopefully -- it's worked for our tests, but let's see\n what happens in the real world).\n\n* pg_dump will dump CREATE CONSTRAINT TRIGGER statements for\n tables with foreign key constraints. In data only dumps,\n pg_dump does a little bit of magic with the system catalogs\n to turn off all triggers on user defined tables and turn\n them back on at the end. It currently does not enforce\n that the data in between does not violate the constraint.\n This is unfortunate, but we didn't come up with a good\n way to do this for possibly circular fk constraints and\n still be able to deal with the possibility that the user\n may have changed the constraints since the dump, since\n it's data-only.\n\n[Anything you can think of to add to this Don?]\n\nStephan\n\n",
"msg_date": "Tue, 22 Feb 2000 07:15:20 -0500",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ? "
},
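Pulling Stephan's summary above together, a sketch of a table-level foreign key using the supported MATCH FULL form plus referential actions; the table and column names are hypothetical:

    create table cities (name text primary key);
    create table weather (
        city    text,
        temp_lo int4,
        foreign key (city) references cities (name)
            match full on delete cascade on update cascade
    );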
{
"msg_contents": "\n>> At 07:57 PM 2/21/00 -0400, The Hermit Hacker wrote:\n>> \n>> > * -Add referential integrity(Jan)[primary]\n>> \n>> This is only PARTIALLY complete:\n>> \n>> MATCH FULL and MATCH <unspecified> foreign keys and their related\n>> referential actions are complete. MATCH PARTIAL isn't in - I'll be\n>> doing that for 7.1.\n>> \n>> It doesn't check that the columns referenced in a foreign key\n>> form a primary key or are contrained by UNIQUE in the referenced\n>> table. This will be checked in 7.1, not sure who will do it (who\n>> ever gets to it first, probably).\n>> \n>> Those are the two major user-visible loose ends with this feature.\n>\n>What about ALTER TABLE table DROP CONSTRAINT? I see this:\n>\n>alter table t1 drop constraint t1_fk cascade;\n>ERROR: ALTER TABLE / DROP CONSTRAINT is not implemented\n>\n>Note that we seem to have ALTER TABLE table ADD CONSTRAINT, though.\n\nI looked at drop constraint for the foreign key case while I was doing\nadd constraint, but right now the system doesn't generate unique \nconstraint names (although it should) so drop constraint could be\ndangerous if you're not careful. Yeah, let me drop this RI constraint\n'<unknown>' that I just created... oops... And unfortunately, to be \nreally compliant, all of the constraint names have to be unique, and\nI really didn't want to hack something in that was going to make it\nharder to do it right later. \n\nStephan\n",
"msg_date": "Tue, 22 Feb 2000 07:21:28 -0500",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ? "
},
{
"msg_contents": "On Tue, 22 Feb 2000 [email protected] wrote:\n\n> * pg_dump will dump CREATE CONSTRAINT TRIGGER statements for\n> tables with foreign key constraints. In data only dumps,\n> pg_dump does a little bit of magic with the system catalogs\n> to turn off all triggers on user defined tables and turn\n> them back on at the end.\n\nWhatever happened to the idea of a SET command for this? IIRC, SQL already\nhas a contraint related set command (for deferring, etc.). Why not\noverload that to turn off foreign keys? I could imagine that being useful\nfor people developing database schemas.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Tue, 22 Feb 2000 13:49:41 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ? "
},
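The standard command Peter is alluding to is SET CONSTRAINTS, which controls when checking of deferrable constraints happens within a transaction; overloading it to switch foreign keys off entirely, as floated here, would be an extension. The SQL92 form, as a sketch:

    begin;
    set constraints all deferred;
    -- bulk changes that transiently violate the constraints ...
    set constraints all immediate;  -- deferred checks fire here
    commit;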
{
"msg_contents": "On Tue, 22 Feb 2000, Bruce Momjian wrote:\n\n> > > \n> > > Those are the two major user-visible loose ends with this feature.\n> > \n> > What about ALTER TABLE table DROP CONSTRAINT? I see this:\n> > \n> > alter table t1 drop constraint t1_fk cascade;\n> > ERROR: ALTER TABLE / DROP CONSTRAINT is not implemented\n> > \n> > Note that we seem to have ALTER TABLE table ADD CONSTRAINT, though.\n> \n> I thought that was going in. According to Marc, if it sufficiently\n> warned users, and required them to type it twice, it would do Peter's\n> alter table code. Perhaps Peter decided to wait for 7.1?\n\nUmmm...I don't recall ever talking about DROP CONTRAINT *raised eyebrow*\n\n\n",
"msg_date": "Tue, 22 Feb 2000 09:09:47 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ?"
},
{
"msg_contents": "On Tue, 22 Feb 2000, Bruce Momjian wrote:\n\n> > Bruce Momjian <[email protected]> writes:\n> > >> ERROR: ALTER TABLE / DROP CONSTRAINT is not implemented\n> > \n> > > I thought that was going in. According to Marc, if it sufficiently\n> > > warned users, and required them to type it twice, it would do Peter's\n> > > alter table code. Perhaps Peter decided to wait for 7.1?\n> > \n> > I thought the rest of us beat him up until he took it out ;-)\n> \n> Yes, he was badly beaten up about it, but I felt that the code as is was\n> pretty good, considering how bad CLUSTER is. If people are told the\n> limitations, it could be a win.\n\ngod, I hate this argument: we did it badly for CLUSTER, so its okay to do\nit badly here too :(\n\n> I felt that the more advanced features like not using 2x disk space were\n> quite hard to implement, considering the other TODO items. Marc agreed\n> and was going to e-mail him to tell him that with proper user warning,\n> we wanted the patch.\n\n\"agreed\" is a weak word in this sense ... :)\n\n> Do people disagree?\n\nI don't like it, and think with some effort, it could be done better, and\nwill stick with that ... but if \"its the best that can be done\" ...\n*shrug*\n\nBut, after 7.0 is released ... I still believe that the outstanding issues\nwere such that putting it into 7.0 was a bad thing ...\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 22 Feb 2000 09:12:35 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ?"
},
{
"msg_contents": "On Tue, 22 Feb 2000, Kaare Rasmussen wrote:\n\n> Can't wait for this one. If you throw large objects in also, let's go straight\n> to 8.0 :-)\n\nIMHO, 8.0 should be reserved for the first SQL Entry level (direct entry)\ncompliant release. A recent survey lead me to believe that, if we really\nmake a push, this is only two or three releases away.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Tue, 22 Feb 2000 14:14:23 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TODO list / why 7.0 ?"
},
{
"msg_contents": "> The Hermit Hacker <[email protected]> writes:\n> > Working on the Release Announcement now ...\n> \n> > * -SELECT ... UNION ... ORDER BY fails when sort expr not in result list,\n> > ORDER BY is applied only to the first SELECT\n> \n> This is still broken AFAIK. Not sure how it got marked as done.\n\nNot marked as done on my copy.\n\n> \n> > * -Make type equivalency apply to aggregates\n> \n> IIRC, Peter should get the credit for that one.\n\nAdded.\n\n> \n> > * -Certain indexes will not shrink, i.e. oid indexes with many\n> > inserts(Vadim)\n> \n> AFAIK that isn't done either.\n> \n\nFixed.\n\n> > * -Document/trigger/rule so changes to pgshadow recreate pgpwd\n> > [pg_shadow]\n\nAdded.\n\n> \n> Peter's work also...\n> \n> > * -fix memory leak in cache code when non-existant table is referenced In\n> > WHERE tab1.x=3 AND tab1.x=tab2.y, add tab2.y=3\n> \n> This looks like 2 items got merged somehow. AFAIK only the first is\n> done.\n\nSplit.\n\n> \n> \n> Looking at my own notes about completed changes, it sure seems like\n> there have been one heck of a lot of bugfixes and performance\n> improvements that don't correspond to anything on the official TODO\n> list. It might be worth making some opening remarks along that line\n> rather than just presenting the checked-off items.\n\nYes, that is what I will do by going through CVS. It is better for Marc\nto just specify the release and wait for my full release blurb coming in\na few days.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 22 Feb 2000 09:07:11 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ?"
},
{
"msg_contents": "> On Tue, 22 Feb 2000, Bruce Momjian wrote:\n> \n> > > What about ALTER TABLE table DROP CONSTRAINT? I see this:\n> \n> > > Note that we seem to have ALTER TABLE table ADD CONSTRAINT, though.\n> \n> > Perhaps Peter decided to wait for 7.1?\n> \n\n\nI was speaking off DROP COLUMN. Sorry to have added to the confusion.\n\n\n> Yes and no. I never had anything like this. I was afraid to get crossed up\n> with Jan. Anyway, to add/drop unique constraints create/drop the index. To\n> add/drop foreign keys, use create/drop constraint trigger(????). To\n> add/drop check contraints you're on your own. Not so bad all in all.\n> \n> -- \n> Peter Eisentraut Sernanders vaeg 10:115\n> [email protected] 75262 Uppsala\n> http://yi.org/peter-e/ Sweden\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 22 Feb 2000 09:11:52 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ?"
},
{
"msg_contents": "> On Mon, 21 Feb 2000, The Hermit Hacker wrote:\n> \n> > * -Add BIT, BIT VARYING\n> \n> This is currently suffering from BIT rot in contrib. Not really usable.\n> And we can't squeeze it in until the bootstrap scanner recognizes tokens\n> with spaces in it. (Does it?)\n\nAw man, I promised to put that into the main tree. Is it not usable? \nSpaces?\n\n> \n> > * -Add ALTER TABLE DROP/ALTER COLUMN feature(Peter E)\n> \n> Since there seems to be some confusion here: What currently exists all\n> done is ALTER TABLE ALTER COLUMN (which allows you to set and drop\n> defaults). What does not exist is DROP COLUMN and ADD/DROP CONTRAINT in\n> its full glory.\n> \n> \n> If someone cares for accuracy, I also did these:\n> \n> > * -pg_dump should preserve primary key information\n> > * -Allow flag to control COPY input/output of NULLs\n> > * -Allow psql \\copy to allow delimiters\n> > * -Allow psql to print nulls as distinct from \"\" [null]\n> > * -Make configure --enable-debug add -g on compile line\n> > * -Pre-generate lex and yacc output so not required for install\n> > * -Make Absolutetime/Relativetime int4 because time_t can be int8 on some ports\n> > * -Make type equivalency apply to aggregates\n\nTODO updated.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 22 Feb 2000 09:15:54 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ?"
},
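A minimal SQL sketch of the ALTER COLUMN functionality Peter describes in the message above; the table and column names are hypothetical:

    CREATE TABLE accounts (balance int4);       -- hypothetical table
    -- attach a default to an existing column
    ALTER TABLE accounts ALTER COLUMN balance SET DEFAULT 0;
    -- and remove it again
    ALTER TABLE accounts ALTER COLUMN balance DROP DEFAULT;

Inserts that omit the column pick up the default only while it is set; existing rows are not rewritten.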
{
"msg_contents": "> On Tue, 22 Feb 2000, Bruce Momjian wrote:\n> \n> > > > \n> > > > Those are the two major user-visible loose ends with this feature.\n> > > \n> > > What about ALTER TABLE table DROP CONSTRAINT? I see this:\n> > > \n> > > alter table t1 drop constraint t1_fk cascade;\n> > > ERROR: ALTER TABLE / DROP CONSTRAINT is not implemented\n> > > \n> > > Note that we seem to have ALTER TABLE table ADD CONSTRAINT, though.\n> > \n> > I thought that was going in. According to Marc, if it sufficiently\n> > warned users, and required them to type it twice, it would do Peter's\n> > alter table code. Perhaps Peter decided to wait for 7.1?\n> \n> Ummm...I don't recall ever talking about DROP CONTRAINT *raised eyebrow*\n\nAgain, I am a goof. I was thinking of DROP COLUMN.\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 22 Feb 2000 09:18:02 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ?"
},
{
"msg_contents": "At 02:33 PM 2/22/00 +0900, Tatsuo Ishii wrote:\n\n>> And, no, you don't have ALTER TABLE ... ADD CONSTRAINT. What you have\n>> is the ability to add foreign key constraints only. When this was\n>\n>This is more than ok:-) Since without ADD CONSTRAINTS, we could not\n>define a circular referential integrity at all. Good job!\n\nStephan Szabo did that particular piece of work, and, yeah, good job\nindeed, Stephan!\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Tue, 22 Feb 2000 06:43:32 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ? "
},
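A sketch of the circular referential integrity case mentioned above, which is exactly what ADD CONSTRAINT makes possible; the table names are hypothetical and the new deferrable foreign key support is assumed:

    CREATE TABLE a (id int4 PRIMARY KEY, b_id int4);
    CREATE TABLE b (id int4 PRIMARY KEY, a_id int4,
        FOREIGN KEY (a_id) REFERENCES a DEFERRABLE INITIALLY DEFERRED);
    -- a's reference to b can only be declared once b exists:
    ALTER TABLE a ADD CONSTRAINT a_b_fk FOREIGN KEY (b_id)
        REFERENCES b DEFERRABLE INITIALLY DEFERRED;

With both constraints deferred, a transaction can insert matching rows into a and b in either order and have the checks run at commit.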
{
"msg_contents": "At 12:34 AM 2/22/00 -0600, Ed Loehr wrote:\n\n>Can anyone point me to a written description of the expected\n>functionality (and maybe limitations) provided by this release of RI? \n>I'm not asking for a definition of RI, but rather the status of\n>*current* (7.0) pgsql RI functionality, i.e., what one should\n>expect...\n\nJan was working on docs, and I think maybe Stephan Szabo? But Jan\nseems to have dropped out of site, again - total immersion at work\nis my diagnosis. I actually enjoy doing documentation but I'm swamped\nat the moment, too.\n\nIn short...if you read Date's SQL primer, all foreign key functionality\nis there EXCEPT \"MATCH PARTIAL\" and the check on the target columns being\nconstrained UNIQUE or PRIMARY KEY. We also need to assign a unique name\nto the foreign key constraint if you don't give one (right now they're\njust listed \"unnamed\") because otherwise you don't know which of several\nforeign key constraints failed unless you explicitly name them yourself.\n\n(I forgot that one in my previous note).\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Tue, 22 Feb 2000 06:55:43 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ?"
},
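Until unnamed constraints get generated names, naming them explicitly sidesteps the problem Don describes; a sketch with hypothetical tables:

    -- orders and customers are hypothetical
    ALTER TABLE orders ADD CONSTRAINT orders_customer_fk
        FOREIGN KEY (customer_id) REFERENCES customers;
    -- a violation now reports orders_customer_fk rather than "unnamed"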
{
"msg_contents": "On Tue, 22 Feb 2000, Bruce Momjian wrote:\n\n> \n> Yes, that is what I will do by going through CVS. It is better for Marc\n> to just specify the release and wait for my full release blurb coming in\n> a few days.\n\n'K, will do that ... I also wanted to give a day for the miror sites to\npick up the beta ...\n\n\n",
"msg_date": "Tue, 22 Feb 2000 11:02:42 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ?"
},
{
"msg_contents": "At 12:58 PM 2/22/00 +0100, Peter Eisentraut wrote:\n\n>Yes and no. I never had anything like this. I was afraid to get crossed up\n>with Jan. Anyway, to add/drop unique constraints create/drop the index. To\n>add/drop foreign keys, use create/drop constraint trigger(????).\n\nTo add a foreign key try \"alter table foo add foreign key(column)\nreferences bar\".\n\nYou'll like what you see.\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Tue, 22 Feb 2000 07:09:54 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ?"
},
{
"msg_contents": "At 07:15 AM 2/22/00 -0500, [email protected] wrote:\n\n>* You can make both column and table constraints for foreign key\n> constraints. Currently, column FK constraints may not \n> specify NOT DEFERRABLE or INITIALLY (DEFERRED|IMMEDIATE)\n> due to shift/reduce problems in the parser.\n\nFixed by Thomas, I believe...though I've not tested it myself.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Tue, 22 Feb 2000 07:12:40 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ? "
},
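If the parser fix is indeed in, the column-level form should now accept the deferrability clauses directly; an untested sketch with hypothetical names:

    CREATE TABLE line_items (
        -- orders is a hypothetical referenced table
        order_id int4 REFERENCES orders
                      DEFERRABLE INITIALLY DEFERRED
    );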
{
"msg_contents": "On Tue, 22 Feb 2000, The Hermit Hacker wrote:\n\n> On Tue, 22 Feb 2000, Bruce Momjian wrote:\n> \n> > \n> > Yes, that is what I will do by going through CVS. It is better for Marc\n> > to just specify the release and wait for my full release blurb coming in\n> > a few days.\n> \n> 'K, will do that ... I also wanted to give a day for the miror sites to\n> pick up the beta ...\n\nAnyone have a **SHORT** list of highlights for the initial website\nannouncement?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN: $24.95/mo or less - 56K Dialup: $17.95/mo or less at Pop4\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 22 Feb 2000 10:46:06 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ?"
},
{
"msg_contents": "> On Tue, 22 Feb 2000, Bruce Momjian wrote:\n> \n> > \n> > Yes, that is what I will do by going through CVS. It is better for Marc\n> > to just specify the release and wait for my full release blurb coming in\n> > a few days.\n> \n> 'K, will do that ... I also wanted to give a day for the miror sites to\n> pick up the beta ...\n> \n\nThe cvs logs since 6.5.0 are 108k lines, and the merged file with\nduplicates removed is 21k lines. Man, that's big.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 22 Feb 2000 10:51:47 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ?"
},
{
"msg_contents": "Working on that now. Give me until the end of the day. I will have the\nusual paragraphs and a long list. Looks like no billable work today.\n\n\n> On Tue, 22 Feb 2000, The Hermit Hacker wrote:\n> \n> > On Tue, 22 Feb 2000, Bruce Momjian wrote:\n> > \n> > > \n> > > Yes, that is what I will do by going through CVS. It is better for Marc\n> > > to just specify the release and wait for my full release blurb coming in\n> > > a few days.\n> > \n> > 'K, will do that ... I also wanted to give a day for the miror sites to\n> > pick up the beta ...\n> \n> Anyone have a **SHORT** list of highlights for the initial website\n> announcement?\n> \n> Vince.\n> -- \n> ==========================================================================\n> Vince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n> 128K ISDN: $24.95/mo or less - 56K Dialup: $17.95/mo or less at Pop4\n> Online Campground Directory http://www.camping-usa.com\n> Online Giftshop Superstore http://www.cloudninegifts.com\n> ==========================================================================\n> \n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 22 Feb 2000 11:24:10 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ?"
},
{
"msg_contents": "On Tue, 22 Feb 2000, Bruce Momjian wrote:\n\n> Working on that now. Give me until the end of the day. I will have the\n> usual paragraphs and a long list. Looks like no billable work today.\n\nOk. I'm using Marc's announcement for the announcements and news, when\nI get yours I'll replace the one on news and put in a pointer.\n\nVince.\n\n> \n> \n> > On Tue, 22 Feb 2000, The Hermit Hacker wrote:\n> > \n> > > On Tue, 22 Feb 2000, Bruce Momjian wrote:\n> > > \n> > > > \n> > > > Yes, that is what I will do by going through CVS. It is better for Marc\n> > > > to just specify the release and wait for my full release blurb coming in\n> > > > a few days.\n> > > \n> > > 'K, will do that ... I also wanted to give a day for the miror sites to\n> > > pick up the beta ...\n> > \n> > Anyone have a **SHORT** list of highlights for the initial website\n> > announcement?\n> > \n> > Vince.\n> > -- \n> > ==========================================================================\n> > Vince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n> > 128K ISDN: $24.95/mo or less - 56K Dialup: $17.95/mo or less at Pop4\n> > Online Campground Directory http://www.camping-usa.com\n> > Online Giftshop Superstore http://www.cloudninegifts.com\n> > ==========================================================================\n> > \n> > \n> > \n> > \n> \n> \n> \n\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN: $24.95/mo or less - 56K Dialup: $17.95/mo or less at Pop4\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 22 Feb 2000 11:35:11 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ?"
},
{
"msg_contents": "On 2000-02-22, Bruce Momjian mentioned:\n\n> > On Mon, 21 Feb 2000, The Hermit Hacker wrote:\n> > \n> > > * -Add BIT, BIT VARYING\n> > \n> > This is currently suffering from BIT rot in contrib. Not really usable.\n> > And we can't squeeze it in until the bootstrap scanner recognizes tokens\n> > with spaces in it. (Does it?)\n> \n> Aw man, I promised to put that into the main tree. Is it not usable? \n> Spaces?\n\nSomehow you have to do something similar to\ninsert OID = 9999 ( bit varying PGUID 1 1 t b t \\054 0 0 bitvaryingin ... )\n\nAnd no, naming the type bit_varying internally is not an acceptable\nanswer. ;) We might want to start thinking about this item before national\ncharacter comes our way. (Or just document the solution, if it already\nexists.)\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Wed, 23 Feb 2000 02:20:49 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ?"
},
{
"msg_contents": "[Charset ISO-8859-1 unsupported, filtering to ASCII...]\n> On 2000-02-22, Bruce Momjian mentioned:\n> \n> > > On Mon, 21 Feb 2000, The Hermit Hacker wrote:\n> > > \n> > > > * -Add BIT, BIT VARYING\n> > > \n> > > This is currently suffering from BIT rot in contrib. Not really usable.\n> > > And we can't squeeze it in until the bootstrap scanner recognizes tokens\n> > > with spaces in it. (Does it?)\n> > \n> > Aw man, I promised to put that into the main tree. Is it not usable? \n> > Spaces?\n> \n> Somehow you have to do something similar to\n> insert OID = 9999 ( bit varying PGUID 1 1 t b t \\054 0 0 bitvaryingin ... )\n> \n> And no, naming the type bit_varying internally is not an acceptable\n> answer. ;) We might want to start thinking about this item before national\n> character comes our way. (Or just document the solution, if it already\n> exists.)\n\nHuh, I still don't get it. What is the matter with that insert?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 22 Feb 2000 20:30:18 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ?"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Somehow you have to do something similar to\n>> insert OID = 9999 ( bit varying PGUID 1 1 t b t \\054 0 0 bitvaryingin ... )\n\n> Huh, I still don't get it. What is the matter with that insert?\n\nThe space in the type name is gonna confuse things.\n\n>> And no, naming the type bit_varying internally is not an acceptable\n>> answer. ;) We might want to start thinking about this item before national\n>> character comes our way. (Or just document the solution, if it already\n>> exists.)\n\nAFAICS the solution would have to be similar to what we already do for\nCHARACTER VARYING: parse the type declaration specially in gram.y,\nand translate it to an internal type name.\n\ngram.y already knows about NATIONAL CHARACTER [ VARYING ] too, BTW.\nSeems to just translate it into bpchar or varchar :-( ... but the\nsyntax problem is solved.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 23 Feb 2000 01:14:31 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ? "
},
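The translation Tom describes can be seen from SQL itself: the SQL92 spelling is folded to the one-word internal name before it ever reaches the catalogs (the table name is hypothetical):

    CREATE TABLE t (c CHARACTER VARYING(10));   -- hypothetical table
    SELECT a.attname, tp.typname
      FROM pg_class c, pg_attribute a, pg_type tp
     WHERE c.relname = 't'
       AND a.attrelid = c.oid AND a.attnum > 0
       AND a.atttypid = tp.oid;
    -- typname comes back as 'varchar', not 'character varying'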
{
"msg_contents": "On Wed, 23 Feb 2000, Tom Lane wrote:\n\n> >> insert OID = 9999 ( bit varying PGUID 1 1 t b t \\054 0 0 bitvaryingin ... )\n\n> The space in the type name is gonna confuse things.\n\n> AFAICS the solution would have to be similar to what we already do for\n> CHARACTER VARYING: parse the type declaration specially in gram.y,\n> and translate it to an internal type name.\n\nThose are only workarounds on the backend level though. Every new hack\nlike this will require fixing every client applicatiion to translate that\ntype right. It's fine with CHARACTER VARYING, because VARCHAR is an\nofficial alias (although it's not the real type name, mind you), but there\nis no VARBIT or NVARCHAR. It seems that allowing something like\n\n\tbit\\ varying\n\nin the bootstrap scanner will solve the problem where it's being caused.\nInternal type names should go away, not accumulate. ;)\n\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Wed, 23 Feb 2000 13:54:42 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ? "
},
{
"msg_contents": "I am still going through the CVS logs, and I can already say that this\nrelease will have more updated items than any previous release. We can\nblame Tom Lane for most of this. :-)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 23 Feb 2000 11:51:56 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ?"
},
{
"msg_contents": "> I am still going through the CVS logs, and I can already say that this\n> release will have more updated items than any previous release. We can\n> blame Tom Lane for most of this. :-)\n\nLet me cast blame on Peter Eisentraut too. ;-)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 23 Feb 2000 12:08:14 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ?"
},
{
"msg_contents": "> > >> insert OID = 9999 ( bit varying PGUID 1 1 ...\n> > The space in the type name is gonna confuse things.\n> > AFAICS the solution would have to be similar to what we already do for\n> > CHARACTER VARYING: parse the type declaration specially in gram.y,\n> > and translate it to an internal type name.\n> Those are only workarounds on the backend level though. Every new hack\n> like this will require fixing every client applicatiion to translate that\n> type right. It's fine with CHARACTER VARYING, because VARCHAR is an\n> official alias (although it's not the real type name, mind you), but there\n> is no VARBIT or NVARCHAR. It seems that allowing something like\n> bit\\ varying\n> in the bootstrap scanner will solve the problem where it's being caused.\n> Internal type names should go away, not accumulate. ;)\n\nI'm not sure that I agree that multi-word character types are required\ninternally. Somehow that seems to just push the problem of\nSQL92-specific syntax to another part of the code. We could just as\neasily (?) translate *every* \"xxx VARYING\" to \"varxxx\" on input, and\ndo the inverse on output or pg_dump.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Mon, 28 Feb 2000 08:51:50 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ?"
},
{
"msg_contents": "> > in the bootstrap scanner will solve the problem where it's being caused.\n> > Internal type names should go away, not accumulate. ;)\n> \n> I'm not sure that I agree that multi-word character types are required\n> internally. Somehow that seems to just push the problem of\n> SQL92-specific syntax to another part of the code. We could just as\n> easily (?) translate *every* \"xxx VARYING\" to \"varxxx\" on input, and\n> do the inverse on output or pg_dump.\n\nYes, seems we just don't want to do that during beta. I forgot about\nthis item I had promised. Sorry.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 28 Feb 2000 04:11:51 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ?"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> It seems that allowing something like\n>> bit\\ varying\n>> in the bootstrap scanner will solve the problem where it's being caused.\n>> Internal type names should go away, not accumulate. ;)\n\n> I'm not sure that I agree that multi-word character types are required\n> internally. Somehow that seems to just push the problem of\n> SQL92-specific syntax to another part of the code.\n\nIt doesn't push it anywhere: you still have the problem that the parser\nexpects type names to be single tokens, not multiple tokens, and any\nexceptions need to be special-cased in the grammar. We can handle that\nfor the few multi-word type names decreed by SQL92. But allowing\ninternal type names to be multi-word as well will create more headaches\nin other places (even if it doesn't make the grammar ambiguous, which\nit well might). I think the bootstrap scanner would just be the tip of\nthe iceberg...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 28 Feb 2000 09:46:10 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ? "
},
{
"msg_contents": "Tom Lane writes:\n\n> > I'm not sure that I agree that multi-word character types are required\n> > internally. Somehow that seems to just push the problem of\n> > SQL92-specific syntax to another part of the code.\n> \n> It doesn't push it anywhere: you still have the problem that the parser\n> expects type names to be single tokens, not multiple tokens, and any\n> exceptions need to be special-cased in the grammar. We can handle that\n> for the few multi-word type names decreed by SQL92. But allowing\n> internal type names to be multi-word as well will create more headaches\n> in other places (even if it doesn't make the grammar ambiguous, which\n> it well might). I think the bootstrap scanner would just be the tip of\n> the iceberg...\n\nI don't get that. What's wrong with (conceptually) having a rule like\nthis:\n\nType: TIME { $$ = \"time\"; }\n | REAL { $$ = \"real\"; }\n | CHAR { $$ = \"char\"; }\n | BIT VARYING { $$ = \"bit varying\"; }\n | Id { $$ = $1; } /* potentially user-defined type */\n\nThis is pretty much what it does now, only that the right side of $$ =\n\"...\" never contains a space, which is purely coincidental.\n\nThe list of multi-token SQL types is very finite. Any user-defined\ntypes with spaces would have to use the usual double-quote mechanism. The\nadvantage of the above is that once I have \"bit varying\" in the catalog, I\ndon't have to worry mangling it when I want to get it out.\n\nI don't understand where you get the notion of \"multiworded internal\ntypes\" from. All that would be required is concatenating a set of specific\ntoken combinations to one and you're done. Once that is done, no one\nworries about the fact that there is in fact a space in the type name.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Tue, 29 Feb 2000 00:19:22 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ? "
},
{
"msg_contents": "Thomas Lockhart writes:\n\n> I'm not sure that I agree that multi-word character types are required\n> internally. Somehow that seems to just push the problem of\n> SQL92-specific syntax to another part of the code. We could just as\n> easily (?) translate *every* \"xxx VARYING\" to \"varxxx\" on input, and\n> do the inverse on output or pg_dump.\n\nOn the one hand I propose what seems like editing a handful of lines in\nthe bootstrap scanner (an internal interface) to solve this problem once\nand for all. What you are proposing is that every client interface (libpq,\nSPI, PL du jour, who knows) will have to know a list of the latest hacks\nof type conversions in the backend. And it would be very confusing to\npeople defining user types like \"varxxx\".\n\nYou can define user types with spaces in them (note to self: better check\nthis), so I don't see why we should hack around it. What do you plan on\ndoing with DOUBLE PRECISION and TIME WITH TIMEZONE?\n\nConfused ...\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Tue, 29 Feb 2000 00:20:05 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ?"
},
{
"msg_contents": "[Charset ISO-8859-1 unsupported, filtering to ASCII...]\n> Thomas Lockhart writes:\n> \n> > I'm not sure that I agree that multi-word character types are required\n> > internally. Somehow that seems to just push the problem of\n> > SQL92-specific syntax to another part of the code. We could just as\n> > easily (?) translate *every* \"xxx VARYING\" to \"varxxx\" on input, and\n> > do the inverse on output or pg_dump.\n> \n> On the one hand I propose what seems like editing a handful of lines in\n> the bootstrap scanner (an internal interface) to solve this problem once\n> and for all. What you are proposing is that every client interface (libpq,\n> SPI, PL du jour, who knows) will have to know a list of the latest hacks\n\nlibpq doesn't know anything about syntax. It is mostly gram.y files. I\nthink ecpg is the only other one that needs the fix.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 28 Feb 2000 18:30:26 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ?"
},
{
"msg_contents": "(sorry for the delay in responding...)\n\n> You can define user types with spaces in them (note to self: better check\n> this), so I don't see why we should hack around it. What do you plan on\n> doing with DOUBLE PRECISION and TIME WITH TIMEZONE?\n\nDOUBLE PRECISION is already supported, and becomes float8. TIME WITH\nTIMEZONE is currently transparently swallowed to become equivalent to\nTIME, for reasons spelled out in the docs. I've toyed with the idea of\nimplementing the SQL92 version of it, but it is *so* useless and brain\ndamaged (cf Date et al) that I (at least so far) cannot bring myself\nto do so. But if and when, it might be ztime internally.\n\nIt is unlikely that we can transparently parse two-word types in\ngram.y without explicit support for it. Just adding IDENT IDENT to\nsimple types leads to a shift/reduce conflict.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 01 Mar 2000 06:50:45 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ?"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> It is unlikely that we can transparently parse two-word types in\n> gram.y without explicit support for it. Just adding IDENT IDENT to\n> simple types leads to a shift/reduce conflict.\n\nRight. I think what Peter is actually suggesting is that BIT VARYING\n(which must be special-cased in gram.y) could be equivalent to\n\"bit varying\" (as a quoted identifier, that works already in most\nplaces, and arguably should work everywhere). There's a certain amount\nof intellectual cleanliness in that. OTOH, it's not apparent that it's\nreally any *better* than `varbit' or your choice of other space-free\ninternal names.\n\nIf SQL92 were a moving target then I'd be concerned about having to\ntrack the special cases in a lot of bits of code ... but it's not\na moving target.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 01 Mar 2000 01:53:37 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ? "
},
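The quoted-identifier behavior Tom refers to is easy to check with any user object — double quotes let a name carry a space (a sketch; the names are hypothetical):

    CREATE TABLE "odd table" ("odd column" int4);   -- hypothetical names
    INSERT INTO "odd table" VALUES (1);
    SELECT "odd column" FROM "odd table";

The open question is only whether a type named "bit varying" would survive every code path that handles type names, not whether the quoting itself works.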
{
"msg_contents": "On Wed, 1 Mar 2000, Thomas Lockhart wrote:\n\n> It is unlikely that we can transparently parse two-word types in\n> gram.y without explicit support for it. Just adding IDENT IDENT to\n> simple types leads to a shift/reduce conflict.\n\nI am not saying that we should support two token types in general. Only\nthe SQL types. We already do that anyway, like (kind of)\n\nType: CHARACTER VARYING { $$ = \"varchar\"; }\n | etc.\n\nAll I'm saying is that we add\n\n | BIT VARYING { $$ = \"bit varying\"; }\n\nNo problem so far, right? Especially, if this is dumped out, then it\nbecomes bit varying without any extra effort.\n\nThe only problem is that with the current syntax the bootstrap scanner\ncannot insert fields that contain spaces. Simple fix there, and we're\ndone.\n\nTo be clear again: I am not vaguely suggesting that we support any\nmulti-token types. I am just saying that we shouldn't introduce any new\nand unnecessary external/internal type discrepancies just because the\nbootstrap scanner is stupid.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Wed, 1 Mar 2000 15:47:11 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ?"
},
{
"msg_contents": "On Wed, 1 Mar 2000, Thomas Lockhart wrote:\n\n> TIME WITH\n> TIMEZONE is currently transparently swallowed to become equivalent to\n> TIME, for reasons spelled out in the docs. I've toyed with the idea of\n> implementing the SQL92 version of it, but it is *so* useless and brain\n> damaged (cf Date et al) that I (at least so far) cannot bring myself\n> to do so. But if and when, it might be ztime internally.\n\nI've read the documentation and SQL92 and I can't see anything wrong with\nit. Care to enlighten me?\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Wed, 1 Mar 2000 15:48:38 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ?"
},
{
"msg_contents": "On Wed, 1 Mar 2000, Tom Lane wrote:\n\n> Right. I think what Peter is actually suggesting is that BIT VARYING\n> (which must be special-cased in gram.y) could be equivalent to\n> \"bit varying\" (as a quoted identifier, that works already in most\n> places, and arguably should work everywhere). There's a certain amount\n> of intellectual cleanliness in that.\n\n{Grin} That's exactly what I wanted.\n\n> OTOH, it's not apparent that it's really any *better* than `varbit' or\n> your choice of other space-free internal names.\n\nIt's better because then you don't need any special casing when you\nprovide the type back to the client. And it's better because you don't\nneed to remember that \"foo\" is really \"bar\" internally. And it's better\nbecause it wouldn't disallow users from defining \"varbit\" themselves with\nthe non-obvious error message that it already exists. (Okay, the last is a\nweak reason, but it is one.) Finally, it's better because it already\nworks, with only a minor change in the bootstrap scanner necessary.\n\n> If SQL92 were a moving target then I'd be concerned about having to\n> track the special cases in a lot of bits of code ... but it's not\n> a moving target.\n\nBut PostgreSQL is a moving target in all regards. Where would you want to\ndo the endless internal/external type conversions on the way to the\nclient. In pg_dump? In psql? In libpq? In the server communications code?\nMake a view around pg_type? How about nowhere and we just do the above?\n\nSpecial cases suck. ;)\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Wed, 1 Mar 2000 16:15:26 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ? "
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> All I'm saying is that we add\n\n> | BIT VARYING { $$ = \"bit varying\"; }\n\n> No problem so far, right? Especially, if this is dumped out, then it\n> becomes bit varying without any extra effort.\n\nWell, no, it becomes \"bit varying\", *with* quotes, if the dumper is\nnot broken. (Unless we special-case the dumper to know that this\nparticular typename doesn't need to be quoted despite its embedded\nspace --- but that's what you hoped to avoid.) So there's no automatic\nway of producing a completely SQL-compliant dump for this type name,\nand that removes what would otherwise be (IMHO) the strongest argument\nfor making the internal name be \"bit varying\" and not \"varbit\" or\nwhatever.\n\nA much more significant problem for this particular datatype is that it\nrequires special syntax regardless, namely a length spec like the ones\nfor char and varchar:\n\n <bit string type> ::=\n BIT [ <left paren> <length> <right paren> ]\n | BIT VARYING <left paren> <length> <right paren>\n\nCurrently, char and numeric (the existing types that need length\nspecifications) have to be special-cased everywhere in order to\nparse or append the length info. I foresee the same will be needed\nfor bit and bit varying. If you can find a way to avoid\nthat special-case logic, I'll get a lot more excited about not\nhaving to treat \"bit varying\" as a special-case name.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 01 Mar 2000 10:52:07 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "BIT/BIT VARYING names (was Re: [HACKERS] Beta for 4:30AST)"
},
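For reference, the declarations that spec excerpt calls for would look like this once the length-spec parsing is in place — a sketch of the target syntax, not something the tree accepts today:

    CREATE TABLE flags (
        a BIT(8),             -- fixed-width bit string
        b BIT VARYING(16)     -- variable width, like varchar
    );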
{
"msg_contents": "> I've read the documentation and SQL92 and I can't see anything wrong with\n> it. Care to enlighten me?\n\nSQL92 \"TIME WITH TIMEZONE\" carries a single numeric timezone with each\ntime field. It has no provision for daylight savings time. And a time\nfield without an associated date has imho no possibility for a\nmeaningful \"timezone\" or a meaningful usage. So the definitions and\nfeatures are completely at odds with typical date and time usage and\nrequirements in many countries around the world.\n\nDate et al discuss this, and have the same opinion, so the gods are\nwith me on this one ;)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 01 Mar 2000 16:49:36 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta for 4:30AST ... ?"
},
{
"msg_contents": "On Wed, 1 Mar 2000, Tom Lane wrote:\n\n> Well, no, it becomes \"bit varying\", *with* quotes, if the dumper is\n> not broken.\n\nI know, but consider psql and others just using plain libpq functionality.\n\n> for bit and bit varying. If you can find a way to avoid\n> that special-case logic, I'll get a lot more excited about not\n> having to treat \"bit varying\" as a special-case name.\n\nNOOOOOOOOOOOOOO. I'm not trying to treat \"bit varying\" as a special case\nname. I want to treat it as a normal name. There's absolutely no\ndifference whether the pg_type entry for the type represented by the\ntokens BIT VARYING is \"varbit\", \"bit varying\", or \"foo\". I'm just saying\nthat the second would be more obvious and convenient, but that it would\nrequire a small fix somewhere.\n\nWe're not going to allow any usertype(x) syntax in this life time, are we,\nand the fact remains that we have to parse the reserved-word SQL types\nseparately. But this has all nothing to do with what I'm saying. Why\ndoesn't anyone understand me?\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Wed, 1 Mar 2000 19:35:44 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BIT/BIT VARYING names (was Re: [HACKERS] Beta for 4:30AST)"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> NOOOOOOOOOOOOOO. I'm not trying to treat \"bit varying\" as a special case\n> name. I want to treat it as a normal name. There's absolutely no\n> difference whether the pg_type entry for the type represented by the\n> tokens BIT VARYING is \"varbit\", \"bit varying\", or \"foo\". I'm just saying\n> that the second would be more obvious and convenient, but that it would\n> require a small fix somewhere.\n\nOK, fair enough, but the thing is: is the bootstrap parser the only\nplace that will have to be changed to make this possible? I doubt it.\nThe fix may not be as small as you expect.\n\nThere's another issue, which is that the routines that implement\noperations for a particular type are generally named after the type's\ninternal name. I trust you are not going to propose that we find a way\nto put spaces into C function names ;-). It seems to me that the\nconfusion created by having support code named differently from the\ntype's internal name is just as bad as having the internal name\ndifferent from the external name.\n\nThis being the case, it seems like \"bit_varying\" might be a reasonable\ncompromise for the internal name, and that should work already...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 01 Mar 2000 14:58:07 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BIT/BIT VARYING names (was Re: [HACKERS] Beta for 4:30AST) "
},
{
"msg_contents": "> There's another issue, which is that the routines that implement\n> operations for a particular type are generally named after the type's\n> internal name. I trust you are not going to propose that we find a way\n> to put spaces into C function names ;-). It seems to me that the\n> confusion created by having support code named differently from the\n> type's internal name is just as bad as having the internal name\n> different from the external name.\n> \n> This being the case, it seems like \"bit_varying\" might be a reasonable\n> compromise for the internal name, and that should work already...\n\nHaving only one type with an underscore seems like a mistake. We already\ndon't have internal names matching. I would just make it bit, bitvar,\nor maybe varbit.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 1 Mar 2000 15:26:59 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BIT/BIT VARYING names (was Re: [HACKERS] Beta for 4:30AST)"
},
{
"msg_contents": "> ... But this has all nothing to do with what I'm saying. Why\n> doesn't anyone understand me?\n\nUh, could be that we're all a bunch of idiots. Of course, I'd prefer\nsome other explanation... :))\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 02 Mar 2000 06:29:10 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BIT/BIT VARYING names (was Re: [HACKERS] Beta for 4:30AST)"
},
{
"msg_contents": "Tom Lane writes:\n\n> There's another issue, which is that the routines that implement\n> operations for a particular type are generally named after the type's\n> internal name. I trust you are not going to propose that we find a way\n> to put spaces into C function names ;-). It seems to me that the\n> confusion created by having support code named differently from the\n> type's internal name is just as bad as having the internal name\n> different from the external name.\n> \n> This being the case, it seems like \"bit_varying\" might be a reasonable\n> compromise for the internal name, and that should work already...\n\nOkay, that's the first reasonable argument I've heard in this thread, and\nI'll buy it. Since correspondence between internal type names and function\nnames *is* achievable without hacks we might as well go for this one.\n\nIn turn I'm thinking that it might be nice to have a backend function like\nformat_type(name[, int4]) that formats an internal type and any size\nmodifier for client consumption, like\n\n\tformat_type('varchar', 8) => \"CHARACTER VARYING(8)\"\n\tformat_type('my type') => \"\\\"my type\\\"\"\n\tformat_type('numeric', {xxx}) => \"NUMERIC(9,2)\"\n\nThat could put an end to keeping track of backend implementation details\nin psql, pg_dump, and friends.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Sat, 4 Mar 2000 18:06:37 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BIT/BIT VARYING names (was Re: [HACKERS] Beta for 4:30AST) "
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> In turn I'm thinking that it might be nice to have a backend function like\n> format_type(name[, int4]) that formats an internal type and any size\n> modifier for client consumption, like\n\n> \tformat_type('varchar', 8) => \"CHARACTER VARYING(8)\"\n> \tformat_type('my type') => \"\\\"my type\\\"\"\n> \tformat_type('numeric', {xxx}) => \"NUMERIC(9,2)\"\n\n> That could put an end to keeping track of backend implementation details\n> in psql, pg_dump, and friends.\n\nSeems like a good idea, though I think it's a bit late in the 7.0 cycle\nfor such a change. Maybe for 7.1?\n\nAlso, I assume you mean that the int4 arg would be the typmod value ---\nyour examples are not right in detail for that interpretation.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 07 Mar 2000 01:58:22 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BIT/BIT VARYING names (was Re: [HACKERS] Beta for 4:30AST) "
},
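A minimal sketch of the proposed function, assuming the int4 argument is the raw typmod as Tom suggests (so varchar(8) arrives as 12, the length plus the 4-byte header); format_type_sketch and the integer-to-text coercion are assumptions for illustration, not existing backend code:

    -- hypothetical user-level sketch; the real thing would live in C
    CREATE FUNCTION format_type_sketch(text, int4) RETURNS text AS '
        SELECT CASE
            WHEN $1 = ''varchar''
                THEN ''character varying('' || ($2 - 4) || '')''
            WHEN $1 = ''bpchar''
                THEN ''character('' || ($2 - 4) || '')''
            ELSE ''"'' || $1 || ''"''
        END
    ' LANGUAGE 'sql';

    SELECT format_type_sketch('varchar', 12);
    -- character varying(8)

A C implementation in the backend would let psql and pg_dump share one authoritative formatter instead of each tracking the internal names.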
{
"msg_contents": "> > format_type(name[, int4]) that formats an internal type and any size\n> > modifier for client consumption, like\n> > format_type('varchar', 8) => \"CHARACTER VARYING(8)\"\n> > format_type('my type') => \"\\\"my type\\\"\"\n> > format_type('numeric', {xxx}) => \"NUMERIC(9,2)\"\n\nOoh, that *is* a good idea (though the exact name of the function may\nevolve)! Sorry I missed seeing it in Peter's earlier postings.\n\nFunny how we can go for years banging our heads on an issue and have\nsomething like this (ie a good idea on the subject) pop up out of the\nblue.\n\nPresumably we would include a function taking the conversion the other\ndirection too...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 07 Mar 2000 15:47:04 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BIT/BIT VARYING names (was Re: [HACKERS] Beta for 4:30AST)"
},
{
"msg_contents": "> Okay, that's the first reasonable argument I've heard in this thread, and\n> I'll buy it. Since correspondence between internal type names and function\n> names *is* achievable without hacks we might as well go for this one.\n> \n> In turn I'm thinking that it might be nice to have a backend function like\n> format_type(name[, int4]) that formats an internal type and any size\n> modifier for client consumption, like\n> \n> \tformat_type('varchar', 8) => \"CHARACTER VARYING(8)\"\n> \tformat_type('my type') => \"\\\"my type\\\"\"\n> \tformat_type('numeric', {xxx}) => \"NUMERIC(9,2)\"\n> \n> That could put an end to keeping track of backend implementation details\n> in psql, pg_dump, and friends.\n\nGreat idea! psql and pg_dump can use it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 7 Mar 2000 18:25:49 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BIT/BIT VARYING names (was Re: [HACKERS] Beta for 4:30AST)"
},
{
"msg_contents": "> > > format_type(name[, int4]) that formats an internal type and any size\n> > > modifier for client consumption, like\n> > > format_type('varchar', 8) => \"CHARACTER VARYING(8)\"\n> > > format_type('my type') => \"\\\"my type\\\"\"\n> > > format_type('numeric', {xxx}) => \"NUMERIC(9,2)\"\n> \n> Ooh, that *is* a good idea (though the exact name of the function may\n> evolve)! Sorry I missed seeing it in Peter's earlier postings.\n> \n> Funny how we can go for years banging our heads on an issue and have\n> something like this (ie a good idea on the subject) pop up out of the\n> blue.\n> \n> Presumably we would include a function taking the conversion the other\n> direction too...\n\nNot sure it is really needed. We already to the translation in gram.y.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 7 Mar 2000 20:30:17 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BIT/BIT VARYING names (was Re: [HACKERS] Beta for 4:30AST)"
},
{
"msg_contents": "> > Presumably we would include a function taking the conversion the other\n> > direction too...\n> Not sure it is really needed. We already to the translation in gram.y.\n\nRight. And we should expose that routine as mentioned. Otherwise it is\njust hidden behavior.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 08 Mar 2000 05:45:05 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BIT/BIT VARYING names (was Re: [HACKERS] Beta for 4:30AST)"
}
] |
[
{
"msg_contents": "This message was sent from Geocrawler.com by \"Adam Walczykiewicz\" <[email protected]>\nBe sure to reply to that address.\n\nHello!\nI've tried to compile PostgreSQL v.6.5.2. for SCO \nOpenServer 5.0.5.\nI used directly instructions from documentation. \nCompilation ended with \n'All of PostgreSQL is successfully made.\n Ready to install.'\n \nBut when I tried to start PostgreSQL server I've \ngot\na message :\n'IpcMemoryCreate: shmget failed \n(Invalid argument) key=5432001, size=1063936,\n permission=600\nFATAL 1: \nShmemCreate: cannot create region\n'\nIn compilation log I saw pattern:\n \n'c++ -I../../backend -I../../include -\nI../../interfaces/libpq -I../../include -\nI../../backend -dy -c pgconnection.cc -o \npgconnection.o\nStarting parse\nEntering state 0\nReading a token: Next token is 332 \n(EXTERN_LANG_STRING)\nReducing via rule 3 (line 314), -> @1\nstate stack now 0\nEntering state 2\nReducing via rule 12 (line 340), -> @2\nstate stack now 0 2\nEntering state 4\nNext token is 332 (EXTERN_LANG_STRING)\nShifting token 332 (EXTERN_LANG_STRING), Entering \nstate 34\nReducing via rule 36 (line 400), \nEXTERN_LANG_STRING -> extern_lang_string\nstate stack now 0 2 4\nEntering state 38...\"\nand so on.\n(...)\nEntering state 1\nReading a token: Now at end of input.\nReducing via rule 2 (line 300), extdefs -> \nprogram\nstate stack now 0\nEntering state 1397\nNow at end of input.\nShifting token 0 ($), Entering state 1398\nNow at end of input.\nAll of PostgreSQL is successfully made. Ready to \ninstall.'\n \nThanks for any help\n \nRegards, Adam Walczykiewicz\n([email protected])\n\nGeocrawler.com - The Knowledge Archive\n",
"msg_date": "Mon, 21 Feb 2000 07:44:33 -0800",
"msg_from": "\"Adam Walczykiewicz\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "ShmemCreate: cannot create reg"
},
{
"msg_contents": "\"Adam Walczykiewicz\" <[email protected]> writes:\n> 'IpcMemoryCreate: shmget failed \n> (Invalid argument) key=5432001, size=1063936,\n> permission=600\n> FATAL 1: \n> ShmemCreate: cannot create region\n\nYou're probably running into the kernel limit on the maximum size\nof a shared memory region. To get started, try giving -N and -B\nswitches smaller than the default values (-N 8 -B 32 should work\nunless your kernel SHMEMMAX setting is real small).\n\nIn the long run you'll probably want to increase your SHMEMMAX\nsetting so that you can run with more buffers, but I don't know\nwhere to set that on SCO.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 21 Feb 2000 11:15:40 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ShmemCreate: cannot create reg "
}
] |
[
{
"msg_contents": "subscribe\n\n===========================================================\n\nVIRGILIO MAIL - Il tuo indirizzo E-mail gratis e per sempre\nhttp://mail.virgilio.it/\n\n\nVIRGILIO - La guida italiana a Internet\nhttp://www.virgilio.it/\n",
"msg_date": "Mon, 21 Feb 2000 13:17:46 -0500",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "subscribe"
}
] |
[
{
"msg_contents": "subscribe\n\n\n\n\n\n\n\nsubscribe",
"msg_date": "Mon, 21 Feb 2000 22:50:55 +0300",
"msg_from": "\"j.j.geel\" <[email protected]>",
"msg_from_op": true,
"msg_subject": ""
}
] |
[
{
"msg_contents": "The following (rather long) query run fine for some time when executed\nthrough Micro$oft Query via Psqlodbc. We never tried it under psql.\nToday it appeared broken (some trickery on the Excel worksheet seems to\nhave fix that) and for the first time I tried to run it from the command\nline. I never managed, psql always breaks in the same way:\n\ncb=> \\i /home/alessio/check.sql\nSELECT peergroup.id, convid.name, convid.bb_cb, convid.bb_eq,\nconvdaily.date, convdaily.paritygross, convdaily.bidprice,\nconvdaily.askprice, convdaily.stockprice, tradingsignal.job,\ntradingsignal.buysell, tradingsignal.delta, tradingsignal.premium\nFROM convdaily convdaily, convid convid, peergroup peergroup,\ntradingsignal tradingsignal\nWHERE convid.id = peergroup.id AND peergroup.id = tradingsignal.id AND\nconvdaily.id = peergroup.id AND convdaily.date = tradingsignal.date AND\n((tradingsignal.job=6) AND (tradingsignal.buysell='B') AND\n(peergroup.pgid=45) AND (convdaily.date='21-02-2000') OR\n(tradingsignal.job=7) AND\n(tradingsignal.buysell='B') AND (peergroup.pgid=45) AND\n(convdaily.date='21-02-2000') OR (tradingsignal.job=8) AND\n(tradingsignal.buysell='B') AND (peergroup.pgid=45) AND\n(convdaily.date='21-02-2000') OR (tradingsignal.job=210) AND\n(tradingsignal.buysell='B') AND (peergroup.pgid=45) AND\n(convdaily.date='21-02-2000') OR (tradingsignal.job=211) AND\n(tradingsignal.buysell='B') AND (peergroup.pgid=45) AND\n(convdaily.date='21-02-2000') OR (tradingsignal.job=6) AND\n(tradingsignal.buysell='U') AND (peergroup.pgid=45) AND\n(convdaily.date='21-02-2000') OR (tradingsignal.job=7) AND\n(tradingsignal.buysell='U') AND (peergroup.pgid=45) AND\n(convdaily.date='21-02-2000') OR (tradingsignal.job=8) AND\n(tradingsignal.buysell='U') AND (peergroup.pgid=45) AND\n(convdaily.date='21-02-2000') OR (tradingsignal.job=210) AND\n(tradingsignal.buysell='U') AND (peergroup.pgid=45) AND\n(convdaily.date='21-02-2000') OR (tradingsignal.job=211) AND\n(tradingsignal.buysell='U') AND (peergroup.pgid=45) AND\n(convdaily.date='21-02-2000') OR (tradingsignal.job=212) AND\n(tradingsignal.buysell='B') AND (peergroup.pgid=45) AND\n(convdaily.date='21-02-2000') OR (tradingsignal.job=212) AND\n(tradingsignal.buysell='U') AND (peergroup.pgid=45) AND\n(convdaily.date='21-02-2000'))\nORDER BY peergroup.id, tradingsignal.job;\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nWe have lost the connection to the backend, so further processing is\nimpossible. Terminating.\n\nSome info on the tables:\n\ncb=> select count(*) from convdaily;\n count\n-------\n2260691\n(1 row)\n\ncb=> select count(*) from convid;\ncount\n-----\n 3666\n(1 row)\n\ncb=> select count(*) from peergroup;\ncount\n-----\n 730\n(1 row)\n\ncb=> select count(*) from tradingsignal;\n count\n------\n221374\n(1 row)\n\nIs it a problem on the backend or on psql? Is simply the query using a\ntoo big 4-table join? I am wondering, since postmaster.log states \n\nFATAL 1: Memory exhausted in AllocSetAlloc()\n\nWe are running PostgreSQL 6.5.2 on alphaev6-dec-osf4.0f, compiled by cc.\n\nAny idea would be greatly appreciated.\nThanks in advance.\n\n-- \nAlessio F. Bragadini\t\[email protected]\nAPL Financial Services\t\thttp://www.sevenseas.org/~alessio\nNicosia, Cyprus\t\t \tphone: +357-2-750652\n\n\"It is more complicated than you think\"\n\t\t-- The Eighth Networking Truth from RFC 1925\n",
"msg_date": "Tue, 22 Feb 2000 13:26:59 +0200",
"msg_from": "Alessio Bragadini <[email protected]>",
"msg_from_op": true,
"msg_subject": "Big join breaks psql"
},
{
"msg_contents": "Alessio Bragadini <[email protected]> writes:\n> The following (rather long) query run fine for some time when executed\n> through Micro$oft Query via Psqlodbc. We never tried it under psql.\n> Today it appeared broken (some trickery on the Excel worksheet seems to\n> have fix that) and for the first time I tried to run it from the command\n> line. I never managed, psql always breaks in the same way:\n\nTry doing\n\tset ksqo = 'on';\nin psql before you run the query. I think the ODBC driver does that for\nyou automatically.\n\nThe regular 6.5 optimizer tends to blow up when faced with large\nOR-of-ANDs WHERE clauses. 7.0 will be a lot better about it...\nbut in the meantime you need the KSQO hack.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 22 Feb 2000 11:20:24 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Big join breaks psql "
},
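KSQO is an ordinary runtime variable, so the whole psql session has this shape (the query body is abbreviated here, not elided in the original):

    SET ksqo = 'on';
    SHOW ksqo;
    SELECT ... ;      -- the big OR-of-ANDs query from above
    SET ksqo = 'off'; -- back to the default if desired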
{
"msg_contents": "Tom Lane wrote:\n\n> Try doing\n> set ksqo = 'on';\n> in psql before you run the query. I think the ODBC driver does that for\n> you automatically.\n\nThanks for the suggestion, unfortunately I get the same behaviour.\n\n-- \nAlessio F. Bragadini\t\[email protected]\nAPL Financial Services\t\thttp://www.sevenseas.org/~alessio\nNicosia, Cyprus\t\t \tphone: +357-2-750652\n\n\"It is more complicated than you think\"\n\t\t-- The Eighth Networking Truth from RFC 1925\n",
"msg_date": "Wed, 23 Feb 2000 12:43:49 +0200",
"msg_from": "Alessio Bragadini <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Big join breaks psql"
}
] |
[
{
"msg_contents": "Hi all,\n\nThe transactions should be the way to distinguish a relational database\nfrom others no-relational databases, (MySQL is the right example).\nWe are very proud of PostgreSQL transactions but seems that it doesn't\nwork in the right way.\nIt shoud be important to be sure that PostgreSQL is compliant with\nSQL92.\nI need absolutely to use transactions but until now I could not use it,\nin my case it is completely unusable.\nI tried transactions in other databases and I compared it with\nPostgreSQL and no one of which I tried has the same PostgreSQL behavior.\n\nI tried the following script:\n-------------------------------------------------------\nPostgreSQL:\n-------------------------------------------------------\nbegin transaction;\ncreate table tmp(a int);\ninsert into tmp values (1);\ninsert into tmp values (1000000000000000000000000000000000);\nERROR: pg_atoi: error reading \"1000000000000000000000000000000000\":\nNumerical result out of range\ncommit;\nselect * from tmp;\nERROR: tmp: Table does not exist.\n-------------------------------------------------------\nInterbase, Oracle,Informix,Solid,Ms-Access,DB2:\n-------------------------------------------------------\nconnect hygea.gdb;\ncreate table temp(a int);\ninsert into temp values (1);\ninsert into temp values (1000000000000000000000000000000000);\ncommit;\nselect * from temp;\n\narithmetic exception, numeric overflow, or string truncation\n\n A\n===========\n 1\n\nI would like to know what the Standard says and who is in the rigth path\nPostgreSQL or the others, considering the two examples reported below.\n\nComments?\n\n--\nJose' Soares\nBologna, Italy [email protected]\n\n\n",
"msg_date": "Tue, 22 Feb 2000 12:47:44 +0100",
"msg_from": "Jose Soares <[email protected]>",
"msg_from_op": true,
"msg_subject": "TRANSACTIONS"
},
{
"msg_contents": "\nOn 22-Feb-2000 Jose Soares wrote:\n> begin transaction;\n> create table tmp(a int);\n> insert into tmp values (1);\n> insert into tmp values (1000000000000000000000000000000000);\n> ERROR: pg_atoi: error reading \"1000000000000000000000000000000000\":\n> Numerical result out of range\n> commit;\n> select * from tmp;\n> ERROR: tmp: Table does not exist.\n> -------------------------------------------------------\n> Interbase, Oracle,Informix,Solid,Ms-Access,DB2:\n ^^^^^^^^^ \n AFAIK, MS Access have no transactions inside it,\n Informix (at least old versions I worked with) always \n perform create,drop, alter object outside transaction \n but IMHO it's not right behavior.\n \n I believe postgres's behavior more meaningful, \nbut IMHO, this example is quite far from real life. \n\n \n\n-- \nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n* There will come soft rains ...\n",
"msg_date": "Tue, 22 Feb 2000 15:13:12 +0300 (MSK)",
"msg_from": "Dmitry Samersoff <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] TRANSACTIONS"
},
{
"msg_contents": "Jose Soares <[email protected]> writes:\n> -------------------------------------------------------\n> Interbase, Oracle,Informix,Solid,Ms-Access,DB2:\n> -------------------------------------------------------\n> connect hygea.gdb;\n> create table temp(a int);\n> insert into temp values (1);\n> insert into temp values (1000000000000000000000000000000000);\n> commit;\n> select * from temp;\n\n> arithmetic exception, numeric overflow, or string truncation\n\n> A\n> ===========\n> 1\n\n> I would like to know what the Standard says and who is in the rigth path\n> PostgreSQL or the others, considering the two examples reported below.\n\nI think those other guys are unquestionably failing to conform to SQL92.\n6.10 general rule 3.a says\n\n a) If SD is exact numeric or approximate numeric, then\n\n Case:\n\n i) If there is a representation of SV in the data type TD\n that does not lose any leading significant digits after\n rounding or truncating if necessary, then TV is that rep-\n resentation. The choice of whether to round or truncate is\n implementation-defined.\n\n ii) Otherwise, an exception condition is raised: data exception-\n numeric value out of range.\n\nand 3.3.4.1 says\n\n The phrase \"an exception condition is raised:\", followed by the\n name of a condition, is used in General Rules and elsewhere to\n indicate that the execution of a statement is unsuccessful, ap-\n plication of General Rules, other than those of Subclause 12.3,\n \"<procedure>\", and Subclause 20.1, \"<direct SQL statement>\", may\n be terminated, diagnostic information is to be made available,\n and execution of the statement is to have no effect on SQL-data or\n schemas. The effect on <target specification>s and SQL descriptor\n areas of an SQL-statement that terminates with an exception condi-\n tion, unless explicitly defined by this International Standard, is\n implementation-dependent.\n\nI see no way that allowing the transaction to commit after an overflow\ncan be called consistent with the spec.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 22 Feb 2000 11:32:51 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TRANSACTIONS "
},
{
"msg_contents": "At 11:32 AM 2/22/00 -0500, Tom Lane wrote:\n\n>I see no way that allowing the transaction to commit after an overflow\n>can be called consistent with the spec.\n\nYou are absolutely right. The whole point is that either a) everything\ncommits or b) nothing commits.\n\nHaving some kinds of exceptions allow a partial commit while other\nexceptions rollback the transaction seems like a very error-prone\nprogramming environment to me.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Tue, 22 Feb 2000 10:47:16 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TRANSACTIONS "
},
{
"msg_contents": "At 12:47 PM 22-02-2000 +0100, Jose Soares wrote:\n>begin transaction;\n>create table tmp(a int);\n>insert into tmp values (1);\n>insert into tmp values (1000000000000000000000000000000000);\n>ERROR: pg_atoi: error reading \"1000000000000000000000000000000000\":\n>Numerical result out of range\n>commit;\n>select * from tmp;\n>ERROR: tmp: Table does not exist.\n>-------------------------------------------------------\n>Interbase, Oracle,Informix,Solid,Ms-Access,DB2:\n>-------------------------------------------------------\n>connect hygea.gdb;\n>create table temp(a int);\n>insert into temp values (1);\n>insert into temp values (1000000000000000000000000000000000);\n>commit;\n>select * from temp;\n>\n>arithmetic exception, numeric overflow, or string truncation\n>\n> A\n>===========\n> 1\n\nStuff done in a transaction cannot be committed if there is an error. So\nlooks like Postgres is right and the rest are wrong ;).\n\nAlso I believe Oracle does a commit behind your back whenever you do a\ncreate table or stuff like that. \n\nHowever I did have problems rolling back a create table in Postgres before-\nafter rolling back I could not recreate a table of the same name. I had to\nmanually unlink the table at filesystem level. Not sure if that has been\nfixed.\n\nOn a different note I wonder if there could be layers of transactions\n(without having to create two separate connections).. \n\nBegin transaction A\nTry to do transaction B\nDepending on whether B succeeds or fails we do the following stuff differently\nblahblahblah\nIf blahblablah fails then rollback the whole thingy, including nested\ntransaction B (even if \"committed\")\ncommit transaction A\n\nSounds like a headache to implement tho (performance hits etc), and\nprobably more an academic feature than anything. So I'm just wondering just\nfor the sake of wondering ;). If we go that way lots of people will have a\nnew toy to play with (to sell as well) and things will get even more\ncomplex.. <grin>.\n\nCheerio,\n\nLink.\n\n",
"msg_date": "Wed, 23 Feb 2000 11:50:26 +0800",
"msg_from": "Lincoln Yeoh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] TRANSACTIONS"
},
{
"msg_contents": "\n\nDmitry Samersoff wrote:\n\n> On 22-Feb-2000 Jose Soares wrote:\n> > begin transaction;\n> > create table tmp(a int);\n> > insert into tmp values (1);\n> > insert into tmp values (1000000000000000000000000000000000);\n> > ERROR: pg_atoi: error reading \"1000000000000000000000000000000000\":\n> > Numerical result out of range\n> > commit;\n> > select * from tmp;\n> > ERROR: tmp: Table does not exist.\n> > -------------------------------------------------------\n> > Interbase, Oracle,Informix,Solid,Ms-Access,DB2:\n> ^^^^^^^^^\n> AFAIK, MS Access have no transactions inside it,\n> Informix (at least old versions I worked with) always\n> perform create,drop, alter object outside transaction\n> but IMHO it's not right behavior.\n\nI don't know and I don't care about old software,\nI'm talking about Ms_Access97 and Informix 8.\n\n--\nJose' Soares\nBologna, Italy [email protected]\n\n\n",
"msg_date": "Wed, 23 Feb 2000 14:22:04 +0100",
"msg_from": "Jose Soares <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] TRANSACTIONS"
},
{
"msg_contents": "Sorry for my english, Tom, but the point is another, I'm talking about\ntransactions not about error messages.\nThis is only a stupid example how to abort a transaction, PostgreSQL aborts\nautomatically transactions if\nan error occurs, even an warning or a syntax error.\nI can believe that all other databases are wrong and only we (PostgreSQL) are\nright, but please try to understand me. This is not easy to believe anyway.\nI'm looking for another database with a behavior like PostgreSQL but I can't find\nit, and I tried a lot of them until now.\nDo you know some database with transactions like PostgreSQL?\n\n\nTom Lane wrote:\n\n> Jose Soares <[email protected]> writes:\n> > -------------------------------------------------------\n> > Interbase, Oracle,Informix,Solid,Ms-Access,DB2:\n> > -------------------------------------------------------\n> > connect hygea.gdb;\n> > create table temp(a int);\n> > insert into temp values (1);\n> > insert into temp values (1000000000000000000000000000000000);\n> > commit;\n> > select * from temp;\n>\n> > arithmetic exception, numeric overflow, or string truncation\n>\n> > A\n> > ===========\n> > 1\n>\n> > I would like to know what the Standard says and who is in the rigth path\n> > PostgreSQL or the others, considering the two examples reported below.\n>\n> I think those other guys are unquestionably failing to conform to SQL92.\n> 6.10 general rule 3.a says\n>\n> a) If SD is exact numeric or approximate numeric, then\n>\n> Case:\n>\n> i) If there is a representation of SV in the data type TD\n> that does not lose any leading significant digits after\n> rounding or truncating if necessary, then TV is that rep-\n> resentation. The choice of whether to round or truncate is\n> implementation-defined.\n>\n> ii) Otherwise, an exception condition is raised: data exception-\n> numeric value out of range.\n>\n> and 3.3.4.1 says\n>\n> The phrase \"an exception condition is raised:\", followed by the\n> name of a condition, is used in General Rules and elsewhere to\n> indicate that the execution of a statement is unsuccessful, ap-\n> plication of General Rules, other than those of Subclause 12.3,\n> \"<procedure>\", and Subclause 20.1, \"<direct SQL statement>\", may\n> be terminated, diagnostic information is to be made available,\n> and execution of the statement is to have no effect on SQL-data or\n> schemas. The effect on <target specification>s and SQL descriptor\n> areas of an SQL-statement that terminates with an exception condi-\n> tion, unless explicitly defined by this International Standard, is\n> implementation-dependent.\n>\n> I see no way that allowing the transaction to commit after an overflow\n> can be called consistent with the spec.\n>\n> regards, tom lane\n>\n> ************\n\n--\nJose' Soares\nBologna, Italy [email protected]\n\n\n",
"msg_date": "Wed, 23 Feb 2000 14:40:53 +0100",
"msg_from": "Jose Soares <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] Re: [HACKERS] TRANSACTIONS"
},
{
"msg_contents": "Don Baccus wrote:\n\n> At 11:32 AM 2/22/00 -0500, Tom Lane wrote:\n>\n> >I see no way that allowing the transaction to commit after an overflow\n> >can be called consistent with the spec.\n>\n> You are absolutely right. The whole point is that either a) everything\n> commits or b) nothing commits.\n>\n> Having some kinds of exceptions allow a partial commit while other\n> exceptions rollback the transaction seems like a very error-prone\n> programming environment to me.\n>\n\nIt is hard to believe all world is wrong and only we are right. Isn't it ?\n;)\n\n>\n> - Don Baccus, Portland OR <[email protected]>\n> Nature photos, on-line guides, Pacific Northwest\n> Rare Bird Alert Service and other goodies at\n> http://donb.photo.net.\n>\n> ************\n\n--\nJose' Soares\nBologna, Italy [email protected]\n\n\n",
"msg_date": "Wed, 23 Feb 2000 14:46:54 +0100",
"msg_from": "Jose Soares <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] TRANSACTIONS"
},
{
"msg_contents": "\n>At 11:32 AM 2/22/00 -0500, Tom Lane wrote:\n>\n>>I see no way that allowing the transaction to commit after an overflow\n>>can be called consistent with the spec.\n>\n>You are absolutely right. The whole point is that either a) everything\n>commits or b) nothing commits.\n>\n>Having some kinds of exceptions allow a partial commit while other\n>exceptions rollback the transaction seems like a very error-prone\n>programming environment to me.\n\nI'm not sure what Date says about this, but reading the spec I see\nwhere the other way of looking at the commit is... I'm sure I\nmissed something, but here's the relevant parts from a draft that I see:\n\n4.10.1 Checking of constraints\n When a constraint is checked other than at the end of an SQL-\n transaction, if it is not satisfied, then an exception condition\n is raised and the SQL-statement that caused the constraint to be\n checked has no effect other than entering the exception information\n into the diagnostics area. When a <commit statement> is executed,\n all constraints are effectively checked and, if any constraint\n is not satisfied, then an exception condition is raised and the\n transaction is terminated by an implicit <rollback statement>.\n\n4.28 SQL Transactions\n\tAn SQL-transaction\n is terminated by a <commit statement> or a <rollback statement>.\n If an SQL-transaction is terminated by successful execution of a\n <commit statement>, then all changes made to SQL-data or schemas by\n that SQL-transaction are made persistent and accessible to all con-\n current and subsequent SQL-transactions. If an SQL-transaction is\n terminated by a <rollback statement> or unsuccessful execution of\n a <commit statement>, then all changes made to SQL-data or schemas\n by that SQL-transaction are canceled. Committed changes cannot be\n canceled. If execution of a <commit statement> is attempted, but\n certain exception conditions are raised, it is unknown whether or\n not the changes made to SQL-data or schemas by that SQL-transaction\n are canceled or made persistent.\n\n10.6 <constraint name definition> and <constraint attributes>\n 4) When a constraint is effectively checked, if the constraint is\n not satisfied, then an exception condition is raised: integrity\n constraint violation. 
If this exception condition is raised as a\n result of executing a <commit statement>, then SQLSTATE is not\n set to integrity constraint violation, but is set to transaction\n rollback-integrity constraint violation (see the General Rules\n of Subclause 14.3, \"<commit statement>\").\n\n14.3 <commit statement>\n 5) Case:\n\n a) If any constraint is not satisfied, then any changes to SQL-\n data or schemas that were made by the current SQL-transaction\n are canceled and an exception condition is raised: transac-\n tion rollback-integrity constraint violation.\n\n b) If any other error preventing commitment of the SQL-\n transaction has occurred, then any changes to SQL-data or\n schemas that were made by the current SQL-transaction are\n canceled and an exception condition is raised: transaction\n rollback with an implementation-defined subclass value.\n\n c) Otherwise, any changes to SQL-data or schemas that were made\n by the current SQL-transaction are made accessible to all\n concurrent and subsequent SQL-transactions.\n\n--->\n Although I think that the current postgresql behavior is *better* than\nthe behavior as shown by the other databases, I think a case could be\nmade that 14.3 General Rule 5.a refers only to exceptions thrown by the\ncommit statement itself (any constraints that are checked at that time)\ngiven the section of 4.10.1 and 10.6. This wouldn't be inconsistant\nby type of exception, but would mean that immediate constraints and\ndeferred ones play by different rules for determining how a commit \nworks.\n\n I'm not entirely sure I like that behavior though. It makes the\ndatabase less responsible for being in a reasonable state. For example,\nif you've got a parent and two children, but one of the children fails\ndue to say an overflow exception, you really want to roll it all back,\nbut the database won't do that unless the overflow is checked\nat commit time (ugh!?!).\n\nStephan\n",
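To make the distinction Stephan draws concrete: under SQL92 a deferred constraint is checked only by the <commit statement>, so it is the commit itself that fails and triggers the rollback of 14.3 rule 5.a. A sketch in the standard's terms (deferrable foreign keys are not in the 6.5 backend; they arrive with the 7.0 work discussed elsewhere in this archive):

    CREATE TABLE parent (id int PRIMARY KEY);
    CREATE TABLE child (
        parent_id int REFERENCES parent (id)
            DEFERRABLE INITIALLY DEFERRED  -- checked at COMMIT, not per statement
    );

    BEGIN;
    INSERT INTO child VALUES (42);   -- no parent row yet; no error raised here
    INSERT INTO parent VALUES (42);  -- constraint is now satisfiable
    COMMIT;                          -- the check happens here and succeeds

    BEGIN;
    INSERT INTO child VALUES (99);   -- never given a parent
    COMMIT;   -- raises "transaction rollback - integrity constraint violation"
              -- and the whole transaction is canceled (14.3 rule 5.a)

An immediate constraint, by contrast, raises its exception at the offending statement, which is where the two readings of the spec part ways.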
"msg_date": "Wed, 23 Feb 2000 09:32:10 -0500",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] TRANSACTIONS "
},
{
"msg_contents": "\n>Sorry for my english, Tom, but the point is another, I'm talking\n>about transactions not about error messages.\n>\n>This is only a stupid example how to abort a transaction, PostgreSQL\n>aborts automatically transactions if an error occurs, even an warning\n>or a syntax error.\n>\n>I can believe that all other databases are wrong and only we\n>(PostgreSQL) are right, but please try to understand me. This is not\n>easy to believe anyway.\n>\n>I'm looking for another database with a behavior like PostgreSQL but\n>I can't find it, and I tried a lot of them until now.\n>\n>Do you know some database with transactions like PostgreSQL?\n\nI personally don't feel qualified to interpret the standard. But I\nwould like to pipe in a little on the issue of what is desirable.\n\nBy default, as a developer, I would be quite unhappy with the behavior\nof those other databases (allowing a commit after an insert has\nfailed). If I do a bulk copy into an existing database, and one copy\nfails, that sort of behavior could concievably render my database\nunusable with not possibility of recovery. So in that sense, from the\npoint of view of desirability I think postgres got it right.\n\nBut then I thought about if from a programming language point of\nview. Consider the following code (I use perl/DBI as an example).\n\n========================= example =========================\n\n$dbh->{AutoCommit} = 0;\n$dbh->do(\"CREATE TABLE tmp (a int unique,b int)\");\nwhile (<>){\n if (/([0-9]+) ([0-9]+)/) {\n\t$rtv = $dbh->do(\"INSERT INTO tmp VALUES ($1,$2)\");\n\tif ($rtv) {$dbh->do(\"UPDATE tmp SET b=$2 where a=$1\")};\n }\n}\n$dbh->commit;\n$dbh->disconnect;\n\n========================= end ============================\n\nThis incorporates a very common idiom within a transaction block. Of\ncourse, this fails. As far as I can tell from the preceding\ndiscussion, there is no way to \"sanitize\" the transaction once you\nhave fixed the error. IMHO, it would be EXTREMELY useful to be able to\nimplement the above transaction. But not by default.\n\nI'm not sure what a resonable syntax would be - several come to mind.\nYou could have \"SANITIZE TRANSACTION\" or \"\\unset warning\", whatever,\nthe exact syntax matters little to me. But without this sort of\ncapability, people who do programatic error checking and correction\n(which seems like a good thing) are essentially penalized because they\ncannot effectively use transactions.\n\nApologies if it is already possible to do this.\n\n-- \nKarl DeBisschop <[email protected]>\n617.832.0332 (Fax: 617.956.2696)\n\nInformation Please - your source for FREE online reference\nhttp://www.infoplease.com - Your Ultimate Fact Finder\nhttp://kids.infoplease.com - The Great Homework Helper\n\nNetsaint Plugins Development\nhttp://netsaintplug.sourceforge.net\n",
"msg_date": "Wed, 23 Feb 2000 10:16:06 -0500",
"msg_from": "Karl DeBisschop <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re: [HACKERS] TRANSACTIONS"
},
{
"msg_contents": "Karl DeBisschop wrote:\n> \n> >Sorry for my english, Tom, but the point is another, I'm talking\n> >about transactions not about error messages.\n> >\n> >This is only a stupid example how to abort a transaction, PostgreSQL\n> >aborts automatically transactions if an error occurs, even an warning\n> >or a syntax error.\n> >\n> >I can believe that all other databases are wrong and only we\n> >(PostgreSQL) are right, but please try to understand me. This is not\n> >easy to believe anyway.\n> >\n> >I'm looking for another database with a behavior like PostgreSQL but\n> >I can't find it, and I tried a lot of them until now.\n> >\n> >Do you know some database with transactions like PostgreSQL?\n> \n> I personally don't feel qualified to interpret the standard. But I\n> would like to pipe in a little on the issue of what is desirable.\n> \n> By default, as a developer, I would be quite unhappy with the behavior\n> of those other databases (allowing a commit after an insert has\n> failed). If I do a bulk copy into an existing database, and one copy\n> fails, that sort of behavior could concievably render my database\n> unusable with not possibility of recovery. So in that sense, from the\n> point of view of desirability I think postgres got it right.\n> \n> But then I thought about if from a programming language point of\n> view. Consider the following code (I use perl/DBI as an example).\n> \n> ========================= example =========================\n> \n> $dbh->{AutoCommit} = 0;\n> $dbh->do(\"CREATE TABLE tmp (a int unique,b int)\");\n> while (<>){\n> if (/([0-9]+) ([0-9]+)/) {\n> $rtv = $dbh->do(\"INSERT INTO tmp VALUES ($1,$2)\");\n> if ($rtv) {$dbh->do(\"UPDATE tmp SET b=$2 where a=$1\")};\n> }\n> }\n> $dbh->commit;\n> $dbh->disconnect;\n> \n> ========================= end ============================\n> \n> This incorporates a very common idiom within a transaction block. Of\n> course, this fails. As far as I can tell from the preceding\n> discussion, there is no way to \"sanitize\" the transaction once you\n> have fixed the error. IMHO, it would be EXTREMELY useful to be able to\n> implement the above transaction. But not by default.\n> \n> I'm not sure what a resonable syntax would be - several come to mind.\n> You could have \"SANITIZE TRANSACTION\" or \"\\unset warning\", whatever,\n> the exact syntax matters little to me. But without this sort of\n> capability, people who do programatic error checking and correction\n> (which seems like a good thing) are essentially penalized because they\n> cannot effectively use transactions.\n>\nTo continue with your example, this should work:\n\n> $dbh->{AutoCommit} = 0;\n> $dbh->do(\"CREATE TABLE tmp (a int unique,b int)\");\n> while (<>){\n> if (/([0-9]+) ([0-9]+)/) {\n> eval{$rtv = $dbh->do(\"INSERT INTO tmp VALUES ($1,$2)\")};\n> if ($@) {$dbh->do(\"UPDATE tmp SET b=$2 where a=$1\")};\n> }\n> }\n> $dbh->commit;\n> $dbh->disconnect;\n\nSadly, it does not, as far as I can tell. In fact, it seems to corrupt\nthe database to where you can't create the table tmp anymore, on my\nsystem. I certainly never get a table.\n\nWhat's the rationale behind having the database blow out eval's error\ntrapping? Can't see where letting a program recover from an error in a\nstatement compromises atomicity.\n \n> Apologies if it is already possible to do this.\n> \n\nLikewise.\n",
"msg_date": "Thu, 24 Feb 2000 12:16:04 -0600",
"msg_from": "\"Keith G. Murphy\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re: [HACKERS] TRANSACTIONS"
},
{
"msg_contents": "\nTo summarize, I stated that the following does not work with\npostgresql:\n\n> $dbh->{AutoCommit} = 0;\n> $dbh->do(\"CREATE TABLE tmp (a int unique,b int)\");\n> while (<>){\n> if (/([0-9]+) ([0-9]+)/) {\n> $rtv = $dbh->do(\"INSERT INTO tmp VALUES ($1,$2)\");\n> if ($rtv) {$dbh->do(\"UPDATE tmp SET b=$2 where a=$1\")};\n> }\n> }\n> $dbh->commit;\n> $dbh->disconnect;\n\nI further said that regardless of what the SQL standard gurus decide,\nI felt that postgresql currently gives desirable behavior - once a\ntransaction is started, it's either all or nothing. But then I\nqualified that by saying I'd like somehow to be able to \"sanitize\" the\ntransaction so that the common idiom above could be made to work.\n\n>From my examination, the difference between our two examples is\n\nOriginal:\nKD> $rtv = $dbh->do(\"INSERT INTO tmp VALUES ($1,$2)\");\n\nModified:\nKM> eval{$rtv = $dbh->do(\"INSERT INTO tmp VALUES ($1,$2)\")};\n\n>From the point of view if the DBMS, i believe these are identical - in\nboth cases the query is issued to the DMBS and the overall transaction\nbecomes \"contaminated\". And as I said before, this is exactly what\nI'd like to have happen in the default case.\n\nIt's not that eval's error trapping is blown out - it's that the\ntransaction defined by the AutoCommit cannot complete because a part\nof it cannot complete -- that's what atomicity means.\n\nAt least that's the way it looks to me. But as I started out saying,\nI don't feel qualified to interpret the standard - I might be wrong,\nplain and simple.\n\n-- \nKarl DeBisschop <[email protected]>\n617.832.0332 (Fax: 617.956.2696)\n\nInformation Please - your source for FREE online reference\nhttp://www.infoplease.com - Your Ultimate Fact Finder\nhttp://kids.infoplease.com - The Great Homework Helper\n\nNetsaint Plugins Development\nhttp://netsaintplug.sourceforge.net\n",
"msg_date": "Thu, 24 Feb 2000 14:16:05 -0500",
"msg_from": "Karl DeBisschop <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re: [HACKERS] TRANSACTIONS"
},
{
"msg_contents": "At 02:16 PM 24-02-2000 -0500, Karl DeBisschop wrote:\n>\n>To summarize, I stated that the following does not work with\n>postgresql:\n>\n>> $dbh->{AutoCommit} = 0;\n>> $dbh->do(\"CREATE TABLE tmp (a int unique,b int)\");\n>> $rtv = $dbh->do(\"INSERT INTO tmp VALUES ($1,$2)\");\n>> if ($rtv) {$dbh->do(\"UPDATE tmp SET b=$2 where a=$1\")};\n>> $dbh->commit;\n>> $dbh->disconnect;\n>\n>It's not that eval's error trapping is blown out - it's that the\n>transaction defined by the AutoCommit cannot complete because a part\n>of it cannot complete -- that's what atomicity means.\n\nMaybe I don't understand the situation. But it doesn't seem to be a big\nproblem.\n\nWith postgres you have ensure that your application filters the data\nproperly before sticking it into the database. Then if the insert fails,\nit's probably a serious database problem and in that case it's best that\nthe whole transaction is aborted anyway.\n\nIt indeed is a problem if the database engine is expected to parse the\ndata. For example - if you send in a date value, and the database engine\nchokes on it. With the nonpostgresql behaviour you can still insert a NULL\ninstead for \"Bad date/ Unknown date\".\n\nBut from the security point of view it is best to reduce the amount of\nparsing done by the database engine. Make sure the app sanitises and\nmassages everything so that the database has no problems with the data. It\ncan be a pain sometimes to figure out what the database can take (which is\nwhy I've been asking for the limits for Postgresql fields and such- so the\napp can keep everything within bounds or grumble to the user/vandal). Once\neverything is set up nicely, if the database grumbles then the app screwed\nup somehow (the vandal got through) and it's best to rollback everything\n(we're lucky if the database just grumbled).\n\nCheerio,\n\nLink.\n\n",
"msg_date": "Fri, 25 Feb 2000 14:41:32 +0800",
"msg_from": "Lincoln Yeoh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re: [HACKERS] TRANSACTIONS"
},
{
"msg_contents": "\n>>To summarize, I stated that the following does not work with\n>>postgresql:\n>>\n>>> $dbh->{AutoCommit} = 0;\n>>> $dbh->do(\"CREATE TABLE tmp (a int unique,b int)\");\n>>> $rtv = $dbh->do(\"INSERT INTO tmp VALUES ($1,$2)\");\n>>> if ($rtv) {$dbh->do(\"UPDATE tmp SET b=$2 where a=$1\")};\n>>> $dbh->commit;\n>>> $dbh->disconnect;\n>>\n>>It's not that eval's error trapping is blown out - it's that the\n>>transaction defined by the AutoCommit cannot complete because a part\n>>of it cannot complete -- that's what atomicity means.\n>\n>Maybe I don't understand the situation. But it doesn't seem to be a big\n>problem.\n>\n>With postgres you have ensure that your application filters the data\n>properly before sticking it into the database. Then if the insert fails,\n>it's probably a serious database problem and in that case it's best that\n>the whole transaction is aborted anyway.\n\nThis reason this idiom is used has nothing to do with validation. I\nagree that the application has the resopnsibility to cehck for valid\ndata.\n\nThe usefulness of the idion is that in a mutli-user environment, this\nis a basic way to update data that may or may not already have a key\nin the table. You can't do a \"SELECT COUNT\" because in the time\nbetween when you SELECT and INSERT (assuming the key is not already\nthere) someone may have done a separate insert. The only other way I\nknow to do this is to lock the entire table against INSERTs which has\nobvious performance effects.\n\n-- \nKarl DeBisschop <[email protected]>\n617.832.0332 (Fax: 617.956.2696)\n\nInformation Please - your source for FREE online reference\nhttp://www.infoplease.com - Your Ultimate Fact Finder\nhttp://kids.infoplease.com - The Great Homework Helper\n\nNetsaint Plugins Development\nhttp://netsaintplug.sourceforge.net\n",
"msg_date": "Fri, 25 Feb 2000 14:26:48 -0500",
"msg_from": "Karl DeBisschop <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re: [HACKERS] TRANSACTIONS"
},
{
"msg_contents": "Karl DeBisschop wrote:\n> \n> To summarize, I stated that the following does not work with\n> postgresql:\n> \n> > $dbh->{AutoCommit} = 0;\n> > $dbh->do(\"CREATE TABLE tmp (a int unique,b int)\");\n> > while (<>){\n> > if (/([0-9]+) ([0-9]+)/) {\n> > $rtv = $dbh->do(\"INSERT INTO tmp VALUES ($1,$2)\");\n> > if ($rtv) {$dbh->do(\"UPDATE tmp SET b=$2 where a=$1\")};\n> > }\n> > }\n> > $dbh->commit;\n> > $dbh->disconnect;\n> \n> I further said that regardless of what the SQL standard gurus decide,\n> I felt that postgresql currently gives desirable behavior - once a\n> transaction is started, it's either all or nothing. But then I\n> qualified that by saying I'd like somehow to be able to \"sanitize\" the\n> transaction so that the common idiom above could be made to work.\n> \n> >From my examination, the difference between our two examples is\n> \n> Original:\n> KD> $rtv = $dbh->do(\"INSERT INTO tmp VALUES ($1,$2)\");\n> \n> Modified:\n> KM> eval{$rtv = $dbh->do(\"INSERT INTO tmp VALUES ($1,$2)\")};\n> \n> >From the point of view if the DBMS, i believe these are identical - in\n> both cases the query is issued to the DMBS and the overall transaction\n> becomes \"contaminated\". And as I said before, this is exactly what\n> I'd like to have happen in the default case.\n> \n> It's not that eval's error trapping is blown out - it's that the\n> transaction defined by the AutoCommit cannot complete because a part\n> of it cannot complete -- that's what atomicity means.\n\nI don't have the SQL92 standard with me, so I can't speak to how it\ndefines atomicity. Seems to me it's a means to an end, though, the end\nbeing that all of the statements in the sequence are performed, or\nnone. But if the program traps an error, then does something to\nrecover, you could argue that it's changed the sequence.\n\nAs long as the program has to explicitly Commit, why not? It seems\ndesirable to me that if one statement causes an error, it doesn't affect\nthe database, and the error is returned to the client. If the client\nhas RaiseError on, which he should, and doesn't do anything to\nexplicitly trap, it's going to blow out the program and thus the\ntransaction should be rolled back, which is a good thing. But if he\ndoes explicitly trap, as I do above, why not let him stay within the\ntransaction, since the statement in error has not done anything?\n\nI agree that do get Postgresql to do this might be a lot to expect\n(nested transactions are required, I guess). I'm just not sure that\nit's a *wrong*, or non-conformant, thing to expect.\n\n(By the way, I know VB/Access does it this way. My production code,\nhowever, never takes advantage of this, to my knowledge.)\n\nAddressing Lincoln Yeoh's point in another post, to take the approach\nthat all your data should conform to all database requirements before\nyou enter a transaction seems to me to lead to redundancy: the program\ncode checks and the database checks. Should you have to synchronize all\nrelevant code every time a field requirement is changed? \n\nI agree that to simply continue without error and let the program\nblindly commit, which some folks claim other databases do, is wrong and\nscrews atomicity.\n\nWhat is also wrong is to allow you to do a Commit when the database is\nin an error state, so that you have (in this case) a table in limbo that\ncan't be created or seen, behavior that Jose Soares and I both saw with\nPostgresql (6.5.1 in my case). 
Why shouldn't Postgresql just implicitly\nRollback at this point, since you can't do anything (constructive) to\nthe database within the transaction anyway?\n",
"msg_date": "Fri, 25 Feb 2000 14:18:24 -0600",
"msg_from": "\"Keith G. Murphy\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re: [HACKERS] TRANSACTIONS"
},
{
"msg_contents": "\n\nOn Fri, 25 Feb 2000, Karl DeBisschop wrote:\n\n> \n> >>To summarize, I stated that the following does not work with\n> >>postgresql:\n> >>\n> >>> $dbh->{AutoCommit} = 0;\n> >>> $dbh->do(\"CREATE TABLE tmp (a int unique,b int)\");\n> >>> $rtv = $dbh->do(\"INSERT INTO tmp VALUES ($1,$2)\");\n> >>> if ($rtv) {$dbh->do(\"UPDATE tmp SET b=$2 where a=$1\")};\n> >>> $dbh->commit;\n> >>> $dbh->disconnect;\n> >>\n> \n> The usefulness of the idion is that in a mutli-user environment, this\n> is a basic way to update data that may or may not already have a key\n> in the table. You can't do a \"SELECT COUNT\" because in the time\n> between when you SELECT and INSERT (assuming the key is not already\n> there) someone may have done a separate insert. The only other way I\n> know to do this is to lock the entire table against INSERTs which has\n> obvious performance effects.\nsounds right, but ;-) why you use the transaction in the first place? \n\n",
"msg_date": "Fri, 25 Feb 2000 14:49:19 -0600 (CST)",
"msg_from": "<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re: [HACKERS] TRANSACTIONS"
},
{
"msg_contents": "\n> From: <[email protected]>\n> On Fri, 25 Feb 2000, Karl DeBisschop wrote:\n>\n> > \n> > >>To summarize, I stated that the following does not work with\n> > >>postgresql:\n> > >>\n> > >>> $dbh->{AutoCommit} = 0;\n> > >>> $dbh->do(\"CREATE TABLE tmp (a int unique,b int)\");\n> > >>> $rtv = $dbh->do(\"INSERT INTO tmp VALUES ($1,$2)\");\n> > >>> if ($rtv) {$dbh->do(\"UPDATE tmp SET b=$2 where a=$1\")};\n> > >>> $dbh->commit;\n> > >>> $dbh->disconnect;\n> > >>\n> > \n> > The usefulness of the idion is that in a mutli-user environment, this\n> > is a basic way to update data that may or may not already have a key\n> > in the table. You can't do a \"SELECT COUNT\" because in the time\n> > between when you SELECT and INSERT (assuming the key is not already\n> > there) someone may have done a separate insert. The only other way I\n> > know to do this is to lock the entire table against INSERTs which has\n> > obvious performance effects.\n\n> sounds right, but ;-) why you use the transaction in the first place? \n\nRememeber that this is just an example to illustrate what sort of\nbehaviour one user would find useful in tranasctions, so it is a\nlittle simplistic. Not overly simplistic, though, I think.\n\nI'd want a transaction because I'm doing a bulk insert into this live\ndatabase - say syncing in a bunch of data from a slave server while\nthe master is still running. If one (or more) insert(s) fail, I want\nto revert back to the starting pint so I can fix the cause of the\nfailed insert and try again with the database in a known state.\n(there may, for instance, be relationships beteewn the b field such\nthat if only part of the bulk insert suceeds, the database is rendered\ncorrupt).\n\n-- \nKarl DeBisschop <[email protected]>\n617.832.0332 (Fax: 617.956.2696)\n\nInformation Please - your source for FREE online reference\nhttp://www.infoplease.com - Your Ultimate Fact Finder\nhttp://kids.infoplease.com - The Great Homework Helper\n\nNetsaint Plugins Development\nhttp://netsaintplug.sourceforge.net\n",
"msg_date": "Fri, 25 Feb 2000 16:58:42 -0500",
"msg_from": "Karl DeBisschop <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re: [HACKERS] TRANSACTIONS"
},
{
"msg_contents": "\n> From: \"Keith G. Murphy\" <[email protected]>\n>\n> Karl DeBisschop wrote:\n> > \n> > To summarize, I stated that the following does not work with\n> > postgresql:\n> > \n> > > $dbh->{AutoCommit} = 0;\n> > > $dbh->do(\"CREATE TABLE tmp (a int unique,b int)\");\n> > > while (<>){\n> > > if (/([0-9]+) ([0-9]+)/) {\n> > > $rtv = $dbh->do(\"INSERT INTO tmp VALUES ($1,$2)\");\n> > > if ($rtv) {$dbh->do(\"UPDATE tmp SET b=$2 where a=$1\")};\n> > > }\n> > > }\n> > > $dbh->commit;\n> > > $dbh->disconnect;\n> > \n> > I further said that regardless of what the SQL standard gurus decide,\n> > I felt that postgresql currently gives desirable behavior - once a\n> > transaction is started, it's either all or nothing. But then I\n> > qualified that by saying I'd like somehow to be able to \"sanitize\" the\n> > transaction so that the common idiom above could be made to work.\n> > \n> > >From my examination, the difference between our two examples is\n> > \n> > Original:\n> > KD> $rtv = $dbh->do(\"INSERT INTO tmp VALUES ($1,$2)\");\n> > \n> > Modified:\n> > KM> eval{$rtv = $dbh->do(\"INSERT INTO tmp VALUES ($1,$2)\")};\n> > \n> > >From the point of view if the DBMS, i believe these are identical - in\n> > both cases the query is issued to the DMBS and the overall transaction\n> > becomes \"contaminated\". And as I said before, this is exactly what\n> > I'd like to have happen in the default case.\n> > \n> > It's not that eval's error trapping is blown out - it's that the\n> > transaction defined by the AutoCommit cannot complete because a part\n> > of it cannot complete -- that's what atomicity means.\n>\n> I don't have the SQL92 standard with me, so I can't speak to how it\n> defines atomicity. Seems to me it's a means to an end, though, the end\n> being that all of the statements in the sequence are performed, or\n> none. But if the program traps an error, then does something to\n> recover, you could argue that it's changed the sequence.\n\nI agree\n\n> As long as the program has to explicitly Commit, why not? It seems\n> desirable to me that if one statement causes an error, it doesn't affect\n> the database, and the error is returned to the client. If the client\n> has RaiseError on, which he should, and doesn't do anything to\n> explicitly trap, it's going to blow out the program and thus the\n> transaction should be rolled back, which is a good thing. But if he\n> does explicitly trap, as I do above, why not let him stay within the\n> transaction, since the statement in error has not done anything?\n\nIt is not sufficient that the statement in error has done nothing -\nthe postmaster in the general case cannot know what relationships\nshould exist between the non-key data. It is quite possible that not\nhaving a record inserted could make the database fundamentally\nunusable. Of course, in my original example and in yours, error is\ntrapped and the situation is (hopefully) fixed by the subsequent\nupdate. Thus, in my post I suggested that postgres could provide some\nsort of mechanism to explicitly 'sanitize' the transaction and allow\nit to commit.\n\nIn otherwords, I think we are basically proposing the same thing.\n\n> I agree that do get Postgresql to do this might be a lot to expect\n> (nested transactions are required, I guess). I'm just not sure that\n> it's a *wrong*, or non-conformant, thing to expect.\n>\n> (By the way, I know VB/Access does it this way. 
My production code,\n> however, never takes advantage of this, to my knowledge.)\n\n>From what I gather, extending postgresql this way is planned anyway -\nit may not happen tomorrow, but notheing in here seems like a very new\nconcept to the development team.\n\n> Addressing Lincoln Yeoh's point in another post, to take the approach\n> that all your data should conform to all database requirements before\n> you enter a transaction seems to me to lead to redundancy: the program\n> code checks and the database checks. Should you have to synchronize all\n> relevant code every time a field requirement is changed? \n>\n> I agree that to simply continue without error and let the program\n> blindly commit, which some folks claim other databases do, is wrong and\n> screws atomicity.\n>\n> What is also wrong is to allow you to do a Commit when the database is\n> in an error state, so that you have (in this case) a table in limbo that\n> can't be created or seen, behavior that Jose Soares and I both saw with\n> Postgresql (6.5.1 in my case). Why shouldn't Postgresql just implicitly\n> Rollback at this point, since you can't do anything (constructive) to\n> the database within the transaction anyway?\n\nYes, the table in limbo is certainly a problem/bug. But even this\nbug, in my estimation, is better than allowing a transaction with an\nerror in it to commit without explicitly clearing the error status.\nThe bug is a pain in the neck, but it apparently has been fixed in\n6.5.3 -- so why not upgrade, no dumps are required. But even with the\nbug, it can save you from unknowingly foisting inaccurate data on your\ncustomers which is still a good thing.\n\nAs for whether postgress should implicitly roll back, I don't think it\nshould - remember that the frontend, which is very likely operating in\nrobot mode, is still firing queries at the database. An inpmlicit\nrollback means starting a new transaction. And that could lead to a\ndata integrity problem as well.\n\n-- \nKarl DeBisschop <[email protected]>\n617.832.0332 (Fax: 617.956.2696)\n\nInformation Please - your source for FREE online reference\nhttp://www.infoplease.com - Your Ultimate Fact Finder\nhttp://kids.infoplease.com - The Great Homework Helper\n\nNetsaint Plugins Development\nhttp://netsaintplug.sourceforge.net\n",
"msg_date": "Fri, 25 Feb 2000 17:24:57 -0500",
"msg_from": "Karl DeBisschop <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re: [HACKERS] TRANSACTIONS"
},
{
"msg_contents": "\n\nOn Fri, 25 Feb 2000, Karl DeBisschop wrote:\n\n> \n> > From: <[email protected]>\n> > On Fri, 25 Feb 2000, Karl DeBisschop wrote:\n> >\n> > > \n> > > >>To summarize, I stated that the following does not work with\n> > > >>postgresql:\n> > > >>\n> > > >>> $dbh->{AutoCommit} = 0;\n> > > >>> $dbh->do(\"CREATE TABLE tmp (a int unique,b int)\");\n> > > >>> $rtv = $dbh->do(\"INSERT INTO tmp VALUES ($1,$2)\");\n> > > >>> if ($rtv) {$dbh->do(\"UPDATE tmp SET b=$2 where a=$1\")};\n> > > >>> $dbh->commit;\n> > > >>> $dbh->disconnect;\n> > > >>\n> > > \n> > > The usefulness of the idion is that in a mutli-user environment, this\n> > > is a basic way to update data that may or may not already have a key\n> > > in the table. You can't do a \"SELECT COUNT\" because in the time\n> > > between when you SELECT and INSERT (assuming the key is not already\n> > > there) someone may have done a separate insert. The only other way I\n> > > know to do this is to lock the entire table against INSERTs which has\n> > > obvious performance effects.\n> \n> > sounds right, but ;-) why you use the transaction in the first place? \n> \n> Rememeber that this is just an example to illustrate what sort of\n> behaviour one user would find useful in tranasctions, so it is a\n> little simplistic. Not overly simplistic, though, I think.\n> \n> I'd want a transaction because I'm doing a bulk insert into this live\n> database - say syncing in a bunch of data from a slave server while\n> the master is still running. If one (or more) insert(s) fail, I want\n> to revert back to the starting pint so I can fix the cause of the\n> failed insert and try again with the database in a known state.\n> (there may, for instance, be relationships beteewn the b field such\n> that if only part of the bulk insert suceeds, the database is rendered\n> corrupt).\n> \nthanks. I'm on your side now ;-) -- it is a useful senario. \nthe question are: 1) can nested transaction be typically interpreted \nto handle this situation? If is is, then, it should be handled by that\n\"advanced feature\", not plain transaction ;\n 2) on the other hand, can sql92's (plain) transaction be interpreted \nin the way that above behavior is legitimate?\n\n",
"msg_date": "Fri, 25 Feb 2000 17:40:09 -0600 (CST)",
"msg_from": "<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re: [HACKERS] TRANSACTIONS"
},
{
"msg_contents": "[email protected] wrote:\n> \n> On Fri, 25 Feb 2000, Karl DeBisschop wrote:\n> \n> >\n> > > From: <[email protected]>\n> > > On Fri, 25 Feb 2000, Karl DeBisschop wrote:\n> > >\n> > > >\n> > > > >>To summarize, I stated that the following does not work with\n> > > > >>postgresql:\n> > > > >>\n> > > > >>> $dbh->{AutoCommit} = 0;\n> > > > >>> $dbh->do(\"CREATE TABLE tmp (a int unique,b int)\");\n> > > > >>> $rtv = $dbh->do(\"INSERT INTO tmp VALUES ($1,$2)\");\n> > > > >>> if ($rtv) {$dbh->do(\"UPDATE tmp SET b=$2 where a=$1\")};\n> > > > >>> $dbh->commit;\n> > > > >>> $dbh->disconnect;\n> > > > >>\n> > > >\n> > > > The usefulness of the idion is that in a mutli-user environment, this\n> > > > is a basic way to update data that may or may not already have a key\n> > > > in the table. You can't do a \"SELECT COUNT\" because in the time\n> > > > between when you SELECT and INSERT (assuming the key is not already\n> > > > there) someone may have done a separate insert. The only other way I\n> > > > know to do this is to lock the entire table against INSERTs which has\n> > > > obvious performance effects.\n> >\n> > > sounds right, but ;-) why you use the transaction in the first place?\n> >\n> > Rememeber that this is just an example to illustrate what sort of\n> > behaviour one user would find useful in tranasctions, so it is a\n> > little simplistic. Not overly simplistic, though, I think.\n> >\n> > I'd want a transaction because I'm doing a bulk insert into this live\n> > database - say syncing in a bunch of data from a slave server while\n> > the master is still running. If one (or more) insert(s) fail, I want\n> > to revert back to the starting pint so I can fix the cause of the\n> > failed insert and try again with the database in a known state.\n> > (there may, for instance, be relationships beteewn the b field such\n> > that if only part of the bulk insert suceeds, the database is rendered\n> > corrupt).\n> >\n> thanks. I'm on your side now ;-) -- it is a useful senario.\n> the question are: 1) can nested transaction be typically interpreted\n> to handle this situation? If is is, then, it should be handled by that\n> \"advanced feature\", not plain transaction ;\n\nI guess like this (got rid of AutoCommit, because that looks funny\nnested):\n\n$dbh->RaiseError = 1;\n$dbh->StartTransaction;\neval {\n $dbh->do(\"CREATE TABLE tmp (a int unique,b int)\");\n while (blahblahblah) {\n $dbh->StartTransaction;\n eval {\n $dbh->do(\"INSERT INTO tmp VALUES ($1,$2)\");\n };\n if ($@) {\n\t$dbh->Rollback;\n \t{$dbh->do(\"UPDATE tmp SET b=$2 where a=$1\")};\n } else {\n\t$dbh->Commit;\n }\n }\n}\nif ($@) {\n\t$dbh->rollback;\n} else {\n\t$dbh->commit;\n}\n$dbh->disconnect;\n\nI.e., try the INSERT within the inner transaction; if it fails, roll it\nback and do the UPDATE; if that fails, blow out the whole outer\ntransaction.\n\nYou could do the whole thing checking a return value as in the original\nexample, but the eval and RaiseError are canonical, according the the\ndocs.\n\n> 2) on the other hand, can sql92's (plain) transaction be interpreted\n> in the way that above behavior is legitimate?\n>\nWell, I'm not sure of the necessity of nested transactions in the case\nof continuing a transaction after a single-row insert has failed, but\nthat's implementation details I'm not familiar with... i.e., I'm not\nhaving to code the danged thing!\n",
"msg_date": "Mon, 28 Feb 2000 10:44:29 -0600",
"msg_from": "\"Keith G. Murphy\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re: [HACKERS] TRANSACTIONS"
}
] |
[
{
"msg_contents": "Can someone advise, please, how to deal with this problem in 6.5.3?\n\nThis is the second message, after debugging was enabled in the backend:\n\n------- Forwarded Message\n\nDate: Tue, 22 Feb 2000 15:28:44 +0100\nFrom: \"=?iso-8859-2?B?VmxhZGlt7XIgQmVuZbk=?=\" <[email protected]>\nTo: \"Oliver Elphick\" <[email protected]>, <[email protected]>\ncc: \"=?iso-8859-2?Q?M=FChlpachr_Michal?=\" <[email protected]>\nSubject: Re: Bug#58689: Problem: database connection termination while processi\n\t ng select command\n\nHi,\n\n I tried this and this problem is reported in the log here:\n\nquery: select comm_type,name,tot_bytes,tot_packets from\nflow_sums_days_send_200002_view where day='2000-02-21' and name not l\nProcessQuery\nFATAL 1: Memory exhausted in AllocSetAlloc()\n\n This message is invoked by unsuccess malloc() operation :-(\n\n\n Well, I tried to use instead of simple select command this:\n\ncreate temporary table xx as select ...\ncreate table yy as select\nselect ... into zz from ...\n\n I expected that Postgres will use less memory and that he will\ncontinously send data to new tables but not. This hasn't any effect.\n\n Command \"top\" wrote this report while my select ran:\n\nCPU states: 98.6% user, 1.3% system, 0.0% nice, 0.0% idle\nMem: 127256K av, 124316K used, 2940K free, 29812K shrd, 2620K buff\nSwap: 128516K av, 51036K used, 77480K free 7560K cached\n\n PID USER PRI NI SIZE RSS SHARE STAT LIB %CPU %MEM TIME COMMAND\n2942 postgres 20 0 141M 99M 17348 R 0 99.0 80.4 1:22 postmaster\n\n Thank You, V. Benes\n\n- -----P�vodn� zpr�va-----\nOd: Oliver Elphick <[email protected]>\nKomu: Vladim�r Bene� <[email protected]>; [email protected]\n<[email protected]>\nDatum: 22. �nora 2000 15:04\nP�edm�t: Re: Bug#58689: Problem: database connection termination while\nprocessing select command\n\n\n>\"=?iso-8859-2?B?VmxhZGlt7XIgQmVuZbk=?=\" wrote:\n> >Package: postgresql\n> >Version: 6.5.3-11\n> >\n> >\n> >\n> > Postgres reports error:\n> >\"pg.error pqReadData() -- backend closed the channel unexpectedly. This\n> >probably means the backend terminated abnormally before or while\nprocessing\n> >the request.\"\n>\n>\n>Please turn on debugging in the backend by editing\n>/etc/postgresql/postmaster.init and setting the value of PGDEBUG to 2; you\n>should also turn on PGECHO.\n>\n>Then restart the postmaster (/etc/init.d/postgresql restart), rerun\n>the query and examine the end of the log to see what error is reported\n>there.\n\n------- End of Forwarded Message\n\n\nand this was the original message:\n\n------- Forwarded Message\n\nDate: Tue, 22 Feb 2000 12:07:42 +0100\nFrom: \"=?iso-8859-2?B?VmxhZGlt7XIgQmVuZbk=?=\" <[email protected]>\nTo: <[email protected]>\ncc: \"=?iso-8859-2?Q?M=FChlpachr_Michal?=\" <[email protected]>\nSubject: Bug#58689: Problem: database connection termination while processing s\n\t elect command\n\nPackage: postgresql\nVersion: 6.5.3-11\n\n\n\n Postgres reports error:\n\"pg.error pqReadData() -- backend closed the channel unexpectedly. 
This\nprobably means the backend terminated abnormally before or while processing\nthe request.\"\n\n This error is produced by processing of this correct instruction:\nselect comm_type,name,tot_bytes,tot_packets\nfrom flow_sums_days_send_200002_view\nwhere day='2000-02-21' and name not like '@%'\nunion all\nselect comm_type,name,tot_bytes,tot_packets\nfrom flow_sums_days_receive_200002_view\nwhere day='2000-02-21' and name not like '@%'\n\n Both flow_sums_days_send_200002_view and\nflow_sums_days_receive_200002_view are views upon table with very many rows\n(today about 3 000 000). I guess limit of this data count about 10 000 000\nrows.\n\n This operation can run arbitrary long - never mind. The program\nproviding this select (one times per day) inserts every 5 minut new data\ninto this table.\n\n I tried stop this program (daemon) and then I ran this select from psql\n(with clause \"limit 10\"). It was success (no database session termination).\n\n I'am sure that any TIMEOUT expired.\n\n Perhaps cause of this problem is \"commuting\" of insert commands at time\nwhen this select is executed. I can remove clause \"union all\" and the\nprogram can perform sleep instruction before select processing BUT then this\nproblem will occures later again.\n\n My environment at /etc/postgresql/postmaster.init:\nPGBACKENDCOUNT=64\nPGBUFFERS=2048\nPGSORTMEM=262144\nKERNEL_FILE_MAX=1032\n\n\n\n Thank You very much, V. Benes\n\n___________________________________________________\n Ing. Vladimir Benes, pvt.net\n PVT, a.s., OZ Chomutov\n e-mail: [email protected], [email protected]\n\n\n------- End of Forwarded Message\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"The LORD bless thee, and keep thee; The LORD make his\n face shine upon thee, and be gracious unto thee; The \n LORD lift up his countenance upon thee, and give thee \n peace.\" Numbers 6:24-26 \n\n\n",
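One mitigation worth trying while the cause is hunted down is to fetch the result in slices through a cursor rather than as one multi-million-row SELECT; this is only a sketch of that idea, and if the backend itself leaks per-tuple memory during expression evaluation (as the next message suggests) the cursor alone may not save it, in which case splitting the query by date range is another option:

    BEGIN;
    DECLARE c CURSOR FOR
        SELECT comm_type, name, tot_bytes, tot_packets
        FROM flow_sums_days_send_200002_view
        WHERE day = '2000-02-21' AND name NOT LIKE '@%';
    FETCH 1000 IN c;   -- repeat until fewer than 1000 rows come back
    CLOSE c;
    END;

At minimum this keeps libpq from buffering the whole result on the client side, which matters on a machine that is already deep into swap.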
"msg_date": "Tue, 22 Feb 2000 15:00:29 +0000",
"msg_from": "\"Oliver Elphick\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Out of memory problem (forwarded bug report)"
},
{
"msg_contents": "\"Oliver Elphick\" <[email protected]> writes:\n> Can someone advise, please, how to deal with this problem in 6.5.3?\n\nMy guess is that the cause is memory leaks during expression evaluation;\nbut without seeing the complete view definitions and underlying table\ndefinitions, it's impossible to know what processing is being invoked\nby this query...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 22 Feb 2000 12:06:12 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Out of memory problem (forwarded bug report) "
}
] |
[
{
"msg_contents": "\nFeb 22, 2000\n\nToday, after 8 months of intensive development since our last release,\nthe PostgreSQL Global Development Group is proud to announce the first\nBeta release of PostgreSQL 7.0. \n\nAvailable at ftp://ftp.postgresql.org/pub/postgresql-7.0beta1.tar.gz and\nmirror sites around the world, this represents our first public view of\nthe upcoming release, schedualed for April 1st.\n\nPlease report any bugs to [email protected] ...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n\n",
"msg_date": "Tue, 22 Feb 2000 11:24:26 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL v7.0 goes Beta ..."
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> Available at ftp://ftp.postgresql.org/pub/postgresql-7.0beta1.tar.gz and\n> mirror sites around the world, this represents our first public view of\n> the upcoming release, schedualed for April 1st.\n\nHmm, is it bad luck to plan a release for April Fools' Day?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 22 Feb 2000 11:56:41 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL v7.0 goes Beta ... "
},
{
"msg_contents": "On Tue, 22 Feb 2000, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > Available at ftp://ftp.postgresql.org/pub/postgresql-7.0beta1.tar.gz and\n> > mirror sites around the world, this represents our first public view of\n> > the upcoming release, schedualed for April 1st.\n> \n> Hmm, is it bad luck to plan a release for April Fools' Day?\n\nHrmmmm...since we've yet to actually release when we state we will, maybe\nwe'll release on time? :)\n\n\n",
"msg_date": "Tue, 22 Feb 2000 13:16:12 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] PostgreSQL v7.0 goes Beta ... "
},
{
"msg_contents": "I am working on it now. Give me a day.\n\n[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> Marvelous.\n> \n> May I ask what is new about v. 7.0?\n> \n> Duncan C. Kinder\n> [email protected]\n> \n> \n> ----- Original Message -----\n> From: The Hermit Hacker <[email protected]>\n> To: <[email protected]>\n> Cc: <[email protected]>\n> Sent: Tuesday, February 22, 2000 7:24 AM\n> Subject: [ANNOUNCE] PostgreSQL v7.0 goes Beta ...\n> \n> \n> >\n> > Feb 22, 2000\n> >\n> > Today, after 8 months of intensive development since our last release,\n> > the PostgreSQL Global Development Group is proud to announce the first\n> > Beta release of PostgreSQL 7.0.\n> >\n> > Available at ftp://ftp.postgresql.org/pub/postgresql-7.0beta1.tar.gz and\n> > mirror sites around the world, this represents our first public view of\n> > the upcoming release, schedualed for April 1st.\n> >\n> > Please report any bugs to [email protected] ...\n> >\n> > Marc G. Fournier ICQ#7615664 IRC Nick:\n> Scrappy\n> > Systems Administrator @ hub.org\n> > primary: [email protected] secondary:\n> scrappy@{freebsd|postgresql}.org\n> >\n> >\n> >\n> > ************\n> >\n> \n> \n> ************\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 22 Feb 2000 12:46:38 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [ANNOUNCE] PostgreSQL v7.0 goes Beta ..."
},
{
"msg_contents": "At 11:56 AM 2/22/00 -0500, Tom Lane wrote:\n>The Hermit Hacker <[email protected]> writes:\n>> Available at ftp://ftp.postgresql.org/pub/postgresql-7.0beta1.tar.gz and\n>> mirror sites around the world, this represents our first public view of\n>> the upcoming release, schedualed for April 1st.\n>\n>Hmm, is it bad luck to plan a release for April Fools' Day?\n\nIt has always been my favorite release date. We plan to release\nthe first full PG port of Ars Digita's web toolkit, including ecommmerce,\non April Fools Day. I didn't realize that was the offical\nPG7.0 release date but since we plan to use PG7.0 as our supported\nplatform I like the date even more!\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Tue, 22 Feb 2000 10:49:54 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL v7.0 goes Beta ... "
},
{
"msg_contents": "Marvelous.\n\nMay I ask what is new about v. 7.0?\n\nDuncan C. Kinder\[email protected]\n\n\n----- Original Message -----\nFrom: The Hermit Hacker <[email protected]>\nTo: <[email protected]>\nCc: <[email protected]>\nSent: Tuesday, February 22, 2000 7:24 AM\nSubject: [ANNOUNCE] PostgreSQL v7.0 goes Beta ...\n\n\n>\n> Feb 22, 2000\n>\n> Today, after 8 months of intensive development since our last release,\n> the PostgreSQL Global Development Group is proud to announce the first\n> Beta release of PostgreSQL 7.0.\n>\n> Available at ftp://ftp.postgresql.org/pub/postgresql-7.0beta1.tar.gz and\n> mirror sites around the world, this represents our first public view of\n> the upcoming release, schedualed for April 1st.\n>\n> Please report any bugs to [email protected] ...\n>\n> Marc G. Fournier ICQ#7615664 IRC Nick:\nScrappy\n> Systems Administrator @ hub.org\n> primary: [email protected] secondary:\nscrappy@{freebsd|postgresql}.org\n>\n>\n>\n> ************\n>\n\n",
"msg_date": "Tue, 22 Feb 2000 11:36:27 -0800",
"msg_from": "\"Duncan Kinder\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [ANNOUNCE] PostgreSQL v7.0 goes Beta ..."
},
{
"msg_contents": "Tom Lane wrote:\n> \n> The Hermit Hacker <[email protected]> writes:\n> > Available at ftp://ftp.postgresql.org/pub/postgresql-7.0beta1.tar.gz and\n> > mirror sites around the world, this represents our first public view of\n> > the upcoming release, schedualed for April 1st.\n> \n> Hmm, is it bad luck to plan a release for April Fools' Day?\n\nThe benefit is that if you accidently stuff up and people lose all their\ndata, you can always fall back to \"APRIL FOOL!\". :-)\n",
"msg_date": "Wed, 23 Feb 2000 09:28:14 +1100",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL v7.0 goes Beta ..."
}
] |
[
{
"msg_contents": "\n Hi,\n\n as I said, I tring implement PREPARE / EXECUTE command for user a\ncontrollable query cache (in TODO: Cache most recent query plan(s)).\n\nI have implement first usable version now (I know that it is not\ninteresting for current feature-freeze state, but I believe that\nit is interesting for next release and for major developers). See:\n\n\ntest=# prepare sel as select * from tab where id = $1 and data \n like $2 using int, text;\nPREPARE\ntest=# execute sel using 1, '%a';\n id | data\n----+------\n 1 | aaaa\n(1 row)\n\ntest=# prepare ins as insert into tab (data) values($1) using text;\nPREPARE\ntest=# execute ins_tab using 'cccc';\nINSERT 18974 1\n \n\nThe queryTree and planTree are save in hash table and in the \nTopMemoryContext (Is it good space for this cache?). All is\nwithout change-schema detection (IMHO is user problem if he\nchanges DB schema and use old cached plan). In future I try\nadd any 'change-schema' detection (to alter/drop table,rule..etc).\n\n\nI'am not sure with syntax, now is:\n\n PREPARE name AS optimizable-statement [ USING type, ... ]\n EXECUTE name [ USING value, ... ] \t\n\nComments? Suggestions? (SQL92?)\n\n(Note: I try test speed and speed for cached query plan (select) executed \n via EXECUTE rise very very up (70% !).) \n\n\n\t\t\t\t\t\tKarel\t\t\t\t\t\t\n\n\t\t\t\t\t\n----------------------------------------------------------------------\nKarel Zak <[email protected]> http://home.zf.jcu.cz/~zakkr/\n\nDocs: http://docs.linux.cz (big docs archive)\t\nKim Project: http://home.zf.jcu.cz/~zakkr/kim/ (process manager)\nFTP: ftp://ftp2.zf.jcu.cz/users/zakkr/ (C/ncurses/PgSQL)\n-----------------------------------------------------------------------\n\n",
"msg_date": "Tue, 22 Feb 2000 16:48:35 +0100 (CET)",
"msg_from": "Karel Zak - Zakkr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Cache query (PREPARE/EXECUTE)"
},
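To make Karel's description concrete, here is a minimal C sketch of what one entry of such a named-plan cache could look like. Every name in it (CachedPlan, PLAN_NAME_LEN) is a hypothetical illustration, not taken from the actual patch; the real structure would also have to carry whatever bookkeeping the executor needs.

    #include "postgres.h"
    #include "nodes/pg_list.h"

    #define PLAN_NAME_LEN 64

    /* One entry of the hypothetical plan cache: the plan's name is the
     * hash key, and the trees are copies made with copyObject() so that
     * they survive the statement that created them. */
    typedef struct CachedPlan
    {
        char    name[PLAN_NAME_LEN];    /* plan name, used as hash key */
        List   *queryTree;              /* copied parse tree */
        List   *planTree;               /* copied plan tree */
        int     nargs;                  /* number of $n parameters */
        Oid    *argtypes;               /* their types (USING int, text) */
    } CachedPlan;

EXECUTE then only has to look the name up, bind the USING values to the $n parameters, and hand the already-optimized plan to the executor, which is where the reported speedup comes from.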
{
"msg_contents": "On Tue, 22 Feb 2000, Karel Zak - Zakkr wrote:\n\n> The queryTree and planTree are save in hash table and in the \n> TopMemoryContext (Is it good space for this cache?). All is\n> without change-schema detection (IMHO is user problem if he\n> changes DB schema and use old cached plan). In future I try\n\nJust curious, but a new 'PREPARE name AS...' with the same name just\noverrides the previously saved plan?\n\nActually, can someone who may know the internals of DBI comment on\nthis? If I have a CGI that runs the same SELECT call each and every time,\nthis would come in handy ... but how does DBI do its prepare? Would it\nset a new name for each invocation, so you would have several 'cached\nplans' for the exact same SELECT call?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 22 Feb 2000 12:27:40 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Cache query (PREPARE/EXECUTE)"
},
{
"msg_contents": "\nOn Tue, 22 Feb 2000, The Hermit Hacker wrote:\n\n> On Tue, 22 Feb 2000, Karel Zak - Zakkr wrote:\n> \n> > The queryTree and planTree are save in hash table and in the \n> > TopMemoryContext (Is it good space for this cache?). All is\n> > without change-schema detection (IMHO is user problem if he\n> > changes DB schema and use old cached plan). In future I try\n> \n> Just curious, but a new 'PREPARE name AS...' with the same name just\n> overrides the previously saved plan?\n\n Current code return you:\n\ntest=# prepare one as select * from aaa;\nPREPARE\ntest=# prepare one as select * from aaa;\nERROR: Query plan with name 'one' already exist.\ntest=#\n\n I prefer any DROP command instead overriding. But I open for any other\nsuggestions...\n\n> Actually, can someone who may know the internals of DBI comment on\n> this? If I have a CGI that runs the same SELECT call each and every time,\n> this would come in handy ... but how does DBI do its prepare? Would it\n> set a new name for each invocation, so you would have several 'cached\n> plans' for the exact same SELECT call?\n\n I not sure if I good understand you. But..\n\n 1/ this cache is in memory only (it is not across re-connection persistent), \n not save in any table..etc. \n 2/ you can have (equil or differnet) several plans in this cache, number of\n plans is not limited.\n 3/ you can't have two same query's name in cache (name is hash key)\n 4/ after EXECUTE is plan still in cache, you can run it again... \n\n potential usage:\n\n example - you start connection to PG and you know that you need use \n20x same question (example INSERT). You can PREPARE plan for this query,\nand run fast EXECUTE only (instead 20x full insert);\n \n\t\t\t\t\t\tKarel \n\n\n \n\n\t \n\n",
"msg_date": "Tue, 22 Feb 2000 18:12:22 +0100 (CET)",
"msg_from": "Karel Zak - Zakkr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Cache query (PREPARE/EXECUTE)"
},
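The "already exist" behaviour falls out naturally when the plan name is the hash key. A sketch, reusing the hypothetical CachedPlan entry above and the backend's hsearch routines (whose exact signatures vary between releases):

    #include "utils/hsearch.h"

    static CachedPlan *
    plan_cache_store(HTAB *cache, CachedPlan *plan)
    {
        bool        found;
        CachedPlan *entry;

        /* HASH_ENTER inserts a new entry or finds an existing one;
         * 'found' tells us which case we hit. */
        entry = (CachedPlan *) hash_search(cache, plan->name,
                                           HASH_ENTER, &found);
        if (entry == NULL)
            elog(ERROR, "plan cache: hash table out of memory");
        if (found)
            elog(ERROR, "Query plan with name '%s' already exist.",
                 plan->name);

        *entry = *plan;     /* trees were already copied by the caller */
        return entry;
    }

Rejecting duplicates (rather than silently overriding) keeps the DROP-then-re-PREPARE cycle explicit, which is the behaviour Karel says he prefers.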
{
"msg_contents": "Karel Zak - Zakkr <[email protected]> writes:\n> as I said, I tring implement PREPARE / EXECUTE command for user a\n> controllable query cache (in TODO: Cache most recent query plan(s)).\n\nLooks cool.\n\n> The queryTree and planTree are save in hash table and in the \n> TopMemoryContext (Is it good space for this cache?).\n\nProbably not. I'd suggest making a separate memory context for\nthis purpose --- they're cheap, and that gives you more control.\nLook at the creation and use of CacheMemoryContext for an example.\n\n> I'am not sure with syntax, now is:\n\n> PREPARE name AS optimizable-statement [ USING type, ... ]\n> EXECUTE name [ USING value, ... ] \t\n\n> Comments? Suggestions? (SQL92?)\n\nThis seems to be quite at variance with SQL92, unfortunately, so it\nmight not be a good idea to use the same keywords they do...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 22 Feb 2000 12:16:15 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Cache query (PREPARE/EXECUTE) "
},
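A sketch of Tom's suggestion. The calls below follow the AllocSetContextCreate() style of the rewritten memory manager in later releases (the 7.0-era creation call differs in detail), so treat the exact function names as assumptions rather than what the patch actually used:

    #include "utils/memutils.h"    /* memory context API */
    #include "nodes/nodes.h"       /* copyObject() */

    static MemoryContext PlanCacheMemoryContext = NULL;

    /* Create the cache's own context once, as a child of
     * TopMemoryContext so that it survives transaction boundaries
     * but can still be reset or destroyed on its own. */
    static void
    plan_cache_init(void)
    {
        if (PlanCacheMemoryContext == NULL)
            PlanCacheMemoryContext =
                AllocSetContextCreate(TopMemoryContext,
                                      "PlanCacheMemoryContext",
                                      ALLOCSET_DEFAULT_MINSIZE,
                                      ALLOCSET_DEFAULT_INITSIZE,
                                      ALLOCSET_DEFAULT_MAXSIZE);
    }

    /* Copy a tree into the cache context so it outlives the query
     * that produced it. */
    static void *
    plan_cache_copy_tree(void *tree)
    {
        MemoryContext oldcxt;
        void         *copy;

        oldcxt = MemoryContextSwitchTo(PlanCacheMemoryContext);
        copy = copyObject(tree);
        MemoryContextSwitchTo(oldcxt);
        return copy;
    }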
{
"msg_contents": "\nOn Tue, 22 Feb 2000, Tom Lane wrote:\n\n> Karel Zak - Zakkr <[email protected]> writes:\n> > as I said, I tring implement PREPARE / EXECUTE command for user a\n> > controllable query cache (in TODO: Cache most recent query plan(s)).\n> \n> Looks cool.\n\nThanks.\n\n> \n> > The queryTree and planTree are save in hash table and in the \n> > TopMemoryContext (Is it good space for this cache?).\n> \n> Probably not. I'd suggest making a separate memory context for\n> this purpose --- they're cheap, and that gives you more control.\n> Look at the creation and use of CacheMemoryContext for an example.\n\n Yes, I agree (TopMemoryContext was simpl for first hacking). \nBut I not sure how create new (across transaction persistent?) \nMemoryContext. It needs new portal? (Sorry I not thoroughly explore\nPG's memory management.) \n\n> \n> > I'am not sure with syntax, now is:\n> \n> > PREPARE name AS optimizable-statement [ USING type, ... ]\n> > EXECUTE name [ USING value, ... ] \t\n> \n> > Comments? Suggestions? (SQL92?)\n> \n> This seems to be quite at variance with SQL92, unfortunately, so it\n> might not be a good idea to use the same keywords they do...\n\n Hmm, I inspire with Jan's TODO item. What use:\n\n\tCREATE PLAN \n\tDROP PLAN\n\tEXECUTE PLAN\n\n IMHO these kaywords are better.\n\t\t\t\t\t\tKarel\n \n\n",
"msg_date": "Tue, 22 Feb 2000 18:30:48 +0100 (CET)",
"msg_from": "Karel Zak - Zakkr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Cache query (PREPARE/EXECUTE) "
},
{
"msg_contents": "At 06:30 PM 2/22/00 +0100, Karel Zak - Zakkr wrote:\n\n> Yes, I agree (TopMemoryContext was simpl for first hacking). \n>But I not sure how create new (across transaction persistent?) \n>MemoryContext. It needs new portal? (Sorry I not thoroughly explore\n>PG's memory management.) \n\nJan is caching the plans needed for referential integrity checking\nand referential actions - look at ri_triggers.c in src/backend/utils/adt.\nri_InitHashTables initializes the RI cache.\n\n(I *assume* Jan, with his great experience, is doing it right, I'm\nin no position to judge!)\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Tue, 22 Feb 2000 10:56:07 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Cache query (PREPARE/EXECUTE) "
},
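For reference, the initialization Don points at boils down to a hash_create() call. Here is a sketch in the same spirit, with the caveat that the HASHCTL fields and the hash_create() signature have shifted between releases, so this follows the general shape rather than any one version exactly:

    #include "utils/hsearch.h"

    static HTAB *plan_cache_htab = NULL;

    static void
    plan_cache_init_htab(void)
    {
        HASHCTL ctl;

        memset(&ctl, 0, sizeof(ctl));
        ctl.keysize = PLAN_NAME_LEN;           /* fixed-size string key */
        ctl.entrysize = sizeof(CachedPlan);    /* whole cache entry */

        /* 32 is just an initial size hint; the table grows as needed */
        plan_cache_htab = hash_create("Prepared plans", 32,
                                      &ctl, HASH_ELEM);
    }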
{
"msg_contents": "\nOn Tue, 22 Feb 2000, Don Baccus wrote:\n\n> At 06:30 PM 2/22/00 +0100, Karel Zak - Zakkr wrote:\n> \n> > Yes, I agree (TopMemoryContext was simpl for first hacking). \n> >But I not sure how create new (across transaction persistent?) \n> >MemoryContext. It needs new portal? (Sorry I not thoroughly explore\n> >PG's memory management.) \n> \n> Jan is caching the plans needed for referential integrity checking\n> and referential actions - look at ri_triggers.c in src/backend/utils/adt.\n> ri_InitHashTables initializes the RI cache.\n\n My cache table routines for PREPARE = Jan's RI routines :-) \n(I copy and a little modify Jan's code (*Thanks* Jan for good inspiration..).\n\nBut if I good look at Jan use SPI context for this, not any specific\ncontext. \n\n\t\t\t\t\t\tKarel\n\n",
"msg_date": "Tue, 22 Feb 2000 21:18:47 +0100 (CET)",
"msg_from": "Karel Zak - Zakkr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Cache query (PREPARE/EXECUTE) "
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Tom Lane\n> \n> Karel Zak - Zakkr <[email protected]> writes:\n> > as I said, I tring implement PREPARE / EXECUTE command for user a\n> > controllable query cache (in TODO: Cache most recent query plan(s)).\n> \n> Looks cool.\n> \n> > The queryTree and planTree are save in hash table and in the \n> > TopMemoryContext (Is it good space for this cache?).\n> \n> Probably not. I'd suggest making a separate memory context for\n> this purpose --- they're cheap, and that gives you more control.\n> Look at the creation and use of CacheMemoryContext for an example.\n>\n\nHmm,shoudn't per plan memory context be created ?\n\nThough current SPI stuff saves prepared plans to TopMemory\nContext,we couldn't remove them forever. It seems that SPI \nshould also be changed in its implementation about saving\nplans.\n\nNote that freeObject() is unavailable at all.\nWe would be able to free PREPAREd resources by destroying \ncorrsponding memory context.\n\nIf I recognize Jan's original idea correctly,he also suggested\nthe same way.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n\n",
"msg_date": "Wed, 23 Feb 2000 17:50:48 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Cache query (PREPARE/EXECUTE) "
},
{
"msg_contents": "\n> Though current SPI stuff saves prepared plans to TopMemory\n> Context,we couldn't remove them forever. It seems that SPI \n> should also be changed in its implementation about saving\n> plans.\n\n Yes, I know about SPI plan saving... from here is my inspiration\nwith TopMemoryContext. But we have in current PG code very often\nany cached queryPlan/Tree (PREPARE, SPI and Jan's RI saves plans\nto TopM. too), I agree with Tom that is not bad idea saving all\nplans to _one_ specific MemoryContext. \n\n My idea is make any basic routines for query cache (hash table,\nExecuteCachedQuery() ...etc) and use these routines for more\noperation (SPI, FKeys, PREPARE..). Comments?\n\n> Note that freeObject() is unavailable at all.\n> We would be able to free PREPAREd resources by destroying \n> corrsponding memory context.\n\n If I good understand, we can't destroy any plan? We must \ndestroy _full_ memory context? If yes (please no), we can't\nmake a DROP PLAN command or we must create for each plan specific\nmemory context (and drop this specific Context (Jan's original idea)).\n\n If I call SPI_saveplan(), is the plan forever save in \nTopMemoryContext? (hmm, the SPI is memory feeder).\n\n\t\t\t\t\t\tKarel\n\n",
"msg_date": "Wed, 23 Feb 2000 11:26:27 +0100 (CET)",
"msg_from": "Karel Zak - Zakkr <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Cache query (PREPARE/EXECUTE) "
},
{
"msg_contents": "> -----Original Message-----\n> From: Karel Zak - Zakkr [mailto:[email protected]]\n> \n> > Though current SPI stuff saves prepared plans to TopMemory\n> > Context,we couldn't remove them forever. It seems that SPI \n> > should also be changed in its implementation about saving\n> > plans.\n> \n> Yes, I know about SPI plan saving... from here is my inspiration\n> with TopMemoryContext. But we have in current PG code very often\n> any cached queryPlan/Tree (PREPARE, SPI and Jan's RI saves plans\n> to TopM. too), I agree with Tom that is not bad idea saving all\n> plans to _one_ specific MemoryContext. \n> \n> My idea is make any basic routines for query cache (hash table,\n> ExecuteCachedQuery() ...etc) and use these routines for more\n> operation (SPI, FKeys, PREPARE..). Comments?\n> \n> > Note that freeObject() is unavailable at all.\n> > We would be able to free PREPAREd resources by destroying \n> > corrsponding memory context.\n> \n> If I good understand, we can't destroy any plan? We must\n\nI think so. The problem is that Node struct couldn't be freed safely\ndue to the lack of reference count in its definition. As far as I see\nplans could be destroyed only when the memory context in which\nthey are placed are destroyed.\n\n> destroy _full_ memory context? If yes (please no), we can't\n> make a DROP PLAN command or we must create for each plan specific\n> memory context (and drop this specific Context (Jan's original idea)).\n>\n\nYou can DROP a PLAN by removing its hash entry but of cource\nthere remains memory leak. \n\n> If I call SPI_saveplan(), is the plan forever save in \n> TopMemoryContext? (hmm, the SPI is memory feeder).\n>\n\nProbably.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Thu, 24 Feb 2000 01:19:57 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Cache query (PREPARE/EXECUTE) "
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> I think so. The problem is that Node struct couldn't be freed safely\n> due to the lack of reference count in its definition. As far as I see\n> plans could be destroyed only when the memory context in which\n> they are placed are destroyed.\n\nThis is overly conservative. It should be safe to destroy a plan tree\nvia freeObject() if it was created via copyObject() --- and that is\ncertainly how the plan would get into a permanent memory context.\n\nCurrently, rule definitions are leaked in CacheContext at relcache\nflushes. I plan to start freeing them via freeObject at the beginning\nof the 7.1 development cycle --- I didn't want to risk it during the\nrunup to 7.0, but I believe it will work fine.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 23 Feb 2000 11:53:11 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Cache query (PREPARE/EXECUTE) "
},
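Tom's invariant, spelled out as a sketch: anything that entered the cache through copyObject() can be released with freeObject(), because the copy shares no substructure with anything else. The function name plan_cache_release() is hypothetical; freeObject() is the era's node-freeing routine that this thread is debating.

    /* Release both cached trees of a hypothetical CachedPlan entry.
     * Safe only because both trees were produced by copyObject(),
     * so there can be no references into them from elsewhere. */
    static void
    plan_cache_release(CachedPlan *entry)
    {
        freeObject(entry->queryTree);
        freeObject(entry->planTree);
        entry->queryTree = NULL;
        entry->planTree = NULL;
    }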
{
"msg_contents": "\nOn Wed, 23 Feb 2000, Tom Lane wrote:\n\n> \"Hiroshi Inoue\" <[email protected]> writes:\n> > I think so. The problem is that Node struct couldn't be freed safely\n> > due to the lack of reference count in its definition. As far as I see\n> > plans could be destroyed only when the memory context in which\n> > they are placed are destroyed.\n> \n> This is overly conservative. It should be safe to destroy a plan tree\n> via freeObject() if it was created via copyObject() --- and that is\n> certainly how the plan would get into a permanent memory context.\n\nYes, SPI and my PREPARE use copyObject() for saving to TopMemoryContext.\n\nWell, I believe you Tom that freeObject() is correct and I start \nimplement PlanCacheMemoryContext's routines for PREPARE (and\nSPI's saveplan ?). \n\n\t\t\t\t\t\tKarel Z.\n\n",
"msg_date": "Wed, 23 Feb 2000 18:11:22 +0100 (CET)",
"msg_from": "Karel Zak - Zakkr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Cache query (PREPARE/EXECUTE) "
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> > I think so. The problem is that Node struct couldn't be freed safely\n> > due to the lack of reference count in its definition. As far as I see\n> > plans could be destroyed only when the memory context in which\n> > they are placed are destroyed.\n> \n> This is overly conservative. It should be safe to destroy a plan tree\n> via freeObject() if it was created via copyObject() --- and that is\n> certainly how the plan would get into a permanent memory context.\n>\n\nI proposed the implementation of copyObject() which keeps the\nreferences among objects once before. It seems unnatural to me\nthat such kind of implementation would never be allowed by this\nrestriction. \nWhy is memory context per plan bad ?\n\nRegards.\n\nHiroshi Inoue\[email protected] \n\n\n",
"msg_date": "Thu, 24 Feb 2000 02:34:04 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Cache query (PREPARE/EXECUTE) "
},
{
"msg_contents": "> \"Hiroshi Inoue\" <[email protected]> writes:\n> > I think so. The problem is that Node struct couldn't be freed safely\n> > due to the lack of reference count in its definition. As far as I see\n> > plans could be destroyed only when the memory context in which\n> > they are placed are destroyed.\n>\n> This is overly conservative. It should be safe to destroy a plan tree\n> via freeObject() if it was created via copyObject() --- and that is\n> certainly how the plan would get into a permanent memory context.\n>\n> Currently, rule definitions are leaked in CacheContext at relcache\n> flushes. I plan to start freeing them via freeObject at the beginning\n> of the 7.1 development cycle --- I didn't want to risk it during the\n> runup to 7.0, but I believe it will work fine.\n\n I don't see any reason, why each saved plan or rule\n definition shouldn't go into it's own, private memory\n context. Then, a simple destruction of the entire context\n will surely free all it's memory, and I think it will also be\n faster since the en-block allocation, done for many small\n objects, doesn't need to free all them separately - it throws\n away the entire blocks. No need to traverse the node tree,\n nor any problems with multiple object references inside the\n tree.\n\n Since plans are (ought to be) saved via SPI_saveplan(plan),\n there is already a central point where it can be done for\n plans. And a corresponding SPI_freeplan(savedplan) should be\n easy to create, since the context can be held in the SPI plan\n structure itself.\n\n Needs only some general naming convention for these memory\n contexts. But something like a\n\n MemoryContext CreateObjectMemoryContext();\n\n that guarantees uniqueness in the context name and no\n conflicts by using some appropriate prefix in them should do\n it.\n\n The overhead, payed for separate contexts is IMHO negligible.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Wed, 23 Feb 2000 19:22:16 +0100 (CET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Cache query (PREPARE/EXECUTE)"
},
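A sketch of the plan shape Jan describes. The struct and the SPI_freeplan_sketch() name are hypothetical (the real SPI plan struct looks different), but the point stands: if the plan owns a private context, freeing it is one context destruction with no tree traversal at all.

    #include "utils/memutils.h"
    #include "nodes/pg_list.h"

    typedef struct SavedPlan
    {
        MemoryContext plancxt;   /* private context holding everything */
        List         *qtlist;    /* copied query trees, inside plancxt */
        List         *ptlist;    /* copied plan trees, inside plancxt */
    } SavedPlan;

    /* Destroying the private context releases the trees, any argument
     * type array, and (if allocated there) the SavedPlan header itself
     * in one stroke -- the caller must not touch 'plan' afterwards. */
    static void
    SPI_freeplan_sketch(SavedPlan *plan)
    {
        MemoryContextDelete(plan->plancxt);
    }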
{
"msg_contents": "\nOn Thu, 24 Feb 2000, Hiroshi Inoue wrote:\n\n> > This is overly conservative. It should be safe to destroy a plan tree\n> > via freeObject() if it was created via copyObject() --- and that is\n> > certainly how the plan would get into a permanent memory context.\n> >\n> \n> I proposed the implementation of copyObject() which keeps the\n> references among objects once before. It seems unnatural to me\n> that such kind of implementation would never be allowed by this\n> restriction. \n>\n> Why is memory context per plan bad ?\n\n One context is more simple. \n\n We talking about a *cache*. If exist interface for this cache and\n all operations are with copy/freeObject it not has restriction. \n \n For how action it will restriction? \n\n The PlanCacheMemoryContext will store space only, it isn't space for \n any action.\n\n\n\t\t\t\t\t\tKarel Z.\n\n\n\n",
"msg_date": "Wed, 23 Feb 2000 19:48:25 +0100 (CET)",
"msg_from": "Karel Zak - Zakkr <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Cache query (PREPARE/EXECUTE) "
},
{
"msg_contents": "Karel wrote:\n\n> > Why is memory context per plan bad ?\n>\n> One context is more simple.\n\n I don't see much complexity difference between one context\n per plan vs. one context for all. At least if we do it\n transparently inside of SPI_saveplan() and SPI_freeplan().\n\n> We talking about a *cache*. If exist interface for this cache and\n> all operations are with copy/freeObject it not has restriction.\n>\n> For how action it will restriction?\n\n No restrictions I can see.\n\n But I think one context per plan is still better, since first\n there is no leakage/multiref problem. Second, there is a\n performance difference between explicitly pfree()'ing\n hundreds of small allocations (in freeObject() traverse), and\n just destroying a context. The changes I made to the\n MemoryContextAlloc stuff for v6.5 (IIRC), using bigger blocks\n incl. padding/reuse for small allocations, caused a speedup\n of 5+% for the entire regression test. This was only because\n it uses lesser real calls to malloc()/free() and the context\n destruction does not need to process a huge list of all,\n however small allocations anymore. It simply throws away all\n blocks now.\n\n This time, we talk about a more complex, recursive\n freeObject(), switch()'ing for every node type into separate,\n per object type specific functions, pfree()'ing all the\n little chunks. So there is at least a difference in\n first/second-level RAM cache rows required. And if that can\n simply be avoided by using one context per plan, I vote for\n 1by1.\n\n Then again, copyObject/freeObject must be fixed WRT\n leakage/multiref anyway.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Wed, 23 Feb 2000 21:11:08 +0100 (CET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Cache query (PREPARE/EXECUTE)"
},
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n> But I think one context per plan is still better, since first\n> there is no leakage/multiref problem. Second, there is a\n> performance difference between explicitly pfree()'ing\n> hundreds of small allocations (in freeObject() traverse), and\n> just destroying a context.\n\nAgreed, though one would hope that performance of cache flushes\nis not a major consideration ;-).\n\nWhat I find attractive about going in this direction is the idea\nthat we could get rid of freeObject() entirely, and eliminate that\npart of the work involved in changing node definitions.\n\n> Then again, copyObject/freeObject must be fixed WRT\n> leakage/multiref anyway.\n\nNot if we decide to get rid of freeObject, instead.\n\nI think that a little work would have to be done to support efficient\nuse of large numbers of contexts, but it's certainly doable. This\npath seems more attractive than trying to make the world safe for\nfreeObject of arbitrary node trees.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 23 Feb 2000 17:40:47 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Cache query (PREPARE/EXECUTE) "
},
{
"msg_contents": "Tom Lane wrote:\n\n> [email protected] (Jan Wieck) writes:\n> > Then again, copyObject/freeObject must be fixed WRT\n> > leakage/multiref anyway.\n>\n> Not if we decide to get rid of freeObject, instead.\n>\n> I think that a little work would have to be done to support efficient\n> use of large numbers of contexts, but it's certainly doable. This\n> path seems more attractive than trying to make the world safe for\n> freeObject of arbitrary node trees.\n\n Yes, little work to build the framework. All\n alloc/realloc/free functions for a particular context are\n just function-pointers inside the context structure itself.\n So ther'll be no additional call overhead when dealing with\n large numbers of contexts.\n\n OTOH, this new per-object-context stuff could hand down some\n lifetime flag, let's say MCXT_UNTIL_STATEMENT, MCXT_UTIL_XEND\n and MCXT_UNTIL_INFINITY to start from. The memory context\n creation/destruction routines could manage some global lists\n of contexts, that automatically get destroyed on\n AtXactCommitMemory and so on, making such a kind of per-\n object memory context a fire'n'forget missile (Uh - played\n F15 too excessively :-). It should still be destroyed\n explicitly if not needed anymore, but if allocated with the\n correct lifetime, wouldn't hurt that much if forgotten.\n\n More work to get all the existing places in the backend\n making use of this functionality where applicable.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 24 Feb 2000 00:21:24 +0100 (CET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Cache query (PREPARE/EXECUTE)"
},
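Jan's lifetime flags are a proposal, not an existing API. A sketch of how the registry part could look, with every name hypothetical:

    #include <stdlib.h>
    #include "utils/memutils.h"

    typedef enum McxtLifetime
    {
        MCXT_UNTIL_STATEMENT,    /* destroyed at end of statement */
        MCXT_UNTIL_XEND,         /* destroyed at transaction end */
        MCXT_UNTIL_INFINITY      /* only destroyed explicitly */
    } McxtLifetime;

    typedef struct LifetimeLink
    {
        MemoryContext        cxt;
        struct LifetimeLink *next;
    } LifetimeLink;

    static LifetimeLink *xend_list = NULL;

    /* To be called from transaction commit/abort: fire-and-forget
     * cleanup of every context registered with MCXT_UNTIL_XEND. */
    static void
    AtXactEndDestroyContexts(void)
    {
        while (xend_list != NULL)
        {
            LifetimeLink *link = xend_list;

            xend_list = link->next;
            MemoryContextDelete(link->cxt);
            free(link);      /* assume the list cells are malloc'd */
        }
    }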
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n> OTOH, this new per-object-context stuff could hand down some\n> lifetime flag, let's say MCXT_UNTIL_STATEMENT, MCXT_UTIL_XEND\n> and MCXT_UNTIL_INFINITY to start from.\n\nA good thing to keep in mind, but for the short term I'm not sure\nwe need it; the proposed new contexts are all for indefinite-lifetime\ncaches, so there's no chance to make them go away automatically.\nEventually we might have more uses for limited-lifetime contexts,\nthough.\n\nSomething else that needs to be looked at is how memory contexts\nare tied to \"portals\" presently. That mechanism probably needs\nto be redesigned. I have to admit I don't understand what it's\nfor...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 23 Feb 2000 18:38:29 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Cache query (PREPARE/EXECUTE) "
},
{
"msg_contents": "Tom Lane wrote:\n\n> [email protected] (Jan Wieck) writes:\n> > OTOH, this new per-object-context stuff could hand down some\n> > lifetime flag, let's say MCXT_UNTIL_STATEMENT, MCXT_UTIL_XEND\n> > and MCXT_UNTIL_INFINITY to start from.\n>\n> A good thing to keep in mind, but for the short term I'm not sure\n> we need it; the proposed new contexts are all for indefinite-lifetime\n> caches, so there's no chance to make them go away automatically.\n> Eventually we might have more uses for limited-lifetime contexts,\n> though.\n\n Sure, was only what I thought might be useful in some cases.\n If not used, would it hurt to have support for it either?\n Some unused List*'ers somewhere - nothing important.\n\n> Something else that needs to be looked at is how memory contexts\n> are tied to \"portals\" presently. That mechanism probably needs\n> to be redesigned. I have to admit I don't understand what it's\n> for...\n\n U2? Makes 2 of us.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 24 Feb 2000 01:16:31 +0100 (CET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Cache query (PREPARE/EXECUTE)"
},
{
"msg_contents": "On Wed, 23 Feb 2000, Jan Wieck wrote:\n> \n> I don't see much complexity difference between one context\n> per plan vs. one context for all. At least if we do it\n> transparently inside of SPI_saveplan() and SPI_freeplan().\n> \n\n Well, I explore PG's memory context routines and is probably more\nsimple destroy mem context (than use feeeObject()) and create new context\nfor plan is simple too. (Jan, Hiroshi and PG's source convince me :-)\n\n Today afternoon I rewrite query cache and now is implemented as \n'context-per-plan'. It allows me write a 'DROP PLAN' command. We can use \nthis cache in SPI too, and create new command SPI_freeplan() (and stop \nTopMemoryContext feeding).\n\n Now, PREPARE/EXECUTE are ready to usage. See:\n\ntest=# prepare my_plan as select * from tab where id = $1 using int;\nPREPARE\ntest=# execute my_plan using 2;\n id | data\n----+------\n 2 | aaaa\n(1 row)\n\ntest=# drop plan my_plan;\nDROP\ntest=# execute my_plan using 2;\nERROR: Plan with name 'my_plan' not exist\n \n \n I still not sure with PREPARE/EXECUTE keywords, I vote for:\n\n\tCREATE PLAN name AS query [ USING type, ... ]\n\tEXECUTE PLAN name [ USING values, ... ]\n\tDROP PLAN name\n\n Comments? (Please. I really not SQL's standard guru...)\n\n\t\t\t\t\t\tKarel\n \n\n",
"msg_date": "Thu, 24 Feb 2000 18:35:14 +0100 (CET)",
"msg_from": "Karel Zak - Zakkr <[email protected]>",
"msg_from_op": true,
"msg_subject": "[HACKERS] Cache query implemented"
},
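With context-per-plan in place, DROP PLAN reduces to a hash removal plus one context destruction. A sketch, again with hypothetical names and assuming the CachedPlan entry from the earlier sketch now carries its own private context in a plancxt field:

    static void
    plan_cache_drop(HTAB *cache, const char *name)
    {
        bool        found;
        CachedPlan *entry;

        entry = (CachedPlan *) hash_search(cache, (char *) name,
                                           HASH_FIND, &found);
        if (!found)
            elog(ERROR, "Plan with name '%s' not exist", name);

        /* one shot frees the copied trees and everything else */
        MemoryContextDelete(entry->plancxt);
        hash_search(cache, (char *) name, HASH_REMOVE, &found);
    }

This also answers Hiroshi's earlier objection: nothing is leaked when the hash entry goes away, because the plan's storage goes with its context.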
{
"msg_contents": "Karel Zak - Zakkr writes:\n\n> I still not sure with PREPARE/EXECUTE keywords, I vote for:\n> \n> \tCREATE PLAN name AS query [ USING type, ... ]\n> \tEXECUTE PLAN name [ USING values, ... ]\n> \tDROP PLAN name\n> \n> Comments? (Please. I really not SQL's standard guru...)\n\nSQL seems to have something like the following. (Note: The section on\ndynamic SQL is mostly incomprehensible to me.)\n\nPREPARE name AS query\nDESCRIBE INPUT name [ USING x, ... ]\nDESCRIBE [OUTPUT] name [ USING x, ... ]\nEXECUTE name [ INTO x, y, ... ] [ USING a, b, ... ]\nDEALLOCATE PREPARE name\n\nI'm not sure if these match exactly what you're doing, but if it is at all\npossible to match what you're doing to these, I'd say it would be a shame\nnot to do it. You've got time.\n\nMeanwhile I'm wondering whether it would not be possible to provide the\nplan caching functionality even if all you do is send the same SELECT\ntwice in a row. Might be tricky, of course.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Sat, 26 Feb 2000 02:36:10 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Cache query implemented"
},
{
"msg_contents": "On Sat, 26 Feb 2000, Peter Eisentraut wrote:\n\n> Karel Zak - Zakkr writes:\n> \n> > I still not sure with PREPARE/EXECUTE keywords, I vote for:\n> > \n> > \tCREATE PLAN name AS query [ USING type, ... ]\n> > \tEXECUTE PLAN name [ USING values, ... ]\n> > \tDROP PLAN name\n> > \n> > Comments? (Please. I really not SQL's standard guru...)\n> \n> SQL seems to have something like the following. (Note: The section on\n> dynamic SQL is mostly incomprehensible to me.)\n\n I'am studing SQL92 just now. And I not sure if my idea is same as SQL92's\nPREPARE. My implementation is very simular with SPI's plan operations,\nand is designed as simple way to very fast query execution.\n\n> PREPARE name AS query\n\n In my PREPARE go query to parser and if in PG query is '$n', parser needs\n(Oid) argstypes array, hence it needs \n\t\n PREPARE name AS <query with parameters - $n> USING valuetype, ...\n\n But in SQL92 is PREPARE without \"USING valuetype, ...\".\n\n> DESCRIBE INPUT name [ USING x, ... ]\n> DESCRIBE [OUTPUT] name [ USING x, ... ]\n\nIt is probably used instead 'USING' in PREPARE. It specific columns\nfor select (OUTPUT) and INPUT specific values for parser ($n paremetrs\nin PG). \n\nPeople which define SQL92 must be crazy. This PREPARE concept split one\nquery plan to three commands. Who join it to one plan?.... \n\n\n> EXECUTE name [ INTO x, y, ... ] [ USING a, b, ... ]\n\n This command \"Associate input parametrs and output targets with a prepared\nstatement and execute the statement\" (SQL92).\n\n 'INTO' - I really not sure if is possible in PG join more plans into\none plan. If I good understand, INTO is targetlist for cached \nquery, but in cached query is targetlist too. Is any way how join/replace \ntargetlist in cached query with targetlist from EXECUTE's INTO? \n(QueryRewrite?). But, INTO for EXECUTE is nod bad idea.\n \n> DEALLOCATE PREPARE name\n\nIt is better than 'DROP'.\n\n\n> Meanwhile I'm wondering whether it would not be possible to provide the\n> plan caching functionality even if all you do is send the same SELECT\n> twice in a row. Might be tricky, of course.\n\n Here, I'am not understand you.\n\n Exist any other SQL which has implemented a PREPARE/EXECUTE? \n(Oracle8 has not it, and other..?)\n\n I still vote for simple PREPARE/EXECUTE (or non-standard CREATE PLAN),\nbecause SQL92's PREPARE is not implementable :-)\n \n\t\t\t\t\t\tKarel\n\n",
"msg_date": "Mon, 28 Feb 2000 12:30:45 +0100 (CET)",
"msg_from": "Karel Zak - Zakkr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Cache query implemented"
},
{
"msg_contents": "> > EXECUTE name [ INTO x, y, ... ] [ USING a, b, ... ]\n> \n> This command \"Associate input parametrs and output targets with a prepared\n> statement and execute the statement\" (SQL92).\n> \n> 'INTO' - I really not sure if is possible in PG join more plans into\n> one plan. If I good understand, INTO is targetlist for cached \n> query, but in cached query is targetlist too. Is any way how join/replace \n> targetlist in cached query with targetlist from EXECUTE's INTO? \n> (QueryRewrite?). But, INTO for EXECUTE is nod bad idea.\n\n Sorry, previous paragraph is stupid. The 'into' is simple item in \nthe query struct and not any targetlist. I spend more time with previous\nstupidity than with implementation: \n\n EXECUTE <name> \n\t[ INTO [ TEMPORARY | TEMP ] [ TABLE ] new_table ] \n\t[ USING val, ... ]\n\n\ntest=# prepare sel as select * from tab;\nPREPARE\ntest=# execute sel into x;\nSELECT\ntest=# select * from x;\n id | data\n----+------\n 1 | aaaa\n 2 | bbbb\n 3 | cccc\n 4 | dddd\n 5 | eeee\n(5 rows)\n\n\n The PostgreSQL source code is really very modular :-)\n\n\t\t\t\t\t\tKarel\n\n",
"msg_date": "Mon, 28 Feb 2000 15:03:17 +0100 (CET)",
"msg_from": "Karel Zak - Zakkr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Cache query implemented"
},
{
"msg_contents": "> -----Original Message-----\n> From: Karel Zak - Zakkr [mailto:[email protected]]\n> \n> > > EXECUTE name [ INTO x, y, ... ] [ USING a, b, ... ]\n> > \n> > This command \"Associate input parametrs and output targets \n> with a prepared\n> > statement and execute the statement\" (SQL92).\n> > \n\nI don't know well about PREPARE statement.\nBut is above syntax for interative SQL command ?\nIsn't it for embedded SQL or SQL module ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Tue, 29 Feb 2000 14:05:02 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Cache query implemented"
},
{
"msg_contents": "\nOn Tue, 29 Feb 2000, Hiroshi Inoue wrote:\n\n> > -----Original Message-----\n> > From: Karel Zak - Zakkr [mailto:[email protected]]\n> > \n> > > > EXECUTE name [ INTO x, y, ... ] [ USING a, b, ... ]\n> > > \n> > > This command \"Associate input parametrs and output targets \n> > with a prepared\n> > > statement and execute the statement\" (SQL92).\n> > > \n> \n> I don't know well about PREPARE statement.\n> But is above syntax for interative SQL command ?\n> Isn't it for embedded SQL or SQL module ?\n\n - PREPARE save to cache any standard sql command (OptimizableStmt).\n - EXECUTE run this cached plan (query) and send data to frontend or\n INTO any relation.\n\n Or what you mean?\n\n\t\t\t\t\t\t\tKarel\n\n",
"msg_date": "Tue, 29 Feb 2000 13:51:35 +0100 (CET)",
"msg_from": "Karel Zak - Zakkr <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Cache query implemented"
},
{
"msg_contents": "> -----Original Message-----\n> From: Karel Zak - Zakkr [mailto:[email protected]]\n> \n> On Tue, 29 Feb 2000, Hiroshi Inoue wrote:\n> \n> > > -----Original Message-----\n> > > From: Karel Zak - Zakkr [mailto:[email protected]]\n> > > \n> > > > > EXECUTE name [ INTO x, y, ... ] [ USING a, b, ... ]\n> > > > \n> > > > This command \"Associate input parametrs and output targets \n> > > with a prepared\n> > > > statement and execute the statement\" (SQL92).\n> > > > \n> > \n> > I don't know well about PREPARE statement.\n> > But is above syntax for interative SQL command ?\n> > Isn't it for embedded SQL or SQL module ?\n> \n> - PREPARE save to cache any standard sql command (OptimizableStmt).\n> - EXECUTE run this cached plan (query) and send data to frontend or\n> INTO any relation.\n> \n> Or what you mean?\n>\n\nIn old Oracle(I don't know recent Oracle,sorry),PREPARE couldn't be called\nas an interactive SQL command. It was used only in embedded SQL.\n\nSeems x, y after INTO are output variables. In embedded SQL they are\nhost variables. But I don't know what they are in interactive SQL.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Wed, 1 Mar 2000 10:35:04 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Cache query implemented"
},
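Hiroshi's point is that PREPARE historically lived in embedded SQL. For comparison, this is roughly how the same exchange looks from ECPG, PostgreSQL's embedded-SQL preprocessor, where the statement text and the parameters are host variables. This is a sketch against the documented ECPG style; whether the 7.0-era preprocessor accepted every line of it is not verified here.

    #include <stdio.h>

    EXEC SQL INCLUDE sqlca;

    int
    main(void)
    {
        EXEC SQL BEGIN DECLARE SECTION;
        const char *stmt = "SELECT data FROM tab WHERE id = ?";
        int         key = 1;
        char        data[40];
        EXEC SQL END DECLARE SECTION;

        EXEC SQL CONNECT TO test;

        /* prepare from a host-variable string, then execute with
         * an input host variable and an output host variable */
        EXEC SQL PREPARE sel FROM :stmt;
        EXEC SQL EXECUTE sel INTO :data USING :key;
        printf("data = %s\n", data);

        EXEC SQL DEALLOCATE PREPARE sel;
        EXEC SQL DISCONNECT;
        return 0;
    }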
{
"msg_contents": "\nOn Wed, 1 Mar 2000, Hiroshi Inoue wrote:\n\n> > -----Original Message-----\n> > From: Karel Zak - Zakkr [mailto:[email protected]]\n> > \n> > On Tue, 29 Feb 2000, Hiroshi Inoue wrote:\n> > \n> > > > -----Original Message-----\n> > > > From: Karel Zak - Zakkr [mailto:[email protected]]\n> > > > \n> > > > > > EXECUTE name [ INTO x, y, ... ] [ USING a, b, ... ]\n> > > > > \n> > > > > This command \"Associate input parametrs and output targets \n> > > > with a prepared\n> > > > > statement and execute the statement\" (SQL92).\n> > > > > \n> > > \n> > > I don't know well about PREPARE statement.\n> > > But is above syntax for interative SQL command ?\n> > > Isn't it for embedded SQL or SQL module ?\n> > \n> > - PREPARE save to cache any standard sql command (OptimizableStmt).\n> > - EXECUTE run this cached plan (query) and send data to frontend or\n> > INTO any relation.\n> > \n> > Or what you mean?\n> >\n> \n> In old Oracle(I don't know recent Oracle,sorry),PREPARE couldn't be called\n> as an interactive SQL command. It was used only in embedded SQL.\n\n Oh, yes I understand you now. No, prepare is a standard command \n(interactive) (IMO).\n \n> Seems x, y after INTO are output variables. In embedded SQL they are\n> host variables. But I don't know what they are in interactive SQL.\n\n A INTO is same as (example) SELECT ..INTO, see:\n\n PREPARE myplan AS SELECT * FROM tab;\n EXECUTE myplan INTO newtab;\n\n A INTO only remove query destination for cached plan.\n\n ...it is in my implementation. I don't no how it is in any others SQLs.\nIn my Oracle8's tutorial it isn't. \n\n\t\t\t\t\t\tKarel\n\t\t\t\n\n",
"msg_date": "Wed, 1 Mar 2000 10:33:18 +0100 (CET)",
"msg_from": "Karel Zak - Zakkr <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Cache query implemented"
}
] |
[
{
"msg_contents": "Testing RPMs for the 7.0beta1 release are now available on\nftp.postgresql.org. Please note that these RPMS are BETA -- the\npackaging is still rough in spots. It would not be a good idea to try\nto upgrade a production system from the stable 6.5.3-3 RPM set to this\nset -- please use a development system to test with. The following are\nknown to be things that need fixing that are packaging related:\n\n1.)\tAlpha patches are needed -- Ryan K or Uncle George?\n2.)\tBetter logging support -- the current logging frankly stinks.\n3.)\tSmoother upgrade from previous releases -- the only major change is\nthe location of PGDATA, which is now /var/lib/pgsql/data, from\n/var/lib/pgsql -- you will need to move the data over manually.\n4.)\tCurrently I am not using pg_ctl -- this will be implemented in a\nfuture beta RPM.\n5.)\tThis release is lacking pl/perl due to my Linux system not building\nplperl.so -- I am looking into it, but I didn't want to delay the first\npreliminary release, as I want people to bang hard on these RPMS.\n6.)\tLogrotate functionality is implemented, but at high cost -- each\ntime the log is rolled, postmaster has to be restarted. If you do not\nwant log rolling, remove /etc/logrotate.d/postgres.\n7.)\tLogging is done to /var/log/postgresql -- however, for whatever\nreason postmaster still spouts debugging messages to the tty -- even\nafter a 2>&1 redirect. Logging is not at this time timestamped -- but\nit will be as I get feedback about the logging.\n\nIf you notice something I left out, let me know. Above all read\n/usr/doc/postgresql-7.0beta1/README.rpm -- there is more documentation\nthere about what I know is broken in this very preliminary RPM, and\nother changes.\n\nRPMs are located at ftp.postgresql.org/pub/bindist/RPM/beta\n\nBinaries have only been generated for RedHat 6.1/Intel -- I will be\nbuilding RedHat 5.2/Intel RPM's sometime later this week.\n\nIf you wish to rebuild from the source RPM, you need a RedHat 6.1 system\nwith C development, C++ development, X development, Perl, Tcl/Tk, and\npython-devel installed.\n\nOn a minor note, there is one additional package -- postgresql-tk, which\ncontains the tk client and pgaccess, which was removed from the\npostgresql-tcl package due to some people running X-less servers that\nwanted to use the tcl client and pltcl.\n\nRegression passes on these binaries under RedHat 6.1.\n\nLet me know of any problems you find by either e-mailing me direct or by\ne-mailing [email protected].\n\nTIA\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 22 Feb 2000 11:07:43 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": true,
"msg_subject": "7.0beta1-0.2 testing RPMS are now available."
},
{
"msg_contents": "\tSorry about the delayed response, been busy.... :(\n\nOn Tue, 22 Feb 2000, Lamar Owen wrote:\n\n> Testing RPMs for the 7.0beta1 release are now available on\n> ftp.postgresql.org. Please note that these RPMS are BETA -- the\n> packaging is still rough in spots. It would not be a good idea to try\n> to upgrade a production system from the stable 6.5.3-3 RPM set to this\n> set -- please use a development system to test with. The following are\n> known to be things that need fixing that are packaging related:\n> \n> 1.)\tAlpha patches are needed -- Ryan K or Uncle George?\n\n\tArrg... Guess that means me? :) I will try and find the time to\ndownload the 7.0 beta tarball and give it a spin on my Alpha by the end of\nthe week. \n\tI have no clue on what patches are going to be needed, given that\nI have yet to even see what parts of the system they touched, let alone\nhow much those parts have changed since 6.5.x. This will probably prove\ninteresting. I will let you (and the list know) as soon as I have some\nresults. TTYL.\n\n---------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n---------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | http://www.rkirkpat.net/ |\n---------------------------------------------------------------------------\n\n",
"msg_date": "Mon, 28 Feb 2000 20:00:20 -0600 (CST)",
"msg_from": "Ryan Kirkpatrick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 7.0beta1-0.2 testing RPMS are now available."
}
] |
[
{
"msg_contents": "\n\n\n\n",
"msg_date": "Tue, 22 Feb 2000 12:43:11 -0600",
"msg_from": "Nora Luz Escobar =?iso-8859-1?Q?L=F3pez?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "subcription"
}
] |
[
{
"msg_contents": "When will we release 7.0? I just checked and found that I'm way behind my\nschedule. The parser is in sync, but there are quite some open bugs. So\nhopefully there is either enough time left or someone who would like to\nspend some time on fixing bugs. :-)\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Tue, 22 Feb 2000 20:52:54 +0100",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "ECPG / Release"
},
{
"msg_contents": "On Tue, 22 Feb 2000, Michael Meskes wrote:\n\n> When will we release 7.0? I just checked and found that I'm way behind my\n> schedule. The parser is in sync, but there are quite some open bugs. So\n> hopefully there is either enough time left or someone who would like to\n> spend some time on fixing bugs. :-)\n\nApril 1st is what I announced, but I'll be shocked if that actually\nhappens :) You should have loads of time ...\n\nAs far as I'm concerned, stuff like ECPG and JDBC and ODBC are changeable\npretty much up to the release date ... they are generally touched, and\nmodified by only one person ...\n\nOne of the things I'd like to look at for 7.1 is start to split off the\ndistributions ... we're up to a 7meg distribution and growing ... \n\nWe should be able to do a pgsql-docs.tar.gz and pgsql-src.tar.gz at the\nvery least ... putting 'doc' in a seperate tar file would reduce the size\nby ~3meg:\n\ntotal 3024\n-rw-r--r-- 1 scrappy wheel 2969424 Feb 22 15:27 doc.tar.gz\n\nActually, is there a reason we can't do this now? I can change the 'tar\nbuild' system so that we have split systems that way ... this would at\nleast safe testers from downloading 3meg worth of tar file that most\nlikely won't get touched often ...\n\nI'm going to do this tonight, put it up and see what ppl thing ... if\nnothing else, it makes it easier for ppl to download smaller chunks ...\n\nMarc G. Fournier ICQ#7615664 IRC\nNick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 22 Feb 2000 16:32:32 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Splitting distributions (Was: Re: [HACKERS] ECPG / Release)"
},
{
"msg_contents": "The Hermit Hacker wrote:\n> We should be able to do a pgsql-docs.tar.gz and pgsql-src.tar.gz at the\n> very least ... putting 'doc' in a seperate tar file would reduce the size\n> by ~3meg:\n[snip]\n> I'm going to do this tonight, put it up and see what ppl thing ... if\n> nothing else, it makes it easier for ppl to download smaller chunks ...\n\nKindof like how the RPM distribution is split, but not as fine, right?\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 22 Feb 2000 15:46:31 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Splitting distributions (Was: Re: [HACKERS] ECPG / Release)"
},
{
"msg_contents": "> One of the things I'd like to look at for 7.1 is start to split off the\n> distributions ... we're up to a 7meg distribution and growing ... \n> \n> We should be able to do a pgsql-docs.tar.gz and pgsql-src.tar.gz at the\n> very least ... putting 'doc' in a seperate tar file would reduce the size\n> by ~3meg:\n> \n> total 3024\n> -rw-r--r-- 1 scrappy wheel 2969424 Feb 22 15:27 doc.tar.gz\n> \n> Actually, is there a reason we can't do this now? I can change the 'tar\n> build' system so that we have split systems that way ... this would at\n> least safe testers from downloading 3meg worth of tar file that most\n> likely won't get touched often ...\n> \n> I'm going to do this tonight, put it up and see what ppl thing ... if\n> nothing else, it makes it easier for ppl to download smaller chunks ...\n\nSeems like a good idea. Only RPM packages would have to know of the\nsplit.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 22 Feb 2000 15:55:29 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Splitting distributions (Was: Re: [HACKERS] ECPG / Release)"
},
{
"msg_contents": "On Tue, 22 Feb 2000, Lamar Owen wrote:\n\n> The Hermit Hacker wrote:\n> > We should be able to do a pgsql-docs.tar.gz and pgsql-src.tar.gz at the\n> > very least ... putting 'doc' in a seperate tar file would reduce the size\n> > by ~3meg:\n> [snip]\n> > I'm going to do this tonight, put it up and see what ppl thing ... if\n> > nothing else, it makes it easier for ppl to download smaller chunks ...\n> \n> Kindof like how the RPM distribution is split, but not as fine, right?\n\nPretty much ... longer term goal, IMHO, is to make a more compact\ndistribution so that if I want libpq on a clint machine, I don't have to\ndownload the whole backend code too ... \n\nBut, for now, I'm just creating simple .tar.gz files that all have to be\ndownloaded, but, for instance, for those with slow links, they don't have\nto hope all 7meg gets down ... they can download smaller files.\n\nI'm creating them right now, broken down as:\n\ndocs -> pgsql/docs \ntest -> pgsql/src/test\nsupport -> pgsql/src/{interfaces,bin}\nbase -> pgsql (minus the above)\n\nBasically, it makes this:\n\n-rw-r--r-- 1 pgsql wheel 7543517 Feb 22 16:04 postgresql.snapshot.tar.gz\n\nDownload as:\n\n-rw-r--r-- 1 pgsql wheel 2261079 Feb 22 16:06 postgresql.snapshot.base.tar.gz\n-rw-r--r-- 1 pgsql wheel 2973217 Feb 22 16:04 postgresql.snapshot.docs.tar.gz\n-rw-r--r-- 1 pgsql wheel 1318456 Feb 22 16:06 postgresql.snapshot.support.tar.gz\n-rw-r--r-- 1 pgsql wheel 987847 Feb 22 16:05 postgresql.snapshot.test.tar.gz\n\nI've just split the 7.0beta1.tar.gz file up also:\n\n-rw-r--r-- 1 pgsql wheel 7533458 Feb 21 18:34 postgresql-7.0beta1.tar.gz\n\n-rw-r--r-- 1 pgsql wheel 2260487 Feb 22 16:14 postgresql-7.0beta1.base.tar.gz\n-rw-r--r-- 1 pgsql wheel 1310901 Feb 22 16:14 postgresql-7.0beta1.support.tar.gz\n-rw-r--r-- 1 pgsql wheel 987270 Feb 22 16:13 postgresql-7.0beta1.test.tar.gz\n-rw-r--r-- 1 pgsql wheel 2973182 Feb 22 16:13 postgresql-7.0beta1.docs.tar.gz\n\nVince, can you put something on the Web page showing the 'split' files as\nwell, so that ppl know they exist and can download those ones instead?\n\n\n",
"msg_date": "Tue, 22 Feb 2000 17:16:29 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Splitting distributions (Was: Re: [HACKERS] ECPG / Release)"
},
{
"msg_contents": "On Tue, 22 Feb 2000, Bruce Momjian wrote:\n\n> > One of the things I'd like to look at for 7.1 is start to split off the\n> > distributions ... we're up to a 7meg distribution and growing ... \n> > \n> > We should be able to do a pgsql-docs.tar.gz and pgsql-src.tar.gz at the\n> > very least ... putting 'doc' in a seperate tar file would reduce the size\n> > by ~3meg:\n> > \n> > total 3024\n> > -rw-r--r-- 1 scrappy wheel 2969424 Feb 22 15:27 doc.tar.gz\n> > \n> > Actually, is there a reason we can't do this now? I can change the 'tar\n> > build' system so that we have split systems that way ... this would at\n> > least safe testers from downloading 3meg worth of tar file that most\n> > likely won't get touched often ...\n> > \n> > I'm going to do this tonight, put it up and see what ppl thing ... if\n> > nothing else, it makes it easier for ppl to download smaller chunks ...\n> \n> Seems like a good idea. Only RPM packages would have to know of the\n> split.\n\nHuh? *raised eyebrow*\n\n\n",
"msg_date": "Tue, 22 Feb 2000 17:32:19 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Splitting distributions (Was: Re: [HACKERS] ECPG / Release)"
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> I'm creating them right now, broken down as:\n\n> docs -> pgsql/docs \n> test -> pgsql/src/test\n> support -> pgsql/src/{interfaces,bin}\n> base -> pgsql (minus the above)\n\nOne gripe on this --- the docs are sort-of optional, and the test stuff\nis certainly optional, but the interfaces and bin directories are *not*\noptional. It'd be a good idea to make sure this is noted on the webpage\nor in the FTP directory's README file...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 22 Feb 2000 17:11:29 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Splitting distributions (Was: Re: [HACKERS] ECPG / Release) "
},
{
"msg_contents": "Keep in mind psql needs the doc/src/sgml files for psql help.\n\n\n> On Tue, 22 Feb 2000, Lamar Owen wrote:\n> \n> > The Hermit Hacker wrote:\n> > > We should be able to do a pgsql-docs.tar.gz and pgsql-src.tar.gz at the\n> > > very least ... putting 'doc' in a seperate tar file would reduce the size\n> > > by ~3meg:\n> > [snip]\n> > > I'm going to do this tonight, put it up and see what ppl thing ... if\n> > > nothing else, it makes it easier for ppl to download smaller chunks ...\n> > \n> > Kindof like how the RPM distribution is split, but not as fine, right?\n> \n> Pretty much ... longer term goal, IMHO, is to make a more compact\n> distribution so that if I want libpq on a clint machine, I don't have to\n> download the whole backend code too ... \n> \n> But, for now, I'm just creating simple .tar.gz files that all have to be\n> downloaded, but, for instance, for those with slow links, they don't have\n> to hope all 7meg gets down ... they can download smaller files.\n> \n> I'm creating them right now, broken down as:\n> \n> docs -> pgsql/docs \n> test -> pgsql/src/test\n> support -> pgsql/src/{interfaces,bin}\n> base -> pgsql (minus the above)\n> \n> Basically, it makes this:\n> \n> -rw-r--r-- 1 pgsql wheel 7543517 Feb 22 16:04 postgresql.snapshot.tar.gz\n> \n> Download as:\n> \n> -rw-r--r-- 1 pgsql wheel 2261079 Feb 22 16:06 postgresql.snapshot.base.tar.gz\n> -rw-r--r-- 1 pgsql wheel 2973217 Feb 22 16:04 postgresql.snapshot.docs.tar.gz\n> -rw-r--r-- 1 pgsql wheel 1318456 Feb 22 16:06 postgresql.snapshot.support.tar.gz\n> -rw-r--r-- 1 pgsql wheel 987847 Feb 22 16:05 postgresql.snapshot.test.tar.gz\n> \n> I've just split the 7.0beta1.tar.gz file up also:\n> \n> -rw-r--r-- 1 pgsql wheel 7533458 Feb 21 18:34 postgresql-7.0beta1.tar.gz\n> \n> -rw-r--r-- 1 pgsql wheel 2260487 Feb 22 16:14 postgresql-7.0beta1.base.tar.gz\n> -rw-r--r-- 1 pgsql wheel 1310901 Feb 22 16:14 postgresql-7.0beta1.support.tar.gz\n> -rw-r--r-- 1 pgsql wheel 987270 Feb 22 16:13 postgresql-7.0beta1.test.tar.gz\n> -rw-r--r-- 1 pgsql wheel 2973182 Feb 22 16:13 postgresql-7.0beta1.docs.tar.gz\n> \n> Vince, can you put something on the Web page showing the 'split' files as\n> well, so that ppl know they exist and can download those ones instead?\n> \n> \n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 22 Feb 2000 17:19:20 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Splitting distributions (Was: Re: [HACKERS] ECPG / Release)"
},
{
"msg_contents": "> > > I'm going to do this tonight, put it up and see what ppl thing ... if\n> > > nothing else, it makes it easier for ppl to download smaller chunks ...\n> > \n> > Seems like a good idea. Only RPM packages would have to know of the\n> > split.\n> \n> Huh? *raised eyebrow*\n\nDon't RPM's have to know to download two tarballs?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 22 Feb 2000 17:19:50 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Splitting distributions (Was: Re: [HACKERS] ECPG / Release)"
},
{
"msg_contents": "> The Hermit Hacker <[email protected]> writes:\n> > I'm creating them right now, broken down as:\n> \n> > docs -> pgsql/docs \n> > test -> pgsql/src/test\n> > support -> pgsql/src/{interfaces,bin}\n> > base -> pgsql (minus the above)\n> \n> One gripe on this --- the docs are sort-of optional, and the test stuff\n> is certainly optional, but the interfaces and bin directories are *not*\n> optional. It'd be a good idea to make sure this is noted on the webpage\n> or in the FTP directory's README file...\n\npsql needs the sgml files for help, so none are optional, I think.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 22 Feb 2000 17:20:35 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Splitting distributions (Was: Re: [HACKERS] ECPG / Release)"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> One gripe on this --- the docs are sort-of optional, and the test stuff\n>> is certainly optional, but the interfaces and bin directories are *not*\n>> optional. It'd be a good idea to make sure this is noted on the webpage\n>> or in the FTP directory's README file...\n\n> psql needs the sgml files for help, so none are optional, I think.\n\nBut the tarball has a prebuilt psql help file, or should.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 22 Feb 2000 17:38:17 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Splitting distributions (Was: Re: [HACKERS] ECPG / Release) "
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > > > I'm going to do this tonight, put it up and see what ppl thing ... if\n> > > > nothing else, it makes it easier for ppl to download smaller chunks ...\n> > >\n> > > Seems like a good idea. Only RPM packages would have to know of the\n> > > split.\n> >\n> > Huh? *raised eyebrow*\n> \n> Don't RPM's have to know to download two tarballs?\n\nOr three, or ten..... In the case of our RPM's, there are eight source\nfiles needed. Not a big deal, though.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 22 Feb 2000 17:59:36 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Splitting distributions (Was: Re: [HACKERS] ECPG / Release)"
},
{
"msg_contents": "On Tue, 22 Feb 2000, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > I'm creating them right now, broken down as:\n> \n> > docs -> pgsql/docs \n> > test -> pgsql/src/test\n> > support -> pgsql/src/{interfaces,bin}\n> > base -> pgsql (minus the above)\n> \n> One gripe on this --- the docs are sort-of optional, and the test stuff\n> is certainly optional, but the interfaces and bin directories are *not*\n> optional. It'd be a good idea to make sure this is noted on the webpage\n> or in the FTP directory's README file...\n\nAlready done...see README.dist-split :) As I note in that file, all\nchunks have to be downloaded, since I didn't want to differentiate at this\ntime between what is optional and what isn't. The purpose, for v7.0, was\njust to make it 4 smaller tar files then one large one ... for v7.1, I'd\nlike to work on cleaning up the 'optional' stuff ...\n\n\n",
"msg_date": "Tue, 22 Feb 2000 19:42:06 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Splitting distributions (Was: Re: [HACKERS] ECPG / Release) "
},
{
"msg_contents": "On Tue, 22 Feb 2000, Bruce Momjian wrote:\n\n> > > > I'm going to do this tonight, put it up and see what ppl thing ... if\n> > > > nothing else, it makes it easier for ppl to download smaller chunks ...\n> > > \n> > > Seems like a good idea. Only RPM packages would have to know of the\n> > > split.\n> > \n> > Huh? *raised eyebrow*\n> \n> Don't RPM's have to know to download two tarballs?\n\nthe RPMs are totally seperate tarballs ... unrelated to this ... about the\nonly thing that would/could be affected is the FreeBSD ports collection,\nbut I'm still making the 'all-inclusive tar ball', so if ppl have a high\nspeed connection and want to download it all at once, they can ... just\nincreasing the options \n\n",
"msg_date": "Tue, 22 Feb 2000 19:43:28 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Splitting distributions (Was: Re: [HACKERS] ECPG / Release)"
},
{
"msg_contents": "On Tue, 22 Feb 2000, Bruce Momjian wrote:\n\n> > The Hermit Hacker <[email protected]> writes:\n> > > I'm creating them right now, broken down as:\n> > \n> > > docs -> pgsql/docs \n> > > test -> pgsql/src/test\n> > > support -> pgsql/src/{interfaces,bin}\n> > > base -> pgsql (minus the above)\n> > \n> > One gripe on this --- the docs are sort-of optional, and the test stuff\n> > is certainly optional, but the interfaces and bin directories are *not*\n> > optional. It'd be a good idea to make sure this is noted on the webpage\n> > or in the FTP directory's README file...\n> \n> psql needs the sgml files for help, so none are optional, I think.\n\nTechnically, there should be a way of buildling a release distribution\nthat can build psql without requiring anything but libpq being installed\n... its stuff I'm currently looking into ... but not for v7.0 ...\n\n",
"msg_date": "Tue, 22 Feb 2000 19:45:05 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Splitting distributions (Was: Re: [HACKERS] ECPG / Release)"
},
{
"msg_contents": "\nOn 22-Feb-00 The Hermit Hacker wrote:\n> On Tue, 22 Feb 2000, Bruce Momjian wrote:\n> \n>> > > > I'm going to do this tonight, put it up and see what ppl thing ... if\n>> > > > nothing else, it makes it easier for ppl to download smaller chunks ...\n>> > > \n>> > > Seems like a good idea. Only RPM packages would have to know of the\n>> > > split.\n>> > \n>> > Huh? *raised eyebrow*\n>> \n>> Don't RPM's have to know to download two tarballs?\n> \n> the RPMs are totally seperate tarballs ... unrelated to this ... about the\n> only thing that would/could be affected is the FreeBSD ports collection,\n\nSpeaking of which, I sent a note back to Andreas about that.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN: $24.95/mo or less - 56K Dialup: $17.95/mo or less at Pop4\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n",
"msg_date": "Tue, 22 Feb 2000 19:15:09 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Splitting distributions (Was: Re: [HACKERS] ECPG / Release)"
},
{
"msg_contents": "On 2000-02-22, The Hermit Hacker mentioned:\n\n> Pretty much ... longer term goal, IMHO, is to make a more compact\n> distribution so that if I want libpq on a clint machine, I don't have to\n> download the whole backend code too ... \n\nUnfortunately there are currently some severe bogosities in the build\nprocess that will prevent this. Certain subdirectories reach half way\nacross the source tree to get the stuff they need. I've thrown several\nhints around in this direction, I would like to give the build process a\nserious massage for the next release. This item would be considered.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n",
"msg_date": "Wed, 23 Feb 2000 02:20:06 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Splitting distributions (Was: Re: [HACKERS] ECPG / Release)"
},
{
"msg_contents": "Ooh, good point. Never mind.\n\n\n> Bruce Momjian <[email protected]> writes:\n> >> One gripe on this --- the docs are sort-of optional, and the test stuff\n> >> is certainly optional, but the interfaces and bin directories are *not*\n> >> optional. It'd be a good idea to make sure this is noted on the webpage\n> >> or in the FTP directory's README file...\n> \n> > psql needs the sgml files for help, so none are optional, I think.\n> \n> But the tarball has a prebuilt psql help file, or should.\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 22 Feb 2000 20:21:28 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Splitting distributions (Was: Re: [HACKERS] ECPG / Release)"
}
] |
[
{
"msg_contents": "Hi,\n\nsorry if I miss something - there are so many changes in current\ndevelopment and I didn't track them thoroughly in mailing list.\n\nI've tried to port my db scheme which works with 6.5 to 7.0 and\ngot little problem:\n\ncreate view www_auth as select a.account as user_name, a.password, b.nick as \ngroup_name from users a, resources b, privilege_user_map c\n where a.auth_id = c.auth_id and b.res_id = c.res_id and \n (a.account_valid_until is null or a.account_valid_until > datetime('now'::text))\n and c.perm_id = 1;\n\nERROR: No such function 'datetime' with the specified attributes\n\nI had to use datetime('now'::text) as a workaround of bug in 6.5.3.\n\nI tried just datetime('now') but still have the same problem.\n\nDoes the above view will works with now() ? \n\n\nAnother problem:\n\ncreate table tt (i int4, a datetime default 'now');\ndoesn't works and I still need \ncreate table tt (i int4, a datetime default now());\n\nWe have discussed this problem some time ago.\n\n\tRegards,\n\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Wed, 23 Feb 2000 00:24:30 +0300 (GMT)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "'now' in 7.0"
},
{
"msg_contents": "Oleg Bartunov <[email protected]> writes:\n> create view www_auth as select a.account as user_name, a.password, b.nick as \n> group_name from users a, resources b, privilege_user_map c\n> where a.auth_id = c.auth_id and b.res_id = c.res_id and \n> (a.account_valid_until is null or a.account_valid_until > datetime('now'::text))\n> and c.perm_id = 1;\n\n> ERROR: No such function 'datetime' with the specified attributes\n\nApparently the datetime() function got renamed to timestamp() in the\nrecent consolidation of date/time types. I'd actually recommend that\nyou write CURRENT_TIMESTAMP, which is the SQL-approved notation...\n\n> Does the above view will works with now() ? \n\nThat should work too.\n\n> Another problem:\n> create table tt (i int4, a datetime default 'now');\n> doesn't works and I still need \n> create table tt (i int4, a datetime default now());\n\n? Works for me:\n\nregression=# create table tt (i int4, a datetime default 'now');\nCREATE\nregression=# insert into tt values(1);\nINSERT 653163 1\nregression=# insert into tt values(1);\nINSERT 653164 1\nregression=# select * from tt;\n i | a\n---+------------------------\n 1 | 2000-02-22 17:15:16-05\n 1 | 2000-02-22 17:15:18-05\n(2 rows)\n\nalthough here also I think now() or CURRENT_TIMESTAMP would be safer\ncoding.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 22 Feb 2000 17:28:25 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 'now' in 7.0 "
},
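For reference, a sketch of Oleg's view using the notation Tom recommends, assuming the beta parser accepts CURRENT_TIMESTAMP inside a view definition (table and column names are Oleg's):

    create view www_auth as
      select a.account as user_name, a.password, b.nick as group_name
        from users a, resources b, privilege_user_map c
       where a.auth_id = c.auth_id and b.res_id = c.res_id
         and (a.account_valid_until is null
              or a.account_valid_until > CURRENT_TIMESTAMP)
         and c.perm_id = 1;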
{
"msg_contents": "> > ERROR: No such function 'datetime' with the specified attributes\n> Apparently the datetime() function got renamed to timestamp() in the\n> recent consolidation of date/time types. I'd actually recommend that\n> you write CURRENT_TIMESTAMP, which is the SQL-approved notation...\n> although here also I think now() or CURRENT_TIMESTAMP would be safer\n> coding.\n\nRight. We stayed away from recommending anything to do with\n\"timestamp\" in the past because it was such a brain-damaged\nimplementation. \n\nSorry for the porting effort; I could imagine someone working on a\n\"datetime compatibility package\" which would define some of these\nolder functions. It would not need any compiled code, just a set of\nCREATE FUNCTION definitions to hook up the new code with the old\nnames, something possible with Tom Lane's decoupling of entry points\nfrom names...\n\nIf you are interested in doing this Oleg I'm sure we could slip it\ninto the beta tarball...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Mon, 28 Feb 2000 15:26:48 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 'now' in 7.0"
}
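A minimal sketch of what such a compatibility package could contain, one CREATE FUNCTION per old name; this assumes the new timestamp type accepts the same input text, and is untested against the beta:

    -- map the old datetime(text) call onto the new timestamp type
    create function datetime(text) returns timestamp
        as 'select $1::timestamp' language 'sql';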
] |
[
{
"msg_contents": "> Dear Mr Momjian\n> \n> Just a quick suggestion for an added feature: \"DROP COLUMN columnname\n> FROM table...\" and \"ALTER COLUMN columname FROM table...\" queries would\n> spare my hair when I make mistakes in table creation.\n\nWe have the DROP, but will not appear in 7.0.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 22 Feb 2000 20:23:17 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Feature Request"
},
{
"msg_contents": "On Tue, Feb 22, 2000 at 08:23:17PM -0500, Bruce Momjian wrote:\n> > Dear Mr Momjian\n> > \n> > Just a quick suggestion for an added feature: \"DROP COLUMN columnname\n> > FROM table...\" and \"ALTER COLUMN columname FROM table...\" queries would\n> > spare my hair when I make mistakes in table creation.\n> \n> We have the DROP, but will not appear in 7.0.\n\nOne last little note on the whole DROP COLUMN discussion:\n\nA snip from Phil Greenspun's photo.net site:\n\n\"Adding a column to a relational database table seldom breaks\nqueries. Until Oracle 8.1.5, you weren't able to drop a column.\"\n\nMy guess would be that Oracle found out internally what we've been\ndiscussing here: doing DROP COLUMN right is _hard_.\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Wed, 23 Feb 2000 16:40:14 -0600",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Feature Request"
},
{
"msg_contents": "At 04:40 PM 2/23/00 -0600, Ross J. Reedstrom wrote:\n\n>\"Adding a column to a relational database table seldom breaks\n>queries. Until Oracle 8.1.5, you weren't able to drop a column.\"\n>\n>My guess would be that Oracle found out internally what we've been\n>discussing here: doing DROP COLUMN right is _hard_.\n\nThey ended up providing both kinds of drop that have been discussed\nhere, i.e. a slow one that actually mucks through the table getting\nrid of physical data, and a quick one that simply marks the column\nas being invisible resulting in it being ignored in the future.\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Wed, 23 Feb 2000 15:03:28 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Feature Request"
}
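The two flavors Don describes correspond to Oracle 8i syntax roughly like the following (table and column names are invented for illustration):

    -- quick drop: mark the column invisible, leave the data in place
    ALTER TABLE employees SET UNUSED (middle_name);
    -- reclaim the space later in one slow pass over the table
    ALTER TABLE employees DROP UNUSED COLUMNS;

    -- or do the slow, physical drop in a single step
    ALTER TABLE employees DROP COLUMN middle_name;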
] |
[
{
"msg_contents": "Hi there,\n\nI've just had a look at the 7.0beta and I've seen your enhancements \nabout LIMIT optimization.\nDid you read by chance my previous message intitled \n\"Generalized Top Queries on PostgreSQL\"?\nWhen I wrote it I hadn't read the thread \nintitled \"Solution for LIMIT cost estimation\" yet.\n\nWhat you did looks pretty similar to part of our extension\n(cost model and pruning rules). The main differences are:\n\n- the FOR EACH generalization.\n\n- You cannot select the top N rows according to criterion A ordering\n the results with a different criterion B.\n\n- If you ask for the best 10 rows, from a relation including \n 100000 rows, you have to do a traditional sort on 100000 rows and\n then retain only the first 10, doing more comparisons than requested.\n\n- You can choose a \"fast-start\" plan (i.e., basically, \n a pipelined plan), but you cannot performe an \"early-stop\" of \n the stream when you have a \"slow-start\" plan (e.g. involving sorts \n or hash tables). We noticed that this kind of plan often \n outperforms the first one.\n\nSo, we are looking forward to see how the new LIMIT optimization works\n(we will do some tests as soon as possible). Have you noticed\nrelevant improvements? \n\nActually, we should say we can't figure out the reason for\nmanaging the LIMIT clause in a so complicated way, not providing \na node in the plan as any other operator. \nIn our opinion, the choice to provide a separated process of the \nLIMIT clause has two problems:\n1. We find it more complicated and not so natural.\n2. It is an obstacle to some optimizations and to some functionalities\n (how to use it in subselects or views?)\n\nBest regards\n\nR. Cornacchia ([email protected]) Computer Science, University of\nBologna\n\nA. Ghidini ([email protected]) Computer Science, University of Bologna \n\nDr. Paolo Ciaccia ([email protected]) DEIS CSITE-CNR, University of\nBologna\n\n===========================================================\n\nVIRGILIO MAIL - Il tuo indirizzo E-mail gratis e per sempre\nhttp://mail.virgilio.it/\n\n\nVIRGILIO - La guida italiana a Internet\nhttp://www.virgilio.it/\n",
"msg_date": "Wed, 23 Feb 2000 00:40:43 -0500",
"msg_from": "\"Roberto Cornacchia\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "about 7.0 LIMIT optimization"
},
{
"msg_contents": "> Did you read by chance my previous message intitled \n> \"Generalized Top Queries on PostgreSQL\"?\n\nI vaguely recall it, but forget the details...\n\n> - You cannot select the top N rows according to criterion A ordering\n> the results with a different criterion B.\n\nTrue, but I don't see how to do that with one indexscan (for that\nmatter, I don't even see how to express it in the SQL subset that\nwe support...)\n\n> - If you ask for the best 10 rows, from a relation including \n> 100000 rows, you have to do a traditional sort on 100000 rows and\n> then retain only the first 10, doing more comparisons than requested.\n\nNot if there's an index that implements the ordering --- and if there\nis not, I don't see how to avoid the sort anyway.\n\n> - You can choose a \"fast-start\" plan (i.e., basically, \n> a pipelined plan), but you cannot performe an \"early-stop\" of \n> the stream when you have a \"slow-start\" plan (e.g. involving sorts \n> or hash tables).\n\nWhy not? The executor *will* stop when it has as many output rows as\nthe LIMIT demands.\n\n> We noticed that this kind of plan often outperforms the first one.\n\nI'd be the first to admit that the cost model needs some fine-tuning\nstill. It's just a conceptual structure at this point.\n\n> Actually, we should say we can't figure out the reason for\n> managing the LIMIT clause in a so complicated way, not providing \n> a node in the plan as any other operator. \n\nWe will probably end up doing it like that sooner or later, in order to\nallow attaching LIMIT to sub-selects. I don't take any credit or blame\nfor the execution-time implementation of LIMIT; I just worked with what\nI found...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 23 Feb 2000 01:03:28 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: about 7.0 LIMIT optimization "
}
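For reference, once LIMIT is attachable to sub-selects, the "top N by criterion A, ordered by criterion B" query Roberto describes could be written as a nested select; this is a sketch with invented table and column names, not something the 7.0 parser accepts:

    SELECT *
      FROM (SELECT * FROM scores ORDER BY points DESC LIMIT 10) AS top10
     ORDER BY player_name;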
] |
[
{
"msg_contents": "Hi,\n\nI posted this a few days ago on the pgsql-sql list, but got no response. Is\nthere any way to enable loading tclLDAP from within a pltcl function? I\nwould like to maintain an openldap directory using an update/insert trigger.\n\nI modified pltcl.c to load a non-safe interpreter and recompiled. This\nallowed me to use the \"load\" command, but the tclLDAP library still would\nnot load. The error message is:\n\nERROR: pltcl: couldn't load file \"/usr/lib/tclLDAP/Ldap.so\":\n/usr/lib/tclLDAP/Ldap.so: undefined symbol: Tcl_PkgProvide (#1)\n\nI am not even close to being fluent in c. I would greatly appreciate any\nsuggestions.\n\nBTW, perhaps in some future release you might consider allowing a non-safe\ntcl interpreter (or at least some kind of controlled external library\nsupport) as an option.\n\nThanks,\n\nJoe Conway\n\n\n-----Original Message-----\nFrom: Joe Conway <[email protected]>\nTo: [email protected] <[email protected]>\nDate: Sunday, February 20, 2000 11:55 AM\nSubject: pltcl and LDAP\n\n\n>I'm working on a project right now which involves updating an LDAP\ndirectory\n>from a PostgreSQL database. The database includes a table called\n>employee_data. I would like to use the tclLDAP library from within a pltcl\n>function and create a trigger on the employee_data table to update the LDAP\n>directory every time something changes.\n>\n>I have been able to get the tclLDAP functions to work properly from with\n>pgtclsh, but not from within pltcl. The documentation states that pltcl\nuses\n>a safe interpreter which does not allow a tcl load command.\n>\n>Has anyone else tried to do this, i.e. synch up a PostgreSQL database with\na\n>LDAP directory? If so, how? I have considered just writing a pgtclsh script\n>and running it from cron, but I would prefer real time updates via a\ntrigger\n>if possible.\n>\n>Any help or suggestions would be much appreciated.\n>\n>Thanks,\n>\n>Joe\n>\n\n",
"msg_date": "Tue, 22 Feb 2000 21:57:00 -0800",
"msg_from": "\"Joe Conway\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "pltcl and LDAP"
},
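A sketch of the trigger wiring Joe describes, with hypothetical names; the Tcl body is only a placeholder, since actually loading tclLDAP still needs the unsafe-interpreter change discussed here:

    create function employee_ldap_sync() returns opaque as '
        # a real body would call the tclLDAP commands to push the change
        elog NOTICE "employee_data changed, LDAP sync would run here"
        return OK
    ' language 'pltcl';

    create trigger employee_ldap_sync_trig
        after insert or update on employee_data
        for each row execute procedure employee_ldap_sync();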
{
"msg_contents": "Joe Conway wrote:\n\n> Hi,\n>\n> I posted this a few days ago on the pgsql-sql list, but got no response. Is\n> there any way to enable loading tclLDAP from within a pltcl function? I\n> would like to maintain an openldap directory using an update/insert trigger.\n>\n> I modified pltcl.c to load a non-safe interpreter and recompiled. This\n> allowed me to use the \"load\" command, but the tclLDAP library still would\n> not load. The error message is:\n>\n> ERROR: pltcl: couldn't load file \"/usr/lib/tclLDAP/Ldap.so\":\n> /usr/lib/tclLDAP/Ldap.so: undefined symbol: Tcl_PkgProvide (#1)\n\n Um - and that's the only unresolved one?\n\n Which version of Tcl is used from PL/Tcl, and which version\n is the Ldap.so linked against?\n\n> I am not even close to being fluent in c. I would greatly appreciate any\n> suggestions.\n>\n> BTW, perhaps in some future release you might consider allowing a non-safe\n> tcl interpreter (or at least some kind of controlled external library\n> support) as an option.\n\n Kinda that is on my personal TODO/wishlist. Splitting PL/Tcl\n into two separate interpreters internally, identified by\n different language names. The unsafe language, with full\n access to OS under the postgres userID, would be an untrusted\n language, so creation of functions is restricted to DB\n superusers.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Wed, 23 Feb 2000 11:22:52 +0100 (CET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pltcl and LDAP"
},
{
"msg_contents": "I wrote:\n\n> Joe Conway wrote:\n>\n> > I modified pltcl.c to load a non-safe interpreter and recompiled. This\n> > allowed me to use the \"load\" command, but the tclLDAP library still would\n> > not load. The error message is:\n> >\n> > ERROR: pltcl: couldn't load file \"/usr/lib/tclLDAP/Ldap.so\":\n> > /usr/lib/tclLDAP/Ldap.so: undefined symbol: Tcl_PkgProvide (#1)\n>\n> Um - and that's the only unresolved one?\n>\n> Which version of Tcl is used from PL/Tcl, and which version\n> is the Ldap.so linked against?\n\n I've checked by using a normal (unsafe) interpreter like you.\n And I had no problems loading a shared extension that\n definitely calls Tcl_PkgProvide().\n\n But this reminds me to some similar dynamic loading problems\n Bruce had once with PL/pgSQL on FreeBSD with global\n variables.\n\n So what's your platform, compiler, Tcl-version?\n\n I'm using Linux 2.2.x, glibc-2, gcc 2.8.1, Tcl 8.0 here.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Wed, 23 Feb 2000 16:34:29 +0100 (CET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pltcl and LDAP"
}
] |
[
{
"msg_contents": "\n> AFAIK, MS Access have no transactions inside it,\n> Informix (at least old versions I worked with) always \n> perform create,drop, alter object outside transaction \n> but IMHO it's not right behavior.\n\nMS Access has transactions and Informix (Version 5.00 - 9.20) performs \ncreate, drop, alter inside the transaction, same as Oracle and DB2.\n\n> I believe postgres's behavior more meaningful, \n> but IMHO, this example is quite far from real life.\n\nI am pretty sure that the behavior of the others\nis the standard.\n\nWhat PostgreSQL currently also lacks, to make this really useful\nis ANSI SQL SQLSTATE (most others also have an int sqlcode), \nso you can decide wether this certain error can be ignored or fixed \ninside this transaction. \nThe string parsing we can do is far from optimal. \n\nAndreas\n",
"msg_date": "Wed, 23 Feb 2000 09:26:32 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] TRANSACTIONS"
},
{
"msg_contents": "Dmitry Samersoff wrote:\n> \n> On 23-Feb-2000 Zeugswetter Andreas SB wrote:\n> >\n> >> AFAIK, MS Access have no transactions inside it,\n> >> Informix (at least old versions I worked with) always\n> >> perform create,drop, alter object outside transaction\n> >> but IMHO it's not right behavior.\n> >\n> > MS Access has transactions and Informix (Version 5.00 - 9.20) performs\n> > create, drop, alter inside the transaction, same as Oracle and DB2.\n ^^^^^^\n> \n> OK. May be I miss something.\n\nI don't think so. Not with respect to Oracle. Andreas knows that\nOracle implicitly commits your running transaction -- and starts\na new one whenever a DDL statement is encountered. A large\ndiscussion about this arose about 4 months ago...I can't speak\nfor DB2.\n\n> \n> --\n> Dmitry Samersoff, [email protected], ICQ:3161705\n> http://devnull.wplus.net\n> * There will come soft rains ...\n> \n> ************\n",
"msg_date": "Wed, 23 Feb 2000 05:53:22 -0500",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] TRANSACTIONS"
},
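The Oracle behavior Mike describes looks like this in practice (a sketch of Oracle semantics, not PostgreSQL's):

    insert into t values (1);
    create table u (a int);  -- DDL: Oracle implicitly commits the pending insert here
    rollback;                -- too late, the insert is already committed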
{
"msg_contents": "\nOn 23-Feb-2000 Zeugswetter Andreas SB wrote:\n> \n>> AFAIK, MS Access have no transactions inside it,\n>> Informix (at least old versions I worked with) always \n>> perform create,drop, alter object outside transaction \n>> but IMHO it's not right behavior.\n> \n> MS Access has transactions and Informix (Version 5.00 - 9.20) performs \n> create, drop, alter inside the transaction, same as Oracle and DB2.\n\nOK. May be I miss something.\n\n-- \nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n* There will come soft rains ...\n",
"msg_date": "Wed, 23 Feb 2000 17:53:13 +0300 (MSK)",
"msg_from": "Dmitry Samersoff <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: AW: [HACKERS] TRANSACTIONS"
}
] |
[
{
"msg_contents": "\n> Jose Soares <[email protected]> writes:\n> > -------------------------------------------------------\n> > Interbase, Oracle,Informix,Solid,Ms-Access,DB2:\n> > -------------------------------------------------------\n> > connect hygea.gdb;\n> > create table temp(a int);\n> > insert into temp values (1);\n> > insert into temp values (1000000000000000000000000000000000);\n> > commit;\n> > select * from temp;\n> \n> > arithmetic exception, numeric overflow, or string truncation\n> \n> > A\n> > ===========\n> > 1\n> \n> > I would like to know what the Standard says and who is in the rigth path\n> > PostgreSQL or the others, considering the two examples reported below.\n> \n> I think those other guys are unquestionably failing to \n> conform to SQL92.\n> 6.10 general rule 3.a says\n\nAll others also throw an error for this statement, and thus conform.\nAs you can see from the select only the first row is inserted.\nI think the numeric is only an example of an error, it could also be \nany other error, like \"duplicate key\" or the like.\n\n> ......\n> \n> and 3.3.4.1 says\n> \n> The phrase \"an exception condition is raised:\", followed by the\n> name of a condition, is used in General Rules and elsewhere to\n> indicate that the execution of a statement is unsuccessful, ap-\n> plication of General Rules, other than those of Subclause 12.3,\n> \"<procedure>\", and Subclause 20.1, \"<direct SQL statement>\", may\n> be terminated, diagnostic information is to be made available,\n> and execution of the statement is to have no effect on SQL-data\nor\n\nNote here, that they say \"the statement\", which does not say anything about \nother statements in the same transaction.\n\n> schemas. The effect on <target specification>s and SQL descriptor\n> areas of an SQL-statement that terminates with an exception\ncondi-\n> tion, unless explicitly defined by this International Standard,\nis\n> implementation-dependent.\n> \n> I see no way that allowing the transaction to commit after an overflow\n> can be called consistent with the spec.\n\nOf course it can not commit this single statement that was in error.\nAll he wants is to commit all other statements, before and after the\nerror statement inside this same transaction.\n\nAndreas\n",
"msg_date": "Wed, 23 Feb 2000 09:52:59 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] TRANSACTIONS "
},
{
"msg_contents": "Zeugswetter Andreas SB wrote:\n> \n> > Jose Soares <[email protected]> writes:\n> > > -------------------------------------------------------\n> > > Interbase, Oracle,Informix,Solid,Ms-Access,DB2:\n> > > -------------------------------------------------------\n> > > connect hygea.gdb;\n> > > create table temp(a int);\n> > > insert into temp values (1);\n> > > insert into temp values (1000000000000000000000000000000000);\n> > > commit;\n> > > select * from temp;\n> >\n> > > arithmetic exception, numeric overflow, or string truncation\n> >\n> > > A\n> > > ===========\n> > > 1\n> >\n> > > I would like to know what the Standard says and who is in the rigth path\n> > > PostgreSQL or the others, considering the two examples reported below.\n> >\n> > I think those other guys are unquestionably failing to\n> > conform to SQL92.\n> > 6.10 general rule 3.a says\n> \n> All others also throw an error for this statement, and thus conform.\n> As you can see from the select only the first row is inserted.\n> I think the numeric is only an example of an error, it could also be\n> any other error, like \"duplicate key\" or the like.\n> \n> > ......\n> >\n> > and 3.3.4.1 says\n> >\n> > The phrase \"an exception condition is raised:\", followed by the\n> > name of a condition, is used in General Rules and elsewhere to\n> > indicate that the execution of a statement is unsuccessful, ap-\n> > plication of General Rules, other than those of Subclause 12.3,\n> > \"<procedure>\", and Subclause 20.1, \"<direct SQL statement>\", may\n> > be terminated, diagnostic information is to be made available,\n> > and execution of the statement is to have no effect on SQL-data\n> or\n> \n> Note here, that they say \"the statement\", which does not say anything about\n> other statements in the same transaction.\n> \n> > schemas. The effect on <target specification>s and SQL descriptor\n> > areas of an SQL-statement that terminates with an exception\n> condi-\n> > tion, unless explicitly defined by this International Standard,\n> is\n> > implementation-dependent.\n> >\n> > I see no way that allowing the transaction to commit after an overflow\n> > can be called consistent with the spec.\n> \n> Of course it can not commit this single statement that was in error.\n> All he wants is to commit all other statements, before and after the\n> error statement inside this same transaction.\n> \n\nIsn't the intention of a transaction that it is atomic, i.e. either all\nstatements pass or none of them? (see section 5.4 in the standard).\n\nWim\n",
"msg_date": "Wed, 23 Feb 2000 12:09:14 +0100",
"msg_from": "Wim Ceulemans <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] AW: [HACKERS] TRANSACTIONS"
},
{
"msg_contents": "\nWim Ceulemans wrote:\n\n> Zeugswetter Andreas SB wrote:\n> >\n> > > Jose Soares <[email protected]> writes:\n> > > > -------------------------------------------------------\n> > > > Interbase, Oracle,Informix,Solid,Ms-Access,DB2:\n> > > > -------------------------------------------------------\n> > > > connect hygea.gdb;\n> > > > create table temp(a int);\n> > > > insert into temp values (1);\n> > > > insert into temp values (1000000000000000000000000000000000);\n> > > > commit;\n> > > > select * from temp;\n> > >\n> > > > arithmetic exception, numeric overflow, or string truncation\n> > >\n> > > > A\n> > > > ===========\n> > > > 1\n> > >\n> > > > I would like to know what the Standard says and who is in the rigth path\n> > > > PostgreSQL or the others, considering the two examples reported below.\n> > >\n> > > I think those other guys are unquestionably failing to\n> > > conform to SQL92.\n> > > 6.10 general rule 3.a says\n> >\n> > All others also throw an error for this statement, and thus conform.\n> > As you can see from the select only the first row is inserted.\n> > I think the numeric is only an example of an error, it could also be\n> > any other error, like \"duplicate key\" or the like.\n> >\n> > > ......\n> > >\n> > > and 3.3.4.1 says\n> > >\n> > > The phrase \"an exception condition is raised:\", followed by the\n> > > name of a condition, is used in General Rules and elsewhere to\n> > > indicate that the execution of a statement is unsuccessful, ap-\n> > > plication of General Rules, other than those of Subclause 12.3,\n> > > \"<procedure>\", and Subclause 20.1, \"<direct SQL statement>\", may\n> > > be terminated, diagnostic information is to be made available,\n> > > and execution of the statement is to have no effect on SQL-data\n> > or\n> >\n> > Note here, that they say \"the statement\", which does not say anything about\n> > other statements in the same transaction.\n> >\n> > > schemas. The effect on <target specification>s and SQL descriptor\n> > > areas of an SQL-statement that terminates with an exception\n> > condi-\n> > > tion, unless explicitly defined by this International Standard,\n> > is\n> > > implementation-dependent.\n> > >\n> > > I see no way that allowing the transaction to commit after an overflow\n> > > can be called consistent with the spec.\n> >\n> > Of course it can not commit this single statement that was in error.\n> > All he wants is to commit all other statements, before and after the\n> > error statement inside this same transaction.\n> >\n>\n> Isn't the intention of a transaction that it is atomic, i.e. either all\n> statements pass or none of them? 
(see section 5.4 in the standard).\n>\n>\n\nThere's another problem, in the following example the transaction il failed but\nthe transation it is not automatically rolledback, it remains instead in an \"ABORT\nSTATE\" waitting for an explicit ROLLBACK or COMMIT.\nIf I'm using transactions from a client program I don't know what's happened to\nthe back end.\n\n\nfirst example:\n^^^^^^^^^^\nprova=> begin work;\nBEGIN\nprova=> create table tmp(a int);\nERROR: Relation 'tmp' already exists\nprova=> drop table tmp;\nNOTICE: (transaction aborted): all queries ignored until end of transaction block\n\n*ABORT STATE*\nprova=> commit;\nEND\n-----------------------------------------------------------------------\nWhat is happening ?\nWhy PostgreSQL doesn't make an implicit ROLLBACK instead of waitting for a\nCOMMIT/ROLLBACK ?\nWhy PostgreSQL allows a COMMIT in this case ?\n\n\nsecond example:\n^^^^^^^^\n\nprova=> begin;\nBEGIN\nprova=> create table tmp(a int);\nCREATE\nprova=> create table tmp(a int);\nERROR: Relation 'tmp' already exists\nprova=> select * from tmp;\nERROR: mdopen: couldn't open tmp: No such file or directory\nprova=> commit;\nEND\nprova=> select * from tmp;\nERROR: tmp: Table does not exist.\n-----------------------------------------------------------------------\nWhat is happening ?\nApparently the transaction was successful but the TMP table doesn't exist after a\nsuccessful COMMIT.\nWhy PostgreSQL allows a COMMIT in this case ?\nWhy in this case PostgreSQL doesn't show the:\n NOTICE: (transaction aborted): all queries ignored until end of\ntransaction block\n *ABORT STATE*\n\n--\nJose' Soares\nBologna, Italy [email protected]\n\n\n",
"msg_date": "Thu, 24 Feb 2000 09:50:11 +0100",
"msg_from": "Jose Soares <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] AW: [HACKERS] TRANSACTIONS"
},
{
"msg_contents": "On Thu, 24 Feb 2000, Jose Soares wrote:\n\n> NOTICE: (transaction aborted): all queries ignored until end of transaction block\n> \n> *ABORT STATE*\n\n> Why PostgreSQL doesn't make an implicit ROLLBACK instead of waitting for a\n> COMMIT/ROLLBACK ?\n\nThe PostgreSQL transaction paradigm seems to be that if you explicitly\nstart a transaction, you get to explicitly end it. This is of course at\nodds with SQL, but it seems internally consistent to me. I hope that one\nof these days we can offer the other behaviour as well.\n\n> Why PostgreSQL allows a COMMIT in this case ?\n\nGood question. I assume it doesn't actually commit though, does it? I\nthink a CHECK_IF_ABORTED (sp?) before calling the commit utility routine\nwould be appropriate. Anyone?\n\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Thu, 24 Feb 2000 17:18:23 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] AW: [HACKERS] TRANSACTIONS"
},
{
"msg_contents": "Peter Eisentraut wrote:\n\n> On Thu, 24 Feb 2000, Jose Soares wrote:\n>\n> > NOTICE: (transaction aborted): all queries ignored until end of transaction block\n> >\n> > *ABORT STATE*\n>\n> > Why PostgreSQL doesn't make an implicit ROLLBACK instead of waitting for a\n> > COMMIT/ROLLBACK ?\n>\n> The PostgreSQL transaction paradigm seems to be that if you explicitly\n> start a transaction, you get to explicitly end it. This is of course at\n> odds with SQL, but it seems internally consistent to me. I hope that one\n> of these days we can offer the other behaviour as well.\n>\n> > Why PostgreSQL allows a COMMIT in this case ?\n>\n> Good question. I assume it doesn't actually commit though, does it? I\n> think a CHECK_IF_ABORTED (sp?) before calling the commit utility routine\n> would be appropriate. Anyone?\n>\n\nSeems that PostgreSQL has a basically difference from other databases, it has two\noperation modes\n\"transaction mode\" and \"non-transaction mode\".\nIf you want initialize a transaction in PostgreSQL you must declare it by using the\nBEGIN WORK\nstatement and an END/ABORT/ROLLBACK/COMMIT statement to terminate the transaction and\nswitch from \"transaction mode\" to \"non-transaction mode\".\nThe SQL92 doesn't have such statement like BEGIN WORK because when you initialize a\nconnection to a database you are all the time in transaction mode.\nShould it be the real problem with transactions ?\n\n>\n> --\n> Peter Eisentraut Sernanders vaeg 10:115\n> [email protected] 75262 Uppsala\n> http://yi.org/peter-e/ Sweden\n>\n> ************\n\n--\nJose' Soares\nBologna, Italy [email protected]\n\n\n\n \nPeter Eisentraut wrote:\nOn Thu, 24 Feb 2000, Jose Soares wrote:\n> NOTICE: (transaction aborted): all queries ignored until end\nof transaction block\n>\n> *ABORT STATE*\n> Why PostgreSQL doesn't make an implicit ROLLBACK instead of waitting\nfor a\n> COMMIT/ROLLBACK ?\nThe PostgreSQL transaction paradigm seems to be that if you explicitly\nstart a transaction, you get to explicitly end it. This is of course\nat\nodds with SQL, but it seems internally consistent to me. I hope that\none\nof these days we can offer the other behaviour as well.\n> Why PostgreSQL allows a COMMIT in this case ?\nGood question. I assume it doesn't actually commit though, does it?\nI\nthink a CHECK_IF_ABORTED (sp?) before calling the commit utility routine\nwould be appropriate. Anyone?\n \nSeems that PostgreSQL has a basically difference from other databases,\nit has two operation modes\n\"transaction mode\" and \"non-transaction mode\".\nIf you want initialize a transaction in PostgreSQL you must declare\nit by using the BEGIN WORK\nstatement and an END/ABORT/ROLLBACK/COMMIT statement to terminate the\ntransaction and switch from \"transaction mode\" to \"non-transaction\nmode\".\nThe SQL92 doesn't have such statement like BEGIN WORK because when\nyou initialize a connection to a database you are all the time in transaction\nmode.\nShould it be the real problem with transactions ?\n \n--\nPeter Eisentraut \nSernanders vaeg 10:115\[email protected] \n75262 Uppsala\nhttp://yi.org/peter-e/ \nSweden\n************\n--\nJose' Soares\nBologna, Italy \[email protected]",
"msg_date": "Mon, 28 Feb 2000 09:44:57 +0100",
"msg_from": "Jose Soares <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] AW: [HACKERS] TRANSACTIONS"
}
] |
[
{
"msg_contents": "\n> >I see no way that allowing the transaction to commit after an overflow\n> >can be called consistent with the spec.\n> \n> You are absolutely right. The whole point is that either a) everything\n> commits or b) nothing commits.\n> Having some kinds of exceptions allow a partial commit while other\n> exceptions rollback the transaction seems like a very error-prone\n> programming environment to me.\n\nThere is no distinction between exceptions.\nA statement that throws an error is not performed (including all\nits triggered events) period.\nThere are sqlstates, that are only warnings, in which case the statement \nis performed.\n\nIn this sense a commit is not partial. The commit should commit\nall statements that were not in error. \nAll other DB's behave in this way.\n\nAndreas\n",
"msg_date": "Wed, 23 Feb 2000 10:06:46 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] TRANSACTIONS "
},
{
"msg_contents": "Yes Andreas this is the point, for a while I felt like \"Don Quijote de la\nMancha\".\nI don't understand well what Standard says about this subject\nbut I think the PostgreSQL transactions is only for perfect people, it is\nabsolutely\nunuseful because PostgreSQL can't distinguish between a fatal error and a\nwarning.\n\n\nZeugswetter Andreas SB wrote:\n\n> > >I see no way that allowing the transaction to commit after an overflow\n> > >can be called consistent with the spec.\n> >\n> > You are absolutely right. The whole point is that either a) everything\n> > commits or b) nothing commits.\n> > Having some kinds of exceptions allow a partial commit while other\n> > exceptions rollback the transaction seems like a very error-prone\n> > programming environment to me.\n>\n> There is no distinction between exceptions.\n> A statement that throws an error is not performed (including all\n> its triggered events) period.\n> There are sqlstates, that are only warnings, in which case the statement\n> is performed.\n>\n> In this sense a commit is not partial. The commit should commit\n> all statements that were not in error.\n> All other DB's behave in this way.\n>\n> Andreas\n>\n> ************\n\n--\nJose' Soares\nBologna, Italy [email protected]\n\n\n",
"msg_date": "Wed, 23 Feb 2000 15:39:17 +0100",
"msg_from": "Jose Soares <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] TRANSACTIONS"
},
{
"msg_contents": "Zeugswetter Andreas SB <[email protected]> writes:\n>> You are absolutely right. The whole point is that either a) everything\n>> commits or b) nothing commits.\n>> Having some kinds of exceptions allow a partial commit while other\n>> exceptions rollback the transaction seems like a very error-prone\n>> programming environment to me.\n\n> In this sense a commit is not partial. The commit should commit\n> all statements that were not in error. \n\nThat interpretation eliminates an absolutely essential capability\n(all-or-none behavior) in favor of what strikes me as a very minor\nprogramming shortcut.\n\n> All other DB's behave in this way.\n\nI find this hard to believe, and even harder to believe that it's\nmandated by the standard. What you're essentially claiming is that\neveryone but us has nested transactions (which'd be the only way to\nroll back a single failed statement inside a transaction) and that\nSQL92 requires nested transactions --- yet it never uses the phrase nor\nmakes the obvious step to allowing user-specified nested transactions.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 23 Feb 2000 10:54:46 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] TRANSACTIONS "
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Tom Lane\n>\n> Zeugswetter Andreas SB <[email protected]> writes:\n> >> You are absolutely right. The whole point is that either a) everything\n> >> commits or b) nothing commits.\n> >> Having some kinds of exceptions allow a partial commit while other\n> >> exceptions rollback the transaction seems like a very error-prone\n> >> programming environment to me.\n>\n> > In this sense a commit is not partial. The commit should commit\n> > all statements that were not in error.\n>\n> That interpretation eliminates an absolutely essential capability\n> (all-or-none behavior) in favor of what strikes me as a very minor\n> programming shortcut.\n>\n> > All other DB's behave in this way.\n>\n> I find this hard to believe,\n\nAt least Oracle does so. AFAIK,transaction cancel\ncould be avoided except FATAL error cases using\nembedded SQL. Dupicate index error is the typical\none.\n\nVadim has already planned to implement savepoint.\nOf cource implicit per statement rollback is one of\nthe case. I have thought it had already been a\nconsensus.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Thu, 24 Feb 2000 02:03:29 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: AW: [HACKERS] TRANSACTIONS "
}
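A sketch of the savepoint behavior Hiroshi mentions, in the shape such a feature would presumably take; no such syntax exists in 7.0, so this is only an illustration of the planned capability:

    begin;
    insert into temp values (1);
    savepoint s1;
    insert into temp values (1000000000000000000000000000000000);  -- fails
    rollback to s1;  -- undo only the failed statement
    commit;          -- the first insert survives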
] |
[
{
"msg_contents": "As usual when replying from here, replies prefixed with PM:\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n\n-----Original Message-----\nFrom: The Hermit Hacker [mailto:[email protected]]\nSent: Tuesday, February 22, 2000 8:33 PM\nTo: Michael Meskes\nCc: PostgreSQL Hacker\nSubject: Splitting distributions (Was: Re: [HACKERS] ECPG / Release)\n\n\nOn Tue, 22 Feb 2000, Michael Meskes wrote:\n\n> When will we release 7.0? I just checked and found that I'm way behind\nmy\n> schedule. The parser is in sync, but there are quite some open bugs.\nSo\n> hopefully there is either enough time left or someone who would like\nto\n> spend some time on fixing bugs. :-)\n\nApril 1st is what I announced, but I'll be shocked if that actually\nhappens :) You should have loads of time ...\n\nAs far as I'm concerned, stuff like ECPG and JDBC and ODBC are\nchangeable\npretty much up to the release date ... they are generally touched, and\nmodified by only one person ...\n\nPM: To be honest, I've been doing it this way at least since 6.5 :-)\nActually, I think that as long as it doesn't change the core (ie: JDBC\ndoesn't use any code outside of the src/interfaces/jdbc directory) then\nit doesn't hurt. That doesn't mean I don't try to meet the beta deadline\nhowever :-)\n\nOne of the things I'd like to look at for 7.1 is start to split off the\ndistributions ... we're up to a 7meg distribution and growing ... \n\nWe should be able to do a pgsql-docs.tar.gz and pgsql-src.tar.gz at the\nvery least ... putting 'doc' in a seperate tar file would reduce the\nsize\nby ~3meg:\n\ntotal 3024\n-rw-r--r-- 1 scrappy wheel 2969424 Feb 22 15:27 doc.tar.gz\n\nActually, is there a reason we can't do this now? I can change the 'tar\nbuild' system so that we have split systems that way ... this would at\nleast safe testers from downloading 3meg worth of tar file that most\nlikely won't get touched often ...\n\nPM: I'm surprised we haven't done it earlier.\n\nI'm going to do this tonight, put it up and see what ppl thing ... if\nnothing else, it makes it easier for ppl to download smaller chunks ...\n\nPM: Agreed.\n",
"msg_date": "Wed, 23 Feb 2000 09:09:21 -0000",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Splitting distributions (Was: Re: [HACKERS] ECPG / Release)"
}
] |
[
{
"msg_contents": "============================================================================\n POSTGRESQL BUG REPORT TEMPLATE\n============================================================================\n\n\nYour name : Rolf Grossmann\nYour email address : [email protected]\n\n\nSystem Configuration\n---------------------\n Architecture (example: Intel Pentium) : AMD-K6 300\n\n Operating System (example: Linux 2.0.26 ELF) : FreeBSD 3.4-STABLE\n\n PostgreSQL version (example: PostgreSQL-6.5.1): PostgreSQL-7.0beta1\n\n Compiler used (example: gcc 2.8.0) : gcc 2.95\n\n\nPlease enter a FULL description of your problem:\n------------------------------------------------\n\nFirst I'd like to say that I'm really impressed with the quality of this\nfirst beta release. Still, when I was trying to set up my old database,\nI ran into a bit of a problem: I couldn't specify NOT NULL PRIMARY KEY\nanymore. Removing the NOT NULL part solves the problem (and it's implied\nby PRIMARY KEY anyway), however all major databases allow that syntax\n(and upto the last release Postgresql did too), so I'd like to see it\nadded back.\n\nPlease describe a way to repeat the problem. Please try to provide a\nconcise reproducible example, if at all possible: \n----------------------------------------------------------------------\n\nTry to create this table:\n\nCREATE TABLE Notes (\n Id INT NOT NULL PRIMARY KEY,\n Text VARCHAR(1024) NOT NULL\n);\n\nPlease enter a FULL description of your problem:\n------------------------------------------------\n\nThere is another problem with the regression tests: If the user running the\ntests has a .psqlrc file all regression tests fail, because commands from\nthat file are echoed to the result file. Additionally, it a transaction\nis started from that file, regression tests fail, because they include tests\nfor error cases and a transaction needs to be aborted after an error.\n\nA possible solution would probably be to add a flag to psql that inhibits\nreading the .psqlrc file and using that flag with the regression tests.\n\nOn a related note (not a bug of course ;))... would it be possible to add\nsome option to psql (or even libpq?) to always keep a transaction active?\n\nBye, Rolf\n",
"msg_date": "Wed, 23 Feb 2000 15:30:46 +0100 (CET)",
"msg_from": "Rolf Grossmann <[email protected]>",
"msg_from_op": true,
"msg_subject": "First experiences with Postgresql 7.0"
},
{
"msg_contents": "Rolf Grossmann <[email protected]> writes:\n> I ran into a bit of a problem: I couldn't specify NOT NULL PRIMARY KEY\n> anymore.\n\nFor the moment try the other order: PRIMARY KEY NOT NULL. This is a\nknown parser deficiency that we chose to leave unfixed for the start of\nbeta, but it should be fixed for 7.0 ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 23 Feb 2000 11:56:02 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUGS] First experiences with Postgresql 7.0 "
},
{
"msg_contents": "Hi,\n\non Wed, 23 Feb 2000 11:56:02 -0500 Tom Lane wrote \nconcerning \"Re: [BUGS] First experiences with Postgresql 7.0 \" something like this:\n\n> Rolf Grossmann <[email protected]> writes:\n>> I ran into a bit of a problem: I couldn't specify NOT NULL PRIMARY KEY\n>> anymore.\n\n> For the moment try the other order: PRIMARY KEY NOT NULL. \n\nThat doesn't work either.\n\n> This is a\n> known parser deficiency that we chose to leave unfixed for the start of\n> beta, but it should be fixed for 7.0 ...\n\nThat's good to hear.\n\nThanks, Rolf\n",
"msg_date": "Thu, 24 Feb 2000 01:53:12 +0100 (CET)",
"msg_from": "Rolf Grossmann <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [BUGS] First experiences with Postgresql 7.0 "
},
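Per Rolf's original report, the form that does work in the beta simply drops the redundant NOT NULL, since PRIMARY KEY already implies it:

    CREATE TABLE Notes (
        Id INT PRIMARY KEY,
        Text VARCHAR(1024) NOT NULL
    );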
{
"msg_contents": "Well, don't I look stupid here. Once upon a time I recall to have fixed\nexactly this issue, apparently it snuck back in.\n\nIf you run psql in non-interactive mode the psqlrc file shouldn't be read\nat all. Unless people want that flag, but I don't like that better.\n\nPerhaps this is a good time to ask when and how any fix to this should be\napplied.\n\n\nOn Wed, 23 Feb 2000, Rolf Grossmann wrote:\n\n> There is another problem with the regression tests: If the user running the\n> tests has a .psqlrc file all regression tests fail, because commands from\n> that file are echoed to the result file. Additionally, it a transaction\n> is started from that file, regression tests fail, because they include tests\n> for error cases and a transaction needs to be aborted after an error.\n> \n> A possible solution would probably be to add a flag to psql that inhibits\n> reading the .psqlrc file and using that flag with the regression tests.\n> \n> On a related note (not a bug of course ;))... would it be possible to add\n> some option to psql (or even libpq?) to always keep a transaction active?\n> \n> Bye, Rolf\n> \n> ************\n> \n> \n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Thu, 24 Feb 2000 13:47:36 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUGS] First experiences with Postgresql 7.0"
},
{
"msg_contents": "> There is another problem with the regression tests: If the user running the\n> tests has a .psqlrc file all regression tests fail, because commands from\n> that file are echoed to the result file. Additionally, it a transaction\n> is started from that file, regression tests fail, because they include tests\n> for error cases and a transaction needs to be aborted after an error.\n\nIncidentally, this should also be the behaviour of the old psql, so it\nshouldn't be all that surprising. Will be fixed of course, though.\n\n> On a related note (not a bug of course ;))... would it be possible to add\n> some option to psql (or even libpq?) to always keep a transaction active?\n\nThe backend would be the right place for this, and yes, it's possible, but\nthere seems to be some disagreement whether we should do it.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Thu, 24 Feb 2000 14:08:42 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUGS] First experiences with Postgresql 7.0"
},
{
"msg_contents": "Hi,\n\non Thu, 24 Feb 2000 13:47:36 +0100 (MET) Peter Eisentraut wrote \nconcerning \"Re: [BUGS] First experiences with Postgresql 7.0\" something like this:\n\n> Well, don't I look stupid here. Once upon a time I recall to have fixed\n> exactly this issue, apparently it snuck back in.\n\n> If you run psql in non-interactive mode the psqlrc file shouldn't be read\n> at all. Unless people want that flag, but I don't like that better.\n\nAfter doing some more experimenting, I noticed that psql does (indeed)\nnot read the psqlrc file when given the -f option. Alas, the regression\ntests don't use -f but send the file in via stdio. So I think this\nbehaviour is The Right Thing, but the regression tests should be fixed\n(probably to use -f).\n\nBye, Rolf\n",
"msg_date": "Thu, 24 Feb 2000 14:50:43 +0100 (CET)",
"msg_from": "Rolf Grossmann <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [BUGS] First experiences with Postgresql 7.0"
},
{
"msg_contents": "On Thu, 24 Feb 2000, Rolf Grossmann wrote:\n\n> not read the psqlrc file when given the -f option. Alas, the regression\n> tests don't use -f but send the file in via stdio. So I think this\n> behaviour is The Right Thing, but the regression tests should be fixed\n> (probably to use -f).\n\nBut the output of \"-f\" vs \"<\" differs, in particular \"-f\" gives you error\nmessages like\npsql:inputfile:lineno: ERROR: ...\n\nand I believe no one wants to fix up the regression tests in that\ndirection, after we already did it once.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Thu, 24 Feb 2000 14:58:23 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUGS] First experiences with Postgresql 7.0"
},
{
"msg_contents": "> Well, don't I look stupid here. Once upon a time I recall to have fixed\n> exactly this issue, apparently it snuck back in.\n> \n> If you run psql in non-interactive mode the psqlrc file shouldn't be read\n> at all. Unless people want that flag, but I don't like that better.\n> \n> Perhaps this is a good time to ask when and how any fix to this should be\n> applied.\n> \n> \n\nI see the same problem here. Also, the regression tests required me to\ndefine PGLIB.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 24 Feb 2000 09:22:42 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [BUGS] First experiences with Postgresql 7.0"
},
{
"msg_contents": "> Hi,\n> \n> on Thu, 24 Feb 2000 13:47:36 +0100 (MET) Peter Eisentraut wrote \n> concerning \"Re: [BUGS] First experiences with Postgresql 7.0\" something like this:\n> \n> > Well, don't I look stupid here. Once upon a time I recall to have fixed\n> > exactly this issue, apparently it snuck back in.\n> \n> > If you run psql in non-interactive mode the psqlrc file shouldn't be read\n> > at all. Unless people want that flag, but I don't like that better.\n> \n> After doing some more experimenting, I noticed that psql does (indeed)\n> not read the psqlrc file when given the -f option. Alas, the regression\n> tests don't use -f but send the file in via stdio. So I think this\n> behaviour is The Right Thing, but the regression tests should be fixed\n> (probably to use -f).\n\nBut is it right to not read the psqlrc file with -f? Can psqlrc be\nread but not displayed with -q. regress.sh uses -a and -q, which seem\nto conflict with each other.\n\n -a Echo all input from script\n -q Run quietly (no messages, only query output)\n \nI will admit regress.sh may be using the wrong flags now. Also, PGLIB\nis used by createlang. Not sure how it used to work.\n\nCREATE DATABASE\n=============== installing PL/pgSQL... =================\ncreatelang: missing required argument PGLIB directory\n(This is the directory where the interpreter for the procedural\nlanguage is stored. Traditionally, these are installed in whatever\n'lib' directory was specified at configure time.)\ncreatelang failed\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 24 Feb 2000 09:47:49 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUGS] First experiences with Postgresql 7.0"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> If you run psql in non-interactive mode the psqlrc file shouldn't be read\n> at all. Unless people want that flag, but I don't like that better.\n\n> Perhaps this is a good time to ask when and how any fix to this should be\n> applied.\n\nThis is arguably a bug fix, so you needn't worry about it being beta\nphase. However, there seems to be some doubt about exactly how it\n*should* work, so you should hold off until there is consensus.\n\nI take it you are considering \"only read psqlrc if stdin is a tty\",\nrather than providing a switch-selectable choice. I think that might\nbe too inflexible. The regression tests clearly need to be able to\ndisregard psqlrc, but ordinary users will very likely want to write\nscripts that depend on their psqlrc. (For sure, we will get bug reports\n\"this works by hand but not in a script\" that trace back to psqlrc\nsettings or lack of 'em.)\n\nUsing -f would work if you hadn't already overloaded it with another\nmeaning; but as you say I don't much want to add line numbers to all\nthe regress test expected outputs. (That would mean that\nadding/deleting lines in a test would create many bogus differences\nfurther down in its output, which would be a pain in the neck for the\ninitial hand-validation of the changed output.)\n\nSo I vote for a switch that suppresses reading psqlrc ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 24 Feb 2000 10:28:47 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [BUGS] First experiences with Postgresql 7.0 "
},
{
"msg_contents": "> Using -f would work if you hadn't already overloaded it with another\n> meaning; but as you say I don't much want to add line numbers to all\n> the regress test expected outputs. (That would mean that\n> adding/deleting lines in a test would create many bogus differences\n> further down in its output, which would be a pain in the neck for the\n> initial hand-validation of the changed output.)\n> \n> So I vote for a switch that suppresses reading psqlrc ...\n> \n\nYes, but are there cases where we would want psqlrc values set? Should\nwe specifically set all the variables ourselves on startup, just\nover-riding what is in psqlrc?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 24 Feb 2000 10:44:57 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [BUGS] First experiences with Postgresql 7.0"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Yes, but are there cases where we would want psqlrc values set? Should\n> we specifically set all the variables ourselves on startup, just\n> over-riding what is in psqlrc?\n\nNo. In the first place, we've already got a dozen or two SET variables\n(and the list keeps changing); do you really want to reset all of those\nin each regress test? In the second place, a psqlrc script could screw\nthings up in more creative ways than just issuing SET commands. IIRC,\nRolf's original example was a psqlrc that issued a BEGIN to leave the\nsystem in an open-transaction state. In the third place, the psql echo\noutput from any commands issued by psqlrc would itself be enough to\ncause bogus \"failures\" of all the tests.\n\nOne advantage of using a switch is that if someone *did* want to\nexperiment with regress test behavior with non-default settings,\nhe could set up a psqlrc file and then remove that switch from\nthe regression driver script. Of course he'd have to ignore a\nlot of bogus differences...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 24 Feb 2000 10:56:12 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [BUGS] First experiences with Postgresql 7.0 "
},
{
"msg_contents": "On Thu, 24 Feb 2000, Tom Lane wrote:\n\n> > Perhaps this is a good time to ask when and how any fix to this should be\n> > applied.\n> \n> This is arguably a bug fix, so you needn't worry about it being beta\n> phase.\n\nI'm not sure how this works now: Do I just commit it to the tree, so it\nwill be in when, say, beta2 gets generated?\n\n> I take it you are considering \"only read psqlrc if stdin is a tty\",\n\nThis is how shells work, that's always my default decision for unchartered\nterritory. (Of course psql is not a shell, but that's why we're discussing\n...)\n\n> Using -f would work if you hadn't already overloaded it with another\n> meaning;\n\nHuh, \"-f\" is not a new option. \"-f\" is different from \"<\" because of two\nreasons: 1) if they were the same, we wouldn't need one of them, and 2) a\nprogram should behave the same independent of what kind of device its\nstandard input comes from. (That's why \"<\" doesn't print out error\nmessages with line numbers.) This is an ideal state of course.\n\n[5 min later ...]\n\nAh, a tcsh user! ;) I could go for an -X option to suppress reading the\nstartup file, with default being that it is read in any mode. A pretty\ndump option letter, but not all that far-fetched.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Thu, 24 Feb 2000 16:59:48 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [BUGS] First experiences with Postgresql 7.0 "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Yes, but are there cases where we would want psqlrc values set? Should\n> > we specifically set all the variables ourselves on startup, just\n> > over-riding what is in psqlrc?\n> \n> No. In the first place, we've already got a dozen or two SET variables\n> (and the list keeps changing); do you really want to reset all of those\n> in each regress test? In the second place, a psqlrc script could screw\n> things up in more creative ways than just issuing SET commands. IIRC,\n> Rolf's original example was a psqlrc that issued a BEGIN to leave the\n> system in an open-transaction state. In the third place, the psql echo\n> output from any commands issued by psqlrc would itself be enough to\n> cause bogus \"failures\" of all the tests.\n> \n\nYes, I see. Just asking.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 24 Feb 2000 11:00:49 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [BUGS] First experiences with Postgresql 7.0"
},
{
"msg_contents": "On Thu, 24 Feb 2000, Bruce Momjian wrote:\n\n> I see the same problem here. Also, the regression tests required me to\n> define PGLIB.\n\nIs that because of createlang or initdb or both? Which regression driver?\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Thu, 24 Feb 2000 17:03:41 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [BUGS] First experiences with Postgresql 7.0"
},
{
"msg_contents": "> On Thu, 24 Feb 2000, Bruce Momjian wrote:\n> \n> > I see the same problem here. Also, the regression tests required me to\n> > define PGLIB.\n> \n> Is that because of createlang or initdb or both? Which regression driver?\n\nSee later message. createlang is causing it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 24 Feb 2000 11:08:01 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [BUGS] First experiences with Postgresql 7.0"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n>> This is arguably a bug fix, so you needn't worry about it being beta\n>> phase.\n\n> I'm not sure how this works now: Do I just commit it to the tree, so it\n> will be in when, say, beta2 gets generated?\n\nRight. No real difference in commit procedures at this point.\n\nAt some point after the release, we will set up a branch for REL_7.0,\nand after that, ordinary commits will only apply to new development\nfor 7.1, not to the stable release branch. But no need to worry about\nthat for now.\n\n> Ah, a tcsh user! ;) I could go for an -X option to suppress reading the\n> startup file, with default being that it is read in any mode. A pretty\n> dump option letter, but not all that far-fetched.\n\nWorks for me...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 24 Feb 2000 11:11:37 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [BUGS] First experiences with Postgresql 7.0 "
},
{
"msg_contents": "Peter Eisentraut wrote:\n> \n> On Thu, 24 Feb 2000, Bruce Momjian wrote:\n> \n> > I see the same problem here. Also, the regression tests required me to\n> > define PGLIB.\n> \n> Is that because of createlang or initdb or both? Which regression driver?\n\nCreatelang has done this for some time -- at least since I've been\npackaging the RPM's with the regression tests. I have had to define\nPGLIB in regress.sh -- otherwise, createlang doesn't know where to find\nthe pl .so. As my 7.0 installation is at home, I can't check the 7.0\nregress.sh from here -- however, the 6.5.x regress.sh did it's own PGLIB\ndefinition.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Thu, 24 Feb 2000 11:13:51 -0500",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [BUGS] First experiences with Postgresql 7.0"
},
{
"msg_contents": "> On Thu, 24 Feb 2000, Tom Lane wrote:\n> \n> > > Perhaps this is a good time to ask when and how any fix to this should be\n> > > applied.\n> > \n> > This is arguably a bug fix, so you needn't worry about it being beta\n> > phase.\n> \n> I'm not sure how this works now: Do I just commit it to the tree, so it\n> will be in when, say, beta2 gets generated?\n\nBetas are not static releases. We live in beta for over a month, with\npeople making changes to fix user problems.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 24 Feb 2000 11:25:42 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [BUGS] First experiences with Postgresql 7.0"
},
{
"msg_contents": "> Peter Eisentraut wrote:\n> > \n> > On Thu, 24 Feb 2000, Bruce Momjian wrote:\n> > \n> > > I see the same problem here. Also, the regression tests required me to\n> > > define PGLIB.\n> > \n> > Is that because of createlang or initdb or both? Which regression driver?\n> \n> Createlang has done this for some time -- at least since I've been\n> packaging the RPM's with the regression tests. I have had to define\n> PGLIB in regress.sh -- otherwise, createlang doesn't know where to find\n> the pl .so. As my 7.0 installation is at home, I can't check the 7.0\n> regress.sh from here -- however, the 6.5.x regress.sh did it's own PGLIB\n> definition.\n\nFor some reason, I didn't need it until recently.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 24 Feb 2000 11:42:06 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [BUGS] First experiences with Postgresql 7.0"
},
{
"msg_contents": "Hi,\n\non Thu, 24 Feb 2000 16:59:48 +0100 (MET) Peter Eisentraut wrote \nconcerning \"Re: [HACKERS] Re: [BUGS] First experiences with Postgresql 7.0 \" something like this:\n\n> (Of course psql is not a shell, but that's why we're discussing ...)\n\nNow, be careful with this statement. Personally, I have already tried to\nuse psql as a shell and I think it would be really cool if you could just\nwrite #!/path/to/psql -f to write sql scripts.\nHowever, that would require psql to treat # as a comment starter and we're\nmoving away from SQL standards with that. So I'm a bit weary of suggesting\nsuch a thing.\n\n>> Using -f would work if you hadn't already overloaded it with another\n>> meaning;\n\n> [5 min later ...]\n\n> Ah, a tcsh user! ;) I could go for an -X option to suppress reading the\n> startup file, with default being that it is read in any mode. A pretty\n> dump option letter, but not all that far-fetched.\n\nUhm ... my tcsh manual describes those options differently:\n\n -f The shell ignores ~/.tcshrc, and thus starts faster.\n -X Is to -x as -V is to -v.\n\nOf course, as we have noted above, psql is not a shell, so I wonder if\nthat's the way to go. Personally, I'd say just pick a letter.\n\nBye, Rolf\n",
"msg_date": "Thu, 24 Feb 2000 17:44:15 +0100 (CET)",
"msg_from": "Rolf Grossmann <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [BUGS] First experiences with Postgresql 7.0 "
},
{
"msg_contents": "Rolf Grossmann <[email protected]> writes:\n>> (Of course psql is not a shell, but that's why we're discussing ...)\n\n> Now, be careful with this statement. Personally, I have already tried to\n> use psql as a shell and I think it would be really cool if you could just\n> write #!/path/to/psql -f to write sql scripts.\n\n[ straying off-topic ]\n\nHave you tried pgbash? I haven't, but it sounds pretty cool if you\nthink psql and your shell should be the same thing...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 24 Feb 2000 11:48:06 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [BUGS] First experiences with Postgresql 7.0 "
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> PGLIB in regress.sh -- otherwise, createlang doesn't know where to find\n>> the pl .so. As my 7.0 installation is at home, I can't check the 7.0\n>> regress.sh from here -- however, the 6.5.x regress.sh did it's own PGLIB\n>> definition.\n\n> For some reason, I didn't need it until recently.\n\nI have PGDATA and PGLIB defined in .profile for my postgres account,\nso I wouldn't have noticed whether the regress tests need it or not :-(\nPossibly the same is true for most of the other developers.\n\nIIRC, \"make all\" to set up the regress tests also needs PGLIB to be set.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 24 Feb 2000 12:03:00 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [BUGS] First experiences with Postgresql 7.0 "
},
{
"msg_contents": "On Thu, 24 Feb 2000, Bruce Momjian wrote:\n\n> > Using -f would work if you hadn't already overloaded it with another\n> > meaning; but as you say I don't much want to add line numbers to all\n> > the regress test expected outputs. (That would mean that\n> > adding/deleting lines in a test would create many bogus differences\n> > further down in its output, which would be a pain in the neck for the\n> > initial hand-validation of the changed output.)\n> > \n> > So I vote for a switch that suppresses reading psqlrc ...\n> > \n> \n> Yes, but are there cases where we would want psqlrc values set? Should\n> we specifically set all the variables ourselves on startup, just\n> over-riding what is in psqlrc?\n\nIMHO, the regression tests are based on a snapshot where psql is in a\n'default state' ... why would we want psqlrc values set?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 24 Feb 2000 19:04:18 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [BUGS] First experiences with Postgresql 7.0"
},
{
"msg_contents": "On Thu, 24 Feb 2000, Peter Eisentraut wrote:\n\n> On Thu, 24 Feb 2000, Tom Lane wrote:\n> \n> > > Perhaps this is a good time to ask when and how any fix to this should be\n> > > applied.\n> > \n> > This is arguably a bug fix, so you needn't worry about it being beta\n> > phase.\n> \n> I'm not sure how this works now: Do I just commit it to the tree, so it\n> will be in when, say, beta2 gets generated?\n\nbug fixes, yes ... but posting a patch against the current beta1 as a sort\nof \"here's the fix\" would be very appreciated. \n\n\n",
"msg_date": "Thu, 24 Feb 2000 19:05:49 -0400 (AST)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [BUGS] First experiences with Postgresql 7.0 "
},
{
"msg_contents": "On 2000-02-24, Lamar Owen mentioned:\n\n> Peter Eisentraut wrote:\n> > \n> > On Thu, 24 Feb 2000, Bruce Momjian wrote:\n> > \n> > > I see the same problem here. Also, the regression tests required me to\n> > > define PGLIB.\n> > \n> > Is that because of createlang or initdb or both? Which regression driver?\n> \n> Createlang has done this for some time\n\nYou must provide an -L (--pglib) option to createlang, just as for\ninitdb. I'm committing a fix for this.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Fri, 25 Feb 2000 00:39:03 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [BUGS] First experiences with Postgresql 7.0"
},
{
"msg_contents": "On 2000-02-24, Rolf Grossmann mentioned:\n\n> use psql as a shell and I think it would be really cool if you could just\n> write #!/path/to/psql -f to write sql scripts.\n\nI considered that briefly, but dismissed it equally fast. psql is a shell\nto the PostgreSQL backend, if you will, not to the system. It's optimized\nas a batch processor and for being called from shell scripts, not for\nbeing a programming language of it's own. (In the future it would be nice\nto have a PL/Pgsql based front-end available for that sort of stuff.)\n\n> Uhm ... my tcsh manual describes those options differently:\n> \n> -f The shell ignores ~/.tcshrc, and thus starts faster.\n> -X Is to -x as -V is to -v.\n\nI wasn't actually implying to have picked -X in accordance with tcsh, I\nwas just confused about how Tom talked about -f.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Fri, 25 Feb 2000 00:39:15 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [BUGS] First experiences with Postgresql 7.0 "
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n>> Uhm ... my tcsh manual describes those options differently:\n>> \n>> -f The shell ignores ~/.tcshrc, and thus starts faster.\n>> -X Is to -x as -V is to -v.\n\n> I wasn't actually implying to have picked -X in accordance with tcsh, I\n> was just confused about how Tom talked about -f.\n\nOh, sorry, I just meant that -f already has one special behavior in\naddition to just physically selecting the input source, namely\ncausing line numbers to get attached to error messages. That's fine,\nbut adding two special behaviors that aren't really closely related\nto the same switch is not so great.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 24 Feb 2000 19:35:00 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [BUGS] First experiences with Postgresql 7.0 "
},
{
"msg_contents": "Hi,\n\non Fri, 25 Feb 2000 00:39:15 +0100 (CET) Peter Eisentraut wrote \nconcerning \"Re: [HACKERS] Re: [BUGS] First experiences with Postgresql 7.0 \"\nsomething like this:\n\n>> use psql as a shell and I think it would be really cool if you could just\n>> write #!/path/to/psql -f to write sql scripts.\n\n> I considered that briefly, but dismissed it equally fast. psql is a shell\n> to the PostgreSQL backend, if you will, not to the system. It's optimized\n> as a batch processor and for being called from shell scripts, not for\n> being a programming language of it's own. (In the future it would be nice\n> to have a PL/Pgsql based front-end available for that sort of stuff.)\n\nWell, if you're saying psql is a shell, then maybe we should consider moving\nin that direction. Not everything that's called with #! is a shell to the\nsystem. The most notable example is probably perl, but there are other\nprograms like sed or awk that are being used with #! but certainly nobody\never considered using awk as a system shell ;)\n\nAs for the programming language: You're already going in that direction\nby implementing something like pl/sql. Now if that was available from\npsql you're already way down the programming language road.\n\nJust some thoughts ...\nRolf\n",
"msg_date": "Fri, 25 Feb 2000 02:07:13 +0100 (CET)",
"msg_from": "Rolf Grossmann <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: [BUGS] First experiences with Postgresql 7.0 "
}
] |
[
{
"msg_contents": "At 10:06 AM 2/23/00 +0100, Zeugswetter Andreas SB wrote:\n\n>In this sense a commit is not partial. The commit should commit\n>all statements that were not in error. \n>All other DB's behave in this way.\n\nIn other words, then, Postgres transactions are 100% non-standard.\n\nInteresting.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Wed, 23 Feb 2000 07:30:28 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: AW: [HACKERS] TRANSACTIONS "
}
] |
[
{
"msg_contents": "I am writing a book and don't have time for additional writing at this\ntime. I am CC'ing the hackers list to see if anyone can do it.\n\n> Bruce,\n> \n> My name is James Chalex -- I'm an acquisitions editor with\n> informit.com. I work on our Linux subsite, and am interested in\n> having you write an article on using PostgreSQL with Linux. This\n> could be as simple as an introduction and a installation guide,\n> or something more focused, like administrative tips, or maybe\n> a scripting article for using PHP, Python, etc.\n> \n> I'm very open to your ideas as well -- ideally I'd like something\n> that is new and/or has caused problems for people in the past.\n> Our preferred audience member would already have a good deal of\n> database experience, but is still looking to master subtleties.\n> \n> We pay anywhere from $250 to $500 per article, depending on\n> length, scope, etc.\n> \n> If for whatever reason you're not interested, I would greatly\n> appreciate it if you could pass this along to someone who would\n> be interested in this kind of work.\n> \n> I look forward to hearing from you,\n> \n> James\n> \n> \n> James Chalex - Acquisitions Editor, InformIT [email protected]\n> www.informit.com phone: 317.817.7489 free : 800.545.5914 fax\n> : 317.817.7232\n> \n> InformIT 201 West 103rd Street Indianapolis, IN 46290\n> \n> \n\n\n--\n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 23 Feb 2000 10:58:20 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Interested in writing a PostgreSQL article?"
},
{
"msg_contents": "Hi Bruce, James,\n\nI would be interested in writing a PHP / PostgreSQL basic\ntutorial. Please contact me at +902 542 0713.\n\nJeff\n\nOn Wed, 23 Feb 2000, Bruce Momjian wrote:\n\n> I am writing a book and don't have time for additional writing at this\n> time. I am CC'ing the hackers list to see if anyone can do it.\n> \n> > Bruce,\n> > \n> > My name is James Chalex -- I'm an acquisitions editor with\n> > informit.com. I work on our Linux subsite, and am interested in\n> > having you write an article on using PostgreSQL with Linux. This\n> > could be as simple as an introduction and a installation guide,\n> > or something more focused, like administrative tips, or maybe\n> > a scripting article for using PHP, Python, etc.\n> > \n> > I'm very open to your ideas as well -- ideally I'd like something\n> > that is new and/or has caused problems for people in the past.\n> > Our preferred audience member would already have a good deal of\n> > database experience, but is still looking to master subtleties.\n> > \n> > We pay anywhere from $250 to $500 per article, depending on\n> > length, scope, etc.\n> > \n> > If for whatever reason you're not interested, I would greatly\n> > appreciate it if you could pass this along to someone who would\n> > be interested in this kind of work.\n> > \n> > I look forward to hearing from you,\n> > \n> > James\n> > \n> > \n> > James Chalex - Acquisitions Editor, InformIT [email protected]\n> > www.informit.com phone: 317.817.7489 free : 800.545.5914 fax\n> > : 317.817.7232\n> > \n> > InformIT 201 West 103rd Street Indianapolis, IN 46290\n> > \n> > \n> \n> \n> --\n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ************\n> \n\nJeff MacDonald\[email protected]\n\n",
"msg_date": "Wed, 23 Feb 2000 22:21:27 -0400 (AST)",
"msg_from": "Jeff MacDonald <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Interested in writing a PostgreSQL article?"
}
] |
[
{
"msg_contents": "We've already seen how column alias were breaking pg_dump's ability to\nrestore views unless a table alias were created, fixed now thanks to\nTom's hack.\n\nHere's an observation that's not really a bug report but which is\nproving to be an annoyance.\n\nThe creation of column aliases for tables referenced by views causes\nthe rule created on the underlying virtual table to be in some cases\nconsiderably longer than the corresponding rule in V6.5.\n\nIn practice, this means that several of the views used in the web\ntoolkit I'm porting no longer can be created. In some cases, the\nviews had changed and I'd assumed that this was the cause, but now\nI'm seeing it in a module (ecommerce) that as yet has not been\nported. I'd ported the data model to 6.5 with no problem, but the\nviews can't be created in 7.0. I just tried this yesterday, when\nI decided to put some effort into porting the module (it contains\nabout 2000 lines of PL/SQL which need to be re-written in PL/pgSQL\nso it's not entirely a trivial task to move it over).\n\nSeeing that these views - which hadn't changed - and in light of\nthe column alias vs. pg_dump issue, I realized that the rule\nstrings are just getting much longer. \n\n(The error I'm getting is that the tuple size is too long)\n\nOf course, TOAST will solve the problem, but we don't have TOAST\nyet. \n\nI'm assuming Thomas put this in as part of the 'outer join' work.\n\nIn my case, I recompiled PG with a blocksize of 16KB rather\nthan 8KB, which I've been intending to do anyway for the time \nbeing since the 8KB blocksize causes other limitations on the size\nof text vars, i.e. the discussion forum table is limited to about\n6KB chars for the message text when the blocksize is 8KB, really\ntoo small. With TOAST coming in 7.1, I'm sticking with \"text\"\nrather than segmenting messages into a series of rows and kludging\na \"solution\" by compiling with a 16KB blocksize.\n\nThis \"fixed\" my problem with views, too.\n\nBut I thought I'd share my experience with the group. I don't\nknow how many folks use views in complex ways, but if many do\nquite a few of them will run into the same problem and we'll\nprobably hear about it.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Wed, 23 Feb 2000 08:17:25 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": true,
"msg_subject": "interesting observatation regarding views and V7.0"
},
{
"msg_contents": "At 11:04 AM 2/23/00 -0600, Ed Loehr wrote:\n>Don Baccus wrote:\n>> \n>> The creation of column aliases for tables referenced by views causes\n>> the rule created on the underlying virtual table to be in some cases\n>> considerably longer than the corresponding rule in V6.5.\n>> \n>> ...In my case, I recompiled PG with a blocksize of 16KB...\n>> \n>> ...This \"fixed\" my problem with views, too.\n>\n>Thanks for this info, Don. Would you mind posting your patch, simple\n>as it may be?\n\nThat was it, I just recompiled PG with a blocksize of 16KB, i.e.\nedited src/include/config.h.in's BLCKSZ definition, ran configure,\nand did a gmake all/gmake install.\n\nAs I mentioned, I had other reasons for wanting to run with a 16KB\nblocksize while waiting for TOASTed large text (and other) types,\nso it's no biggie for me.\n\nOthers might find this change a lot more annoying.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Wed, 23 Feb 2000 09:04:31 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] interesting observatation regarding views and\n V7.0"
},
{
"msg_contents": "Don Baccus wrote:\n> \n> The creation of column aliases for tables referenced by views causes\n> the rule created on the underlying virtual table to be in some cases\n> considerably longer than the corresponding rule in V6.5.\n> \n> ...In my case, I recompiled PG with a blocksize of 16KB...\n> \n> ...This \"fixed\" my problem with views, too.\n\nThanks for this info, Don. Would you mind posting your patch, simple\nas it may be?\n\nCheers,\nEd Loehr\n",
"msg_date": "Wed, 23 Feb 2000 11:04:57 -0600",
"msg_from": "Ed Loehr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] interesting observatation regarding views and V7.0"
},
{
"msg_contents": "Don Baccus <[email protected]> writes:\n> The creation of column aliases for tables referenced by views causes\n> the rule created on the underlying virtual table to be in some cases\n> considerably longer than the corresponding rule in V6.5.\n> In practice, this means that several of the views used in the web\n> toolkit I'm porting no longer can be created.\n\nYes, this is exactly the concern I raised last week. Thomas didn't\nseem to be very worried about the issue, but when he gets back from\nhis vacation we can lean on him to fix it.\n\nSomething else we might consider as a stopgap is to resurrect the\n\"compressed text\" datatype that Jan wrote, and then removed in\nanticipation of having TOAST. Jan was concerned about creating\nfuture compatibility problems by having a datatype with only a\none-release-cycle expected lifetime ... but I think it might be\nOK to use it just internally for rules.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 23 Feb 2000 17:54:36 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] interesting observatation regarding views and V7.0 "
},
{
"msg_contents": "At 05:54 PM 2/23/00 -0500, Tom Lane wrote:\n>Don Baccus <[email protected]> writes:\n>> The creation of column aliases for tables referenced by views causes\n>> the rule created on the underlying virtual table to be in some cases\n>> considerably longer than the corresponding rule in V6.5.\n>> In practice, this means that several of the views used in the web\n>> toolkit I'm porting no longer can be created.\n>\n>Yes, this is exactly the concern I raised last week. Thomas didn't\n>seem to be very worried about the issue, but when he gets back from\n>his vacation we can lean on him to fix it.\n\nOK, I saw some of the exchange last week but was so busy I\ndidn't really read it, other than to note when he'd committed changes\nso I could update and throw the web toolkit at them. The ecommerce\nmodule wasn't part of what I was throwing at it last week since\nI knew it wasn't going to get ported from Oracle in time for our\nvery preliminary first cut at a port. This week, though, hasn't\nbeen as crazy. Otherwise I would've yelped at Thomas a week ago.\n\n\"Here, YOU rewrite all these queries that use these views!\" :)\n\n>Something else we might consider as a stopgap is to resurrect the\n>\"compressed text\" datatype that Jan wrote, and then removed in\n>anticipation of having TOAST. Jan was concerned about creating\n>future compatibility problems by having a datatype with only a\n>one-release-cycle expected lifetime ... but I think it might be\n>OK to use it just internally for rules.\n\nYeah, that's not a bad idea at all. \n\nAlso...interbase's \"text\" type is apparently compressed, and that's\nan interesting idea for \"text\" itself (as opposed to \"varchar()\" of\na given size). Someone who just says \"text\" probably wants to be\nable to stuff as much text into the column as possible, I know\nI do! The price of compression/decompression is to some extent\nbalanced by not having to drag as many bytes around during joins\nand sorts and the like. Decompression in particular should be\nvery cheap and in the kind of systems I'm working on one hopes\none's ad, product description, Q&A post etc is selected (read)\nmany more times than inserted (written). One hopes! \n\nJust an interesting notion...I was kinda excited about lzText when\nJan implemented it, though a smart TOASTer is even more exciting so\nI won't whine about the delay.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Wed, 23 Feb 2000 15:17:18 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] interesting observatation regarding views and\n V7.0"
},
{
"msg_contents": "Don Baccus <[email protected]> writes:\n>> Something else we might consider as a stopgap is to resurrect the\n>> \"compressed text\" datatype that Jan wrote, and then removed in\n>> anticipation of having TOAST.\n\n> Also...interbase's \"text\" type is apparently compressed, and that's\n> an interesting idea for \"text\" itself (as opposed to \"varchar()\" of\n> a given size). Someone who just says \"text\" probably wants to be\n> able to stuff as much text into the column as possible, I know\n> I do!\n\nJust quietly make text compressed-under-the-hood, you mean? Hmm.\nInteresting idea, all right, and it wouldn't create any long-term\ncompatibility problem since users couldn't see it directly. I think\nwe might have some places in the system that assume char/varchar/text\nall have the same internal representation, but that could probably\nbe fixed without too much grief.\n\n> The price of compression/decompression is to some extent\n> balanced by not having to drag as many bytes around during joins\n> and sorts and the like.\n\nAlso, there could be a threshold: don't bother trying to compress\nfields that are less than, say, 1K bytes.\n\nJan, what do you think? I might be able to find some time to try this,\nif you approve of the idea but just don't have cycles to spare.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 23 Feb 2000 18:46:44 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] interesting observatation regarding views and V7.0 "
},
{
"msg_contents": "Tom Lane wrote:\n\n> Something else we might consider as a stopgap is to resurrect the\n> \"compressed text\" datatype that Jan wrote, and then removed in\n> anticipation of having TOAST. Jan was concerned about creating\n> future compatibility problems by having a datatype with only a\n> one-release-cycle expected lifetime ... but I think it might be\n> OK to use it just internally for rules.\n\n Ech - must be YOU!\n\n If I hadn't deleted the entire (including catalog changes for\n pg_type ... pg_rewrite) patch, I'd be the one to suggest. We\n could easily add some warning, like \"LZTEXT will disappear in\n a subsequent release again - be warned\", spit out during\n parse, if someone explicitly uses the lztext type.\n\n I'll spend some time with CVS to see if I can regenerate the\n patch from there.\n\n But I can feel the punches of Marc already - this patch will\n cause catalog changes after official BETA start - Uh - Oh.\n\n\nJan\n\nBTW: Good chance for Vince to LOL if I fail on that one, since I\n got very impatiant once about \"correct usage of CVS\". Was a\n little off-list flamewar that turned out to be mostly \"you\n got me wrong\" during a phone call. But things like that\n linger in background until prooven :-). Showtime!\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 24 Feb 2000 01:04:54 +0100 (CET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] interesting observatation regarding views and V7.0"
},
{
"msg_contents": "Tom Lane wrote:\n\n> Don Baccus <[email protected]> writes:\n>\n> > Also...interbase's \"text\" type is apparently compressed, and that's\n> > an interesting idea for \"text\" itself (as opposed to \"varchar()\" of\n> > a given size). Someone who just says \"text\" probably wants to be\n> > able to stuff as much text into the column as possible, I know\n> > I do!\n>\n> Just quietly make text compressed-under-the-hood, you mean? Hmm.\n> Interesting idea, all right, and it wouldn't create any long-term\n> compatibility problem since users couldn't see it directly. ...\n\n If we wheren't in BETA code freeze right now, I'd call for\n another month delay - surely.\n\n> > The price of compression/decompression is to some extent\n> > balanced by not having to drag as many bytes around during joins\n> > and sorts and the like.\n>\n> Also, there could be a threshold: don't bother trying to compress\n> fields that are less than, say, 1K bytes.\n>\n> Jan, what do you think? I might be able to find some time to try this,\n> if you approve of the idea but just don't have cycles to spare.\n\n It's a very temping solution, turn \"text\" into \"lztext\"\n silently, and revert that internal changes in the next\n release again while implementing TOAST. Remember that the\n lztext I implemented had the mentioned threshold paramenter -\n say 256 - from the very beginning. And you know 256->1K is a\n one-liner in my coding style. Moreover, it was a global\n parameter set driven value, and thus potentially prepared to\n be a runtime configurable one (the other values of the\n parameter set where minimum compression ratio to gain,\n maximum result size to force compression even if ratio below,\n GOOD size to stop history lookup and finally history lookup\n GOOD lowering factor during lookups).\n\n The algorithm I used for compression is one, loosing possible\n compression ratio to gain speed. It uses a poor XOR\n combination of the next 4 input-bytes, to lookup a history\n table - and that's anything but perfect from a hashing\n algorithms point of view. But it was enough to make a 50+\n column view fit easily into pg_rewrite. And that's what it\n was made for.\n\n Anyway, there are far too many direct references to VARDATA\n on \"text\" plus all the assumptions on binary compatibility\n between text, varchar etc. in the code, to start on it during\n BETA.\n\n Thus, I see a good chance for a 7.1 release, really soon\n after 7.0. Then have a longer delay for the next one,\n featuring TOAST.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 24 Feb 2000 02:06:39 +0100 (CET)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] interesting observatation regarding views and V7.0"
},
{
"msg_contents": "At 06:46 PM 2/23/00 -0500, Tom Lane wrote:\n\n>Just quietly make text compressed-under-the-hood, you mean? Hmm.\n\nYep...\n\n>Interesting idea, all right, and it wouldn't create any long-term\n>compatibility problem since users couldn't see it directly. I think\n>we might have some places in the system that assume char/varchar/text\n>all have the same internal representation, but that could probably\n>be fixed without too much grief.\n\nI've kind of assumed this might be the case, but have truly been\ntoo busy to dig around looking (which in my case takes a fairly\nlong time because I'm really only barely familiar with the code)\n\n>> The price of compression/decompression is to some extent\n>> balanced by not having to drag as many bytes around during joins\n>> and sorts and the like.\n>\n>Also, there could be a threshold: don't bother trying to compress\n>fields that are less than, say, 1K bytes.\n\nRight, I thought about that possibility, too, but it seems a bit\nmore complicated so I thought I'd raise the simpler-sounding idea\nfirst :)\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Wed, 23 Feb 2000 17:15:20 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] interesting observatation regarding views and\n V7.0"
},
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n> But I can feel the punches of Marc already - this patch will\n> cause catalog changes after official BETA start - Uh - Oh.\n\nYou can hide behind me ;-) ... I just did commit some catalog changes\n(but didn't need to force initdb, since they were only additions).\n\nAlso, I am more than half expecting that I will have to force an initdb\nto clean up the INET/CIDR comparison business; very likely we are\ngoing to end up needing to have separate comparison operators for\nINET and CIDR.\n\nStill waiting for input on that from the folks who use the datatypes,\nthough. (D'Arcy, are you still out there?)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 23 Feb 2000 21:14:53 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] interesting observatation regarding views and V7.0 "
},
{
"msg_contents": "> Yes, this is exactly the concern I raised last week. Thomas didn't\n> seem to be very worried about the issue, but when he gets back from\n> his vacation we can lean on him to fix it.\n\nOK Tom I'll try to sound more concerned next time :))\n\nI'm using the rte->ref Attr structure to carry internal info on table\nnames and column names. What I should be able to do is decouple the\ninternal ref structure from the table name/column list specified by a\nuser, so the \"query recreation\" code can ignore the internal structure\nand just use the original list from the user.\n\nShould be able to go into v7.0 with no problem (other than initdb, but\nit *is* a beta!!).\n\n> Something else we might consider as a stopgap is to resurrect the\n> \"compressed text\" datatype that Jan wrote, and then removed in\n> anticipation of having TOAST. Jan was concerned about creating\n> future compatibility problems by having a datatype with only a\n> one-release-cycle expected lifetime ... but I think it might be\n> OK to use it just internally for rules.\n\nNaw, the above should be easier all around.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Mon, 28 Feb 2000 15:52:03 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] interesting observatation regarding views and V7.0"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> Something else we might consider as a stopgap is to resurrect the\n>> \"compressed text\" datatype that Jan wrote, and then removed in\n>> anticipation of having TOAST.\n\n> Naw, the above should be easier all around.\n\nWhen you finish catching up on your mail, you'll find lztext is already\nback in ;-). At this point, whether you change the representation is\npretty much irrelevant for rule size, I think. However, I am still\nconcerned by the hack I had to put into ruleutils.c to get pg_dump\nto produce valid output for cases like\n\tcreate view foo as select * from int8_tbl;\nSee the note and code at about line 1000 of utils/adt/ruleutils.c.\nIdeally we want to be able to tell from the parsetree whether the user\nwrote any column aliases or not (and if possible, distinguish the ones\nhe wrote from any that got added by the system). So that may force\na representation change anyway.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 28 Feb 2000 11:36:26 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] interesting observatation regarding views and V7.0 "
},
{
"msg_contents": "> When you finish catching up on your mail, you'll find lztext is already\n> back in ;-). At this point, whether you change the representation is\n> pretty much irrelevant for rule size, I think. However, I am still\n> concerned by the hack I had to put into ruleutils.c to get pg_dump\n> to produce valid output for cases like\n> create view foo as select * from int8_tbl;\n> See the note and code at about line 1000 of utils/adt/ruleutils.c.\n> Ideally we want to be able to tell from the parsetree whether the user\n> wrote any column aliases or not (and if possible, distinguish the ones\n> he wrote from any that got added by the system). So that may force\n> a representation change anyway.\n\nWell, if I add another field/list to the RangeTblEntry structure to\nhold my working aliases, and if I keep the ref structure as a pristine\ncopy of the parameters specified by the user, then everything will go\nback to working as expected. There may be other places in the code\nwhich really want one or the other of the fields, but as a first cut\nI'll isolate the changes to just the parser directory, more or less.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 29 Feb 2000 06:43:03 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] interesting observatation regarding views and V7.0"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> Well, if I add another field/list to the RangeTblEntry structure to\n> hold my working aliases, and if I keep the ref structure as a pristine\n> copy of the parameters specified by the user, then everything will go\n> back to working as expected. There may be other places in the code\n> which really want one or the other of the fields, but as a first cut\n> I'll isolate the changes to just the parser directory, more or less.\n\nSounds like a good plan.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 29 Feb 2000 09:59:17 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] interesting observatation regarding views and V7.0 "
}
] |
[
{
"msg_contents": "Here is my list of 7.0 changes. Please let me know of any changes I\nshould make to it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nThis release shows the continued growth of PostgreSQL. There are more\nupdated items in 7.0 than in any previous release. Don't be concerned\nthis is a dot-zero release. PostgreSQL does its best to put\nout only solid releases, and this one is no exception.\n\nMajor changes in this release:\n\nForeign Keys: Foreign keys are now implemented, with the exception of\nPARTIAL MATCH foreign keys. Many users have been asking for this\nfeature, and are pleased to finally offer it.\n\nOptimizer Overhaul: Continuing on work started a year ago, the\noptimizer has been overhauled in many significant ways, allowing better\nquery execution processing with faster performance and less memory\nusage.\n\nUpdated psql: psql, our interactive terminal monitor, has been updated,\nwith a variety of new features. See the psql manual page for the details.\n\nUpcoming Features: In 7.1, we plan to have outer joins, storage for very long\nrows, and a write-ahead logging system.\n\nBug Fixes\n---------\nPrevent function calls with more than maximum number of arguments (Tom)\nMany fixes for CASE (Tom)\nMany array fixes (Tom)\nFix SELECT coalesce(f1,0) FROM int4_tbl GROUP BY f1 (Tom)\nFix SELECT sentence.words[0] FROM sentence GROUP BY sentence.words[0] (Tom)\nAllow utility statements in plpgsql (Tom)\nFix GROUP BY scan bug (Tom)\nOptimize btree searching for cases where many equal keys exist (Tom)\nAllow bare column names to be subscripted as arrays (Tom)\nImprovements in SQL grammar processing(Tom)\nFix for views involved in INSERT ... SELECT ... (Tom)\nFix for SELECT a/2, a/2 FROM test_missing_target GROUP BY a/2 (Tom)\nFix for subselects in INSERT ... SELECT (Tom)\nPrevent INSERT ... SELECT ... ORDER BY (Tom)\nImprove type casting of int and float constants (Tom)\nCleanups for int8 inputs, range checking, and type conversion (Tom)\nFix for SELECT timespan('21:11:26'::time) (Tom)\nFixes for relations greater than 2GB, including vacuum\nImprove communication of system table changes to other running backends (Tom)\nImprove communication of user table modifications to other running backends (Tom)\nFix handling of temp tables in complex situations (Bruce, Tom)\nDisallow DROP TABLE/DROP INDEX inside a transaction block\nPrevent exponential space consumption with many AND's and OR's (Tom)\nCollect attribute selectivity values for system columns (Tom)\nAllow table locking when tables opened, improving concurrent reliability (Tom)\nFix for netmask('x.x.x.x/0') is 255.255.255.255 instead of 0.0.0.0 \n\t(Oleg Sharoiko)\nProperly quote sequence names in pg_dump (Ross J. Reedstrom)\nPrevent DESTROY DATABASE while others accessing\nPrevent any rows from being returned by GROUP BY if no rows processed (Tom)\nReduce memory usage of aggregates (Tom)\nFix SELECT COUNT(1) FROM table WHERE ...' if no rows matching WHERE (Tom)\nFix pg_upgrade so it works for MVCC(Tom)\nAdd nbtree operator class for NUMERIC(Jan)\nFix for SELECT ... WHERE x IN (SELECT ... HAVING SUM(x) > 1) (Tom)\nMake TABLE optional keyword in LOCK TABLE (Bruce)\nFix for \"f1 datetime default 'now'\" (Tom)\nAllow comment-only lines, and ;;; lines too. 
(Tom)\nImprove recovery after failed disk writes, disk full (Hiroshi)\nFix cases where table is mentioned in FROM but not joined (Tom)\nAllow HAVING clause without aggregate functions (Tom)\nFix for \"--\" comment and no trailing newline, as seen in Perl\nImprove pg_dump failure error reports (Bruce)\nPerl fix for BLOBs containing NUL characters (Douglas Thomson) \nAllow sorts and hashes to exceed 2GB file sizes (Tom)\nODBC fix for large objects (free)\nFix for pg_dump dumping of inherited rules (Tom)\nFix for NULL handling comparisons (Tom)\nFix inconsistent state caused by failed CREATE/DROP commands\nFix for dbname with dash\nFix problems with CURRENT_DATE used in DEFAULT (Tom)\nPrevent DROP INDEX from interfering with other backends (Tom)\nFix file descriptor leak in verify_password()\nFix for \"Unable to identify an operator =$\" problem\nFix ODBC so no segfault if CommLog and Debug enabled (Dirk Niggemann)\nFix for recursive exit call (Massimo)\nFix indexing of cidr\nFix for extra-long timezones (Jeroen van Vianen)\nMake pg_dump preserve primary key information (Peter E)\nPrevent databases with single quotes (Peter E)\nPrevent DROP DATABASE inside transaction (Peter E)\necpg memory leak fixes (Stephen Birch)\nFix for Ethernet MAC addresses (macaddr type) comparisons\nFix for SELECT null::text, SELECT int4fac(null) and SELECT 2 + (null) (Tom)\nFix for LIKE optimization to use indexes with multi-byte encodings (Tom)\nY2K timestamp fix (Massimo)\nFix for date/time types when overflows happened in computations (Tom)\nFix for VACUUM 'HEAP_MOVED_IN was not expected' errors (Tom)\nFix for views with tables/columns containing spaces (Tom)\nAllow array on int8 (Peter E)\nPrevent permissions on indexes (Peter E)\nFix for rounding/overflow of NUMERIC type, like NUMERIC(4,4) (Tom)\nFix for spinlock stuck problem when error is generated (Hiroshi)\nAllow NUMERIC arrays\nFix ipcclean on Linux\nFix handling of NULL constraint conditions (Tom)\nFix bugs in NUMERIC ceil() and floor() functions (Tom)\nMake char_length()/octet_length include trailing blanks (Tom)\nMade abstime/reltime use int4 instead of time_t (Peter E)\nFix memory leak in odbc driver (Nick Gorham)\nFix r-tree index optimizer selectivity (Thomas)\n\nEnhancements\n------------\nNew CLI interface include file sqlcli.h, based on SQL3/SQL98\nRemove all limits on query length, row length limit still exists (Tom)\nImprove optimizer selectivity computations and functions (Tom)\nEnable fast LIKE index processing only if index present (Tom)\nRevise parse_coerce() to handle coercion of int and float constants (Tom)\nRe-use free space on index pages with duplicates (Tom)\nImprove hash join processing (Tom)\nPrevent descending sort if result is already sorted(Hiroshi)\nAllow commuting of index scan query qualifications (Tom)\nPrefer index scans in cases where ORDER BY/GROUP BY is required (Tom)\nAllocate large memory requests in fix-sized chunks for performance (Tom)\nFix vacuum's performance by reducing memory allocation requests (Tom)\nUpdate jdbc protocol to 2.0 (Jens Glaser [email protected])\nAdd TRUNCATE command to quickly truncate relation (Mike Mascari)\nImplement constant-expression simplification (Bernard Frankpitt, Tom)\nFix to give super user and createdb user proper update catalog rights (Peter E)\nAllow more than first column to be used to determine start of index scan\n (Hiroshi)\nAllow ecpg bool variables to have NULL values (Christof)\nIssue ecpg error if NULL value is returned to variable with no NULL\nindicator
(Christof)\nAllow ^C to cancel COPY command (Massimo)\nAdd SET FSYNC and SHOW PG_OPTIONS commands(Massimo)\nImprove CREATE FUNCTION to allow type conversion specification \n\t(Bernie Frankpitt)\nAdd CmdTuples() to libpq++(Vince)\nNew CREATE CONSTRAINT TRIGGER and SET CONSTRAINTS commands(Jan)\nAllow CREATE FUNCTION WITH clause to be used for all language types\nconfigure --enable-debug adds -g (Peter E)\nconfigure --disable-debug removes -g (Peter E)\nAllow more complex default expressions (Tom)\nFirst real FOREIGN KEY constraint trigger functionality (Jan)\nAdd FOREIGN KEY ... REFERENCES ... MATCH FULL (Jan)\nAdd FOREIGN KEY ... MATCH FULL ... ON DELETE CASCADE (Jan)\nAllow WHERE restriction on ctid (physical heap location) (Hiroshi)\nMove pginterface from contrib to interface directory, rename to pgeasy (Bruce)\nAdd DEC and SESSION_USER as reserved words\nPrevent quadruple use of disk space when doing internal sorting (Tom)\nRequire SELECT DISTINCT target list to have all ORDER BY columns (Tom)\nAdd Oracle's COMMENT ON command (Mike Mascari <mascarim@yahoo.\nlibpq's PQsetNoticeProcessor function now returns previous hook(Peter E)\nPrevent PQsetNoticeProcessor from being set to NULL (Peter E)\nMake USING in COPY optional (Bruce)\nFaster sorting by calling fewer functions (Tom)\nCreate system indexes to match all system caches(Bruce, Hiroshi)\nMake system caches use system indexes(Bruce)\nMake all system indexes unique(Bruce)\nAllow subselects in the target list (Tom)\nAllow subselects on the left side of comparison operators (Tom)\nNew parallel regression test (Jan)\nChange backend-side COPY to write files with permissions 644 not 666 (Tom)\nForce permissions on PGDATA directory to be secure, even if it exists (Tom)\nAdded psql LastOid variable to return last inserted oid (Peter E)\nImprove pg_statistics management for VACUUM speed improvement (Tom)\nAllow concurrent vacuum and remove pg_vlock vacuum lock file (Tom)\nAdd permissions check so only Postgres superuser or table owner can\nvacuum (Peter E)\nNew C-routines to implement a BIT and BIT VARYING type in /contrib \n\t(Adriaan Joubert)\nNew Oracle compatible DateTime routines TO_CHAR(), TO_DATE() and \n\tFROM_CHAR() (Karel)\nNew libpq functions to allow asynchronous connections: PQconnectStart(), \n PQconnectPoll(), PQresetStart(), PQresetPoll(), PQsetenvStart(), \n PQsetenvPoll(), PQsetenvAbort (Ewan Mellor)\nNew libpq PQsetenv() function (Ewan Mellor)\ncreate/alter user extension (Peter E)\nNew postmaster.pid and postmaster.opts under $PGDATA (Tatsuo)\nNew scripts for create/drop user/db (Peter E)\nMajor psql overhaul(Peter E)\nAdd const to libpq interface(Peter E)\nNew libpq function PQoidValue (Peter E)\nShow specific non-aggregate causing problem with GROUP BY (Tom)\nForce changes to pg_shadow recreate pg_pwd file (Peter E)\nAdd aggregate(DISTINCT ...) 
(Tom)\nAllow flag to control COPY input/output of NULLs (Peter E)\nMake postgres user have a password by default (Peter E)\nAdd CREATE/ALTER/DROP GROUP (Peter E)\nAll administration scripts now support --long options (Peter E, Karel)\nVacuumdb script now supports --alldb option (Peter E)\necpg new portable FETCH syntax\nAdd ecpg EXEC SQL IFDEF, EXEC SQL IFNDEF, EXEC SQL ELSE, EXEC SQL ELIF \n\tand EXEC SQL ENDIF directives\nAdd pg_ctl script to control backend startup (Tatsuo)\nAdd postmaster.opts.default file to store startup flags (Tatsuo)\nAllow --with-mb=SQL_ASCII\nIncrease maximum number of index keys to 16 (Bruce)\nIncrease maximum number of function arguments to 16 (Bruce)\nAllow user configuration of maximum number of index keys and arguments\n(Bruce)\nFlush backend cache less frequently (Tom)\nAllow unprivileged users change their own passwords (Peter E)\nWith password authentication enabled, new users without passwords can't\nconnect (Peter E)\nDisallow dropping a user who owns a database (Peter E)\nAdd initdb --enable-multibyte option (Peter E)\nAdd option for initdb to prompts for superuser password (Peter E)\nCOPY now reuses previous memory allocation, improving performance (Tom)\nAllow complex type casts like col::numeric(9,2) and col::int2::float8 (Tom)\nUpdated user interfaces on initdb, initlocation, pg_dump, ipcclean\n(Peter E)\nNUMERIC now accepts scientific notation (Tom)\nNUMERIC to int4 rounds (Tom)\nConvert float4/8 to NUMERIC properly (Tom)\nNew pg_char_to_encoding() and pg_encoding_to_char() functions\nLibpq non-blocking mode (Alfred Perlstein)\nImprove conversion of types in casts that don't specify a length\nNew plperl internal programming language (Mark Hollomon)\nAllow COPY IN to read file that do not end with a newline (Tom)\nImprove optimization cost estimation (Tom)\nIndicate when long identifiers are truncated (Tom)\nImprove optimizer estimate of range queries x > lowbound AND x < highbound (Tom)\nAllow aggregates to use type equivalency (Peter E)\nAdd Oracle's to_char(), to_date(), to_datetime(), to_timestamp(), to_number()\n\tconversion functions (Karel Zak <[email protected]>)\nAdd SELECT DISTINCT ON (expr [, expr ...]) targetlist ... (Tom)\nCheck to be sure ORDER BY is compatible with the DISTINCT operation (Tom)\nUse DNF instead of CNF where appropriate (Tom, Taral)\nAdd NUMERIC and int8 types to ODBC\nImprove EXPLAIN results for Append, Group, Agg, Unique (Tom)\nAdded ALTER TABLE ... ADD CONSTRAINT (Stephan Szabo)\nFurther cleanup for OR-of-AND WHERE-clauses (Tom)\nMake use of index in OR clauses (x = 1 AND y = 2) OR (x = 2 AND y = 4) (Tom)\nAllow SELECT .. 
FOR UPDATE in PL/pgSQL\nEnable backward sequential scan even after reaching EOF\nAdd btree indexing of boolean values (Don Baccus)\nPrint current line number when COPY FROM fails (Massimo)\nRecognize special case of POSIX time zone: \"GMT+8\" and \"GMT-8\" (Thomas)\nAdd \"DEC\" as synonym for \"DECIMAL (Thomas)\nAdd \"SESSION_USER\" as SQL92 keyword, same as CURRENT_USER (Thomas)\nImplement column aliases (aka correlation names) and more join syntax\n(Thomas)\nAllow queries like SELECT a FROM t1 tx (a) (Thomas)\nAllow queries like SELECT * FROM t1 NATURAL JOIN t2 (Thomas)\nSmarter optimizer computations for random index page access (Tom)\nNew SET variable to control optimizer costs (Tom)\nOptimizer queries based on LIMIT, OFFSET, and EXISTS qualifications (Tom)\nReduce optimizer internal housekeeping of join paths for speedup (Tom)\nMake \"INTERVAL\" reserved word allowed as a column identifier (Thomas)\nAllow type conversion with NUMERIC (Thomas)\nMake ISO date style (2000-02-16 09:33) the default (Thomas)\nImplement REINDEX command\nAccept ALL in aggregate function SUM(ALL col) (Tom)\nPrevent GROUP BY from using column aliases (Tom)\nNew psql \\encoding option (Tatsuo)\nAllow PQrequestCancel() to terminate when in waiting-for-lock state (Jan)\nAllow negation of a negative number in all cases\n\nSource Tree Changes\n-------------------\nFix for linux PPC compile\nNew generic expression-tree-walker subroutine (Tom)\nChange form() to varargform() to prevent portability problems.\nImproved range checking for large integers on Alpha's\nClean up #include in /include directory (Bruce)\nAdd scripts for checking includes (Bruce)\nRemove un-needed #include's from *.c files (Bruce)\nChange #include's to use <> and \"\" as appropriate (Bruce)\nEnable WIN32 compilation of libpq\nAlpha spinlock fix from Uncle George <[email protected]>\nOverhaul of optimizer data structures (Tom)\nFix to cygipc library (Yutaka Tanida)\nAllow pgsql to work on newer Cygwin snapshots(Dan)\nNew catalog version number (Tom)\nAdd Linux ARM.\nRename heap_replace to heap_update\nUpdate for QNX (Kardos, Dr. Andrea)\nNew platform-specific regression handling (Tom)\nRename oid8 -> oidvector and int28 -> int2vector (Bruce)\nIncluded all yacc and lex files into the distribution (Peter E.)\nRemove lextest, no longer needed (Peter E)\nFix for libpq and psql on Win32 (Magnus)\nInternally change datetime and timespan into timestamp and interval (Thomas)",
"msg_date": "Wed, 23 Feb 2000 15:12:54 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Changes in 7.0"
},
{
"msg_contents": "On Wed, 23 Feb 2000, Bruce Momjian wrote:\n\n> Here is my list of 7.0 changes. Please let me know of any changes I\n> should make to it.\n\n> Allow ^C to cancel COPY command (Massimo)\n\nThat's cool, but if you look closely, psql doesn't do that (anymore). :(\nIs it safe to send PQcancelRequest in a copy state and then just forget\nabout it? What's the correct behaviour? With everyone requesting longjmp's\nat the last minute, I had to disable ^C during COPY.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Thu, 24 Feb 2000 14:04:52 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Changes in 7.0"
},
{
"msg_contents": "> On Wed, 23 Feb 2000, Bruce Momjian wrote:\n> \n> > Here is my list of 7.0 changes. Please let me know of any changes I\n> > should make to it.\n> \n> > Allow ^C to cancel COPY command (Massimo)\n> \n> That's cool, but if you look closely, psql doesn't do that (anymore). :(\n> Is it safe to send PQcancelRequest in a copy state and then just forget\n> about it? What's the correct behaviour? With everyone requesting longjmp's\n> at the last minute, I had to disable ^C during COPY.\n\nI assume it was during COPY and not \\copy.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 24 Feb 2000 09:23:52 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Changes in 7.0"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n>> Allow ^C to cancel COPY command (Massimo)\n\n> That's cool, but if you look closely, psql doesn't do that (anymore). :(\n> Is it safe to send PQcancelRequest in a copy state and then just forget\n> about it? What's the correct behaviour?\n\nFor a COPY OUT (from the backend), the correct behavior is same as for\nnon-copy state: fire off the cancel request and then forget about it.\nIf the backend decides to honor the request then it will terminate the\ncopy in the usual way. For a COPY IN, it's up to you to stop sending\ndata...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 24 Feb 2000 10:42:03 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Changes in 7.0 "
},
{
"msg_contents": "On Thu, 24 Feb 2000, Tom Lane wrote:\n\n> For a COPY OUT (from the backend), the correct behavior is same as for\n> non-copy state: fire off the cancel request and then forget about it.\n\nDo I have to call PQendcopy() is the question.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Thu, 24 Feb 2000 17:01:51 +0100 (MET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Changes in 7.0 "
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On Thu, 24 Feb 2000, Tom Lane wrote:\n>> For a COPY OUT (from the backend), the correct behavior is same as for\n>> non-copy state: fire off the cancel request and then forget about it.\n\n> Do I have to call PQendcopy() is the question.\n\nYes, but only after the backend sends the usual copy termination\nmessage. The cancel request doesn't affect the protocol state machine\nnor the app's interaction with libpq in the slightest. It's just a side\ncommunication to the backend (\"Psst! I'd really appreciate it if we\ncould wrap this up sooner rather than later.\")\n\nFor COPY IN, you want to stop sending data lines and send a terminator,\nthen PQendcopy() in the usual way.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 24 Feb 2000 11:17:02 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Changes in 7.0 "
},
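In libpq terms, the COPY OUT procedure Tom describes comes out roughly as below. This is a minimal sketch, not psql's actual code: drain_copy_out() and the want_cancel flag are made-up names, and it assumes the PQgetline()/PQendcopy() interface of this protocol version.

#include <stdio.h>
#include <string.h>
#include "libpq-fe.h"

static void
drain_copy_out(PGconn *conn, int want_cancel)
{
	char	buf[8192];

	if (want_cancel)
		PQrequestCancel(conn);	/* side channel only; protocol state is unchanged */

	for (;;)
	{
		int		r = PQgetline(conn, buf, sizeof(buf));

		if (r == EOF)
			break;				/* connection trouble */
		if (r == 0 && strcmp(buf, "\\.") == 0)
			break;				/* backend terminated the copy normally */
		/* r == 1 means a partial (overlong) line; just keep reading */
	}
	PQendcopy(conn);			/* finish the protocol exchange as usual */
}

Whether the backend honors the cancel or runs the COPY to completion, the client code path is identical, which is exactly Tom's point.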
{
"msg_contents": "Tom Lane writes:\n\n> For COPY IN, you want to stop sending data lines and send a terminator,\n> then PQendcopy() in the usual way.\n\nThat's trickier than it sounds. If I simply do a longjmp from the signal\nhandler and do the clean up at the setjmp destination I have no idea what\nthe state of the output buffer is. Worse yet, PQputline doesn't seem to\ncope so well with longjmps. The second alternative is to set a flag in the\nsignal handler and have handleCopyIn() check that once in a while. But\nthat leads to some non-obvious behaviour if I'm entering copy data by\nhand, such as ^C only taking effect after I press enter, and/or an extra\nzero (default) row being inserted. The way it currently looks I can't\nguarantee any consistent state either way. The proper solution would\nseemingly be to write separate handlers for interactive and file\ninput. I'll keep that in mind for next time.\n\nFor now I could only offer the hard exit in script mode and letting people\nenter their own \"\\.\" or ^D in interactive mode (i.e., ignore ^C in that\ncase).\n\nMeanwhile, ^C during COPY OUT seems back on track.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Sat, 26 Feb 2000 02:36:00 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "^C in psql (was Re: [HACKERS] Changes in 7.0)"
},
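Peter's set-a-flag alternative would look something like this. Again a sketch under stated assumptions: line-at-a-time input, and copy_cancelled/send_copy_in are hypothetical names, not psql's handleCopyIn().

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include "libpq-fe.h"

static volatile sig_atomic_t copy_cancelled = 0;

static void
sigint_handler(int signo)
{
	(void) signo;
	copy_cancelled = 1;			/* no longjmp; just remember the request */
}

static int
send_copy_in(PGconn *conn, FILE *src)
{
	char	line[8192];

	signal(SIGINT, sigint_handler);
	while (!copy_cancelled && fgets(line, sizeof(line), src) != NULL)
	{
		if (strcmp(line, "\\.\n") == 0)
			break;				/* user supplied the terminator */
		PQputline(conn, line);	/* flag is only tested between lines */
	}
	PQputline(conn, "\\.\n");	/* always terminate, even after ^C */
	return PQendcopy(conn);
}

The limitation Peter describes falls straight out of this structure: fgets() blocks on terminal input, so the flag is only noticed after the next newline.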
{
"msg_contents": "> Here is my list of 7.0 changes. Please let me know of any changes I\n> should make to it.\n\n>New pg_char_to_encoding() and pg_encoding_to_char() functions\n\ndone by me.\n\nAlso, can you add followings:\n\nNew libpq functions PQsetClientEncoding(), PQclientEncoding()\nAdd support for SJIS user defined characters\nAdd SQL_ASCII test case to the regression test\n--with-mb now deprecated\n--\nTatsuo Ishii\n",
"msg_date": "Sun, 27 Feb 2000 20:00:53 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Changes in 7.0"
},
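A quick usage sketch for the encoding calls Tatsuo lists. It assumes a multibyte-enabled build; where pg_encoding_to_char() is declared is an assumption here, hence the explicit extern.

#include <stdio.h>
#include "libpq-fe.h"

extern const char *pg_encoding_to_char(int encoding_id);	/* assumed declaration */

static void
show_encoding(PGconn *conn)
{
	if (PQsetClientEncoding(conn, "SJIS") != 0)
		fprintf(stderr, "could not switch client encoding to SJIS\n");

	/* PQclientEncoding() returns an encoding id, not a name */
	printf("client encoding is %s\n",
		   pg_encoding_to_char(PQclientEncoding(conn)));
}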
{
"msg_contents": "Done.\n\n> > Here is my list of 7.0 changes. Please let me know of any changes I\n> > should make to it.\n> \n> >New pg_char_to_encoding() and pg_encoding_to_char() functions\n> \n> done by me.\n> \n> Also, can you add followings:\n> \n> New libpq functions PQsetClientEncoding(), PQclientEncoding()\n> Add support for SJIS user defined characters\n> Add SQL_ASCII test case to the regression test\n> --with-mb now deprecated\n> --\n> Tatsuo Ishii\n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 27 Feb 2000 09:48:16 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Changes in 7.0"
}
]
[
{
"msg_contents": ">> - You cannot select the top N rows according to criterion A ordering\n>> the results with a different criterion B.\n\n> True, but I don't see how to do that with one indexscan (for that\n> matter, I don't even see how to express it in the SQL subset that\n> we support...)\n\n...That's why we proposed this syntax extension:\n\nSELECT\n.\n.\nSTOP AFTER <N> (we changed the name, but this is the LIMIT)\nRANK BY <A>\nORDER BY <B>\n\nHere you can select the best <N> rows according to <A> and then order the results on <B>. \nWe note that, not accounting for a similar extension, you could do the same thing only using a subselect (with an ORDER BY clause in the inner select, that is non-standard as well).\n\n\n>> - If you ask for the best 10 rows, from a relation including \n>> 100000 rows, you have to do a traditional sort on 100000 rows and\n>> then retain only the first 10, doing more comparisons than requested.\n\n> Not if there's an index that implements the ordering --- and if there\n> is not, I don't see how to avoid the sort anyway.\n\nOf course, if you have an index there is no problem. \nIt is even true that if you don't have an index there is no way to avoid the sort, but in that case we use a specialized sort, which does much less comparisons. \nFor example, if you want the 10 best rows from 100000, these are the average numbers of comparisons:\n\nQuickSort: 1.6E+14\nSortStop: 1.5E+11\n\n>> - You can choose a \"fast-start\" plan (i.e., basically, \n>> a pipelined plan), but you cannot performe an \"early-stop\" of \n>> the stream when you have a \"slow-start\" plan (e.g. involving sorts \n>> or hash tables).\n\n> Why not? The executor *will* stop when it has as many output rows as\n> the LIMIT demands.\n\nYes, but consider this very simple case:\n\nLIMIT 10\n[something else]\n MergeJoin (100000 rows)\n Sort (100000 rows)\n SeqScan on Table1 (100000 rows)\n\n\n IndexScan on Table2 (100 rows)\n\nAssuming that referential constraints allow us to do it, we would do the following:\n\n[something else]\n MergeJoin (10 rows)\n SortStop 10 (10 rows)\n SeqScan on Table1 (100000 rows)\n IndexScan on Table2 (100 rows)\n\nHere, we get only 10 rows from the outer relation. *In general*, this is NOT correct, but referential constraints make it safe in many cases. You can see that in the second approach, the \"[something else]\" will operate with an input stream cardinality of 10, against 100000 of the first approach. This is what we call the \"push-down\" of the Stop operator.\n\n> I'd be the first to admit that the cost model needs some fine-tuning\n> still. It's just a conceptual structure at this point.\n\nWe hope you are not considering our posts as a criticism. We used PostgreSQL as a base to our proposal, finding good results, and now we are wondering if you are interested to continue in this sense.\nBtw, DB2 currently adopts \"LIMIT\" optimization techniques similar to ours. \n\nRegards\n \nRoberto Cornacchia\n\n\n===========================================================\n\nVIRGILIO MAIL - Il tuo indirizzo E-mail gratis e per sempre\nhttp://mail.virgilio.it/\n\n\nVIRGILIO - La guida italiana a Internet\nhttp://www.virgilio.it/\n\u0001\n",
"msg_date": "Wed, 23 Feb 2000 19:10:36 -0500",
"msg_from": "\"Roberto Cornacchia\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: about 7.0 LIMIT optimization "
},
{
"msg_contents": "At 07:10 PM 2/23/00 -0500, Roberto Cornacchia wrote:\n\n>Of course, if you have an index there is no problem. \n>It is even true that if you don't have an index there is no way to avoid\nthe sort, but in that case we use a specialized sort, which does much less\ncomparisons. \n>For example, if you want the 10 best rows from 100000, these are the\naverage numbers of comparisons:\n>\n>QuickSort: 1.6E+14\n>SortStop: 1.5E+11\n\nThis makes sense ... you can stop once you can guarantee that the first\nten rows are in proper order. I'm not familiar with the algorithm\nbut not terribly surprised that one exists.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Wed, 23 Feb 2000 17:33:15 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: about 7.0 LIMIT optimization "
},
{
"msg_contents": "\"Roberto Cornacchia\" <[email protected]> writes:\n>> Why not? The executor *will* stop when it has as many output rows as\n>> the LIMIT demands.\n\n> Yes, but consider this very simple case:\n\n> LIMIT 10\n> [something else]\n> MergeJoin (100000 rows)\n> Sort (100000 rows)\n> SeqScan on Table1 (100000 rows)\n\n\n> IndexScan on Table2 (100 rows)\n\n> Assuming that referential constraints allow us to do it, we would do the\n> following:\n\n> [something else]\n> MergeJoin (10 rows)\n> SortStop 10 (10 rows)\n> SeqScan on Table1 (100000 rows)\n> IndexScan on Table2 (100 rows)\n\n> Here, we get only 10 rows from the outer relation. *In general*, this is\n> NOT correct, but referential constraints make it safe in many cases. You\n> can see that in the second approach, the \"[something else]\" will operate\n> with an input stream cardinality of 10, against 100000 of the first\n> approach. This is what we call the \"push-down\" of the Stop operator.\n\nIf I understand your point correctly, the existing code arrives at this\nsame effect through another direction: it will choose the right plan for\nthe query when the [something else] node doesn't need to read very many\nrows. This isn't reflected in the EXPLAIN output very well, which might\nbe fooling you as to what's really happening.\n\nI'm not sure about your comment about referential constraints. If you\nare doing analysis of restriction clauses to prove that a particular\nstage doesn't require reading as many rows as it otherwise would, then\nyou've done more than I have.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 23 Feb 2000 21:23:46 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: about 7.0 LIMIT optimization "
},
{
"msg_contents": "Don Baccus <[email protected]> writes:\n>> For example, if you want the 10 best rows from 100000, these are the\n> average numbers of comparisons:\n>> \n>> QuickSort: 1.6E+14\n>> SortStop: 1.5E+11\n\nAre there some zeroes missing here? That sounds like an awful lot of\noperations for a quicksort of only 1E5 elements...\n\n> This makes sense ... you can stop once you can guarantee that the first\n> ten rows are in proper order. I'm not familiar with the algorithm\n> but not terribly surprised that one exists.\n\nThe obvious way to do it would be with a heap-based sort. After you've\nbuilt the heap, you pull out the first ten elements and then stop.\nOffhand this only seems like it'd save about half the work, though,\nso maybe Roberto has a better idea.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 23 Feb 2000 21:30:29 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: about 7.0 LIMIT optimization "
},
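Tom's half-the-work estimate is for heapifying the entire input first. The bigger savings usually come from the bounded-heap variant: keep a max-heap of only the k best rows seen so far, for O(n log k) comparisons rather than O(n log n). A sketch in plain C, with hypothetical names and int keys standing in for tuples; it is illustrative, not the executor code under discussion.

static void
sift_down(int *heap, int n, int i)
{
	for (;;)
	{
		int		l = 2 * i + 1;
		int		r = l + 1;
		int		big = i;

		if (l < n && heap[l] > heap[big])
			big = l;
		if (r < n && heap[r] > heap[big])
			big = r;
		if (big == i)
			return;
		{
			int		t = heap[i];

			heap[i] = heap[big];
			heap[big] = t;
		}
		i = big;
	}
}

/*
 * Keep the k smallest of src[0..n-1] in heap[].  A max-heap is used so
 * the worst survivor sits at the root and is cheap to evict.
 */
static int
top_k_smallest(const int *src, int n, int *heap, int k)
{
	int		used = 0;
	int		i;

	for (i = 0; i < n; i++)
	{
		if (used < k)
		{
			int		c = used++;

			heap[c] = src[i];
			while (c > 0 && heap[(c - 1) / 2] < heap[c])
			{
				int		p = (c - 1) / 2;
				int		t = heap[c];

				heap[c] = heap[p];
				heap[p] = t;
				c = p;
			}
		}
		else if (src[i] < heap[0])	/* beats the current worst survivor */
		{
			heap[0] = src[i];
			sift_down(heap, k, 0);
		}
	}
	return used;				/* number of rows actually kept */
}

Sorting the k survivors afterward adds only O(k log k), so for 10 rows out of 100000 the cost is dominated by a single pass over the input, which is the shape of savings Roberto's SortStop numbers suggest.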
{
"msg_contents": "At 09:30 PM 2/23/00 -0500, Tom Lane wrote:\n>Don Baccus <[email protected]> writes:\n>>> For example, if you want the 10 best rows from 100000, these are the\n>> average numbers of comparisons:\n>>> \n>>> QuickSort: 1.6E+14\n>>> SortStop: 1.5E+11\n>\n>Are there some zeroes missing here? That sounds like an awful lot of\n>operations for a quicksort of only 1E5 elements...\n\nYeah, obviously one or more of his numbers are wrong. Let's see, a\nbubble sort's only O(n^2), \"only\" 1E10/2 comparisons for 1E5 elements,\nright? Surely O(n*log(n)) is quicker :)\n\n>\n>> This makes sense ... you can stop once you can guarantee that the first\n>> ten rows are in proper order. I'm not familiar with the algorithm\n>> but not terribly surprised that one exists.\n>\n>The obvious way to do it would be with a heap-based sort. After you've\n>built the heap, you pull out the first ten elements and then stop.\n>Offhand this only seems like it'd save about half the work, though,\n>so maybe Roberto has a better idea.\n\nI'd like to see some elaboration.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Wed, 23 Feb 2000 18:33:36 -0800",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: about 7.0 LIMIT optimization "
}
]
[
{
"msg_contents": "\n> > In this sense a commit is not partial. The commit should commit\n> > all statements that were not in error. \n> \n> That interpretation eliminates an absolutely essential capability\n> (all-or-none behavior) in favor of what strikes me as a very minor\n> programming shortcut.\n\nThe all-or-none behavior is what you get if you simply do a rollback\non any error or warning. I don't see a special programming difficulty here.\n\n> \n> > All other DB's behave in this way.\n> \n> I find this hard to believe, and even harder to believe that it's\n> mandated by the standard. What you're essentially claiming is that\n> everyone but us has nested transactions\n\nThey don't necessarily have nested tx, although some have.\nAll they provide is atomicity of single statements.\n\n> (which'd be the only way to\n> roll back a single failed statement inside a transaction) and that\n> SQL92 requires nested transactions --- yet it never uses the \n> phrase nor\n> makes the obvious step to allowing user-specified nested transactions.\n\nYes, but they say \"statement\" when they mention the all-or-none behavior,\nnot transaction.\n\nAndreas\n",
"msg_date": "Thu, 24 Feb 2000 10:04:10 +0100",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: [HACKERS] TRANSACTIONS "
},
{
"msg_contents": "Zeugswetter Andreas SB <[email protected]> writes:\n>> I find this hard to believe, and even harder to believe that it's\n>> mandated by the standard. What you're essentially claiming is that\n>> everyone but us has nested transactions\n\n> They don't necessarily have nested tx, although some have.\n> All they provide is atomicity of single statements.\n\nIf it looks like a duck, walks like a duck, and quacks like a duck,\nit's a duck no matter what it's called. How would you provide atomicity\nof a single statement without a transaction-equivalent implementation?\nThat statement might be affecting many tuples in several different\ntables. It's not noticeably easier to roll back one statement than\na whole sequence of them.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 24 Feb 2000 11:39:58 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: [HACKERS] TRANSACTIONS "
}
]