[
{
"msg_contents": "\nHi,\n\nI've just found very suspicious directory entries in\nftp.postgresql.org/pub/.incoming, for sure it's an attempt to exploit some\nsecuirity hole to gain access to your machine or machines mirroring the FTP\nsite. The entries seems to be here for a lot of time, but I didn't seem to see\nany reference about them on the mailing lists.\n\nThere are nested directories that create a pathname with a shell code at the\nend, very suitable to overflow some stack...\n\n/ftp/pub/ftp.postgresql.org/pub/.incoming/������������������������������������������������������������������������������������������\n��������������������������������������������������������������������������������������������������������������/���������������������\n������������������������������������������������������������������������������������������������������������������������������������\n�����������������������������������������������/������������������������������������������������������������������������������������\n��������������������������������������������������������������������������������������������������������������������/���������������\n������������������������������������������������������������������������������������������������������������������������������������\n���������������/1�1Û°Í1��Í1�1Û°.Í�O1�1�^�'�^�ű�Í1��^�=Í1��������1ɱVÎ����^�=�^Í1��F��F^L���V^LÍ����/bin/sh\n\nEntries have been last modified (on my server) at this time:\n\ndrwxr-xr-x 3 ftp ftp 1024 Jul 28 20:37\n?????????????????????????????????????????????????????????????????????????????\n???????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????\n\nPlease, delete the entries as soon as possible, but be careful that if the\nexploitable hole is in rm or mc (or whatever tool you intend to use to delete\nthem), you could activate the exploit.\n\nA small look at the BugTRAQ archives should help you finding what tool has the\nhole these 
entries are made to exploit.\n\nPerhaps the incoming dir should be monitored a little more.\n\nBye!\n\n-- \n Daniele\n\n-------------------------------------------------------------------------------\n Daniele Orlandi - Utility Line Italia - http://www.orlandi.com\n Via Mezzera 29/A - 20030 - Seveso (MI) - Italy\n-------------------------------------------------------------------------------\n",
"msg_date": "Tue, 24 Aug 1999 00:45:34 +0200",
"msg_from": "Daniele Orlandi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Attempt to crack ftp site"
},
{
"msg_contents": "\nHi Daniele...\n\n\tI just checked the main repository, and no such file exists\nthere...my guess is that this is specific to your server?\n\n\nOn Tue, 24 Aug 1999, Daniele Orlandi wrote:\n\n> \n> Hi,\n> \n> I've just found very suspicious directory entries in\n> ftp.postgresql.org/pub/.incoming, for sure it's an attempt to exploit some\n> secuirity hole to gain access to your machine or machines mirroring the FTP\n> site. The entries seems to be here for a lot of time, but I didn't seem to see\n> any reference about them on the mailing lists.\n> \n> There are nested directories that create a pathname with a shell code at the\n> end, very suitable to overflow some stack...\n> \n> /ftp/pub/ftp.postgresql.org/pub/.incoming/������������������������������������������������������������������������������������������\n> ��������������������������������������������������������������������������������������������������������������/���������������������\n> ������������������������������������������������������������������������������������������������������������������������������������\n> �����������������������������������������������/������������������������������������������������������������������������������������\n> ��������������������������������������������������������������������������������������������������������������������/���������������\n> ������������������������������������������������������������������������������������������������������������������������������������\n> ���������������/1�1۰̀1��̀1�1۰.̀�O1�1�^�'�^�ű�̀1��^�=̀1��������1ɱVΉ����^�=�^̀1��F��F^L���V^L̀����/bin/sh\n> \n> Entries have been last modified (on my server) at this time:\n> \n> drwxr-xr-x 3 ftp ftp 1024 Jul 28 20:37\n> ?????????????????????????????????????????????????????????????????????????????\n> ???????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????\n> \n> Please, delete the entries as soon as 
possible, but be careful that if the\n> exploitable hole is in rm or mc (or whatever tool you intend to use to delete\n> them), you could activate the exploit.\n> \n> A small look at the BugTRAQ archives should help you finding what tool has the\n> hole these entries are made to exploit.\n> \n> Pheraps the incoming dir should be monitored a little more .\n> \n> Bye!\n> \n> -- \n> Daniele\n> \n> -------------------------------------------------------------------------------\n> Daniele Orlandi - Utility Line Italia - http://www.orlandi.com\n> Via Mezzera 29/A - 20030 - Seveso (MI) - Italy\n> -------------------------------------------------------------------------------\n> \n> ************\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 23 Aug 1999 20:20:13 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [MIRRORS] Attempt to crack ftp site"
}
] |
[
{
"msg_contents": "Hi,\n\nShot, Leon. The patch removes the #define YY_USES_REJECT from scan.c, which\nmeans we now have expandable tokens. Of course, it also removes the\nscanning of \"embedded minuses\", which apparently causes the optimizer to\nunoptimize a little. However, the next step is attacking the limit on the\nsize of string literals. These seemed to be wired to YY_BUF_SIZE, or\nsomething. Is there any reason for this?\n\n\nMikeA\n\n",
"msg_date": "Tue, 24 Aug 1999 10:20:31 +0200",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Lex and things..."
},
{
"msg_contents": "Ansley, Michael wrote:\n> \n> Hi,\n> \n> Shot, Leon. The patch removes the #define YY_USES_REJECT from scan.c, which\n> means we now have expandable tokens. Of course, it also removes the\n> scanning of \"embedded minuses\", which apparently causes the optimizer to\n> unoptimize a little. \n\nOh, no. Unary minus gets to grammar parser and there is recognized as\nsuch. Then for numeric constants it becomes an *embedded* minus in\nfunction doNegate. So unary minus after parser in numeric constants\nis embedded minus, as it was earlier before patch. In other words,\nI can see no change in representation of grammar after patching.\n\n> However, the next step is attacking the limit on the\n> size of string literals. These seemed to be wired to YY_BUF_SIZE, or\n> something. Is there any reason for this?\n\nHmm. There is something going on to remove fixed length limits \nentirely, maybe someone is already doing something to lexer in\nthat respect? If no, I could look at what can be done there.\n\n-- \nLeon.\n\n",
"msg_date": "Tue, 24 Aug 1999 15:16:00 +0500",
"msg_from": "Leon <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Lex and things..."
}
] |
[
{
"msg_contents": "\n> Hmm,Index scan is chosen to select all rows.\n> AFAIK,sequential scan + sort is much faster than index scan in\n> most cases.\n> \n> \tcost of index scan < cost of sequential scan + cost of sort\n> \nThis is usually true. It might need resources though that are not available,\ne.g. 8 GB sort space. It also depends on whether the application is\ninterested in\nfirst row (interactive), or all row performance (batch). Other DB's can\nswitch modes \nto decide on the wanted behavior. So I think there is no yes/no decision on\nthis.\n\nAndreas\n",
"msg_date": "Tue, 24 Aug 1999 11:48:05 +0200",
"msg_from": "Zeugswetter Andreas IZ5 <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] Caution: tonight's commits force initdb"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Zeugswetter\n> Andreas IZ5\n> Sent: Tuesday, August 24, 1999 6:48 PM\n> To: pgsql-hackers\n> Subject: AW: [HACKERS] Caution: tonight's commits force initdb\n> \n> \n> \n> > Hmm,Index scan is chosen to select all rows.\n> > AFAIK,sequential scan + sort is much faster than index scan in\n> > most cases.\n> > \n> > \tcost of index scan < cost of sequential scan + cost of sort\n> > \n> This is usually true. It might need resources though that are not \n> available,\n\nWithout taking SORT into account\n\n\t[From my example]\n\n\tcost of sequential scan = 1716.32 and\n\tcost of index scan = 2284.55\n\n\tcost of sequential scan > cost of index scan * 0.7\n\nIt's unbelievable for me.\n\n> e.g. 8 GB sort space. It also depends on whether the application is\n> interested in\n> first row (interactive), or all row performance (batch). Other DB's can\n> switch modes \n> to decide on the wanted behavior. So I think there is no yes/no \n> decision on\n> this.\n>\n\nWe could use LIMIT clause to get first rows now and optimizer\nshould take LIMIT/OFFSET into account(TODO item).\n\nRegards.\n\nHiroshi Inoue\[email protected]\n \n\n",
"msg_date": "Tue, 24 Aug 1999 19:39:03 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Caution: tonight's commits force initdb"
},
{
"msg_contents": "Zeugswetter Andreas IZ5 wrote:\n> \n> > Hmm,Index scan is chosen to select all rows.\n> > AFAIK,sequential scan + sort is much faster than index scan in\n> > most cases.\n> >\n> > cost of index scan < cost of sequential scan + cost of sort\n> >\n> This is usually true. It might need resources though that are not available,\n> e.g. 8 GB sort space. It also depends on whether the application is\n> interested in\n> first row (interactive), or all row performance (batch). Other DB's can\n> switch modes\n> to decide on the wanted behavior. So I think there is no yes/no decision on\n> this.\n\nI feel the decision should be based on all resources required including\nCPU, Memory, and I/O by both the server and all clients. In my experience the\nindex scan *always* comes out on top on average for small, medium and large\nresult sets with single row fetch. Now if only we can get postgres to support \nsingle row fetch without having to use transactions and cursors... then I \nbelieve that postgres could give Informix and Oracle a serious run for \ntheir money.\n\n--------\nRegards\nTheo\n",
"msg_date": "Tue, 24 Aug 1999 17:30:22 +0200",
"msg_from": "Theo Kramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] Caution: tonight's commits force initdb"
}
] |
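Hiroshi's arithmetic in the thread above can be sanity-checked directly. A small C sketch (the cost numbers are the ones he quotes from his example; the 0.7 factor is his discount, and the helper name is illustrative):

```c
/* Check of Hiroshi's figures: the estimated sequential-scan cost
 * (1716.32) exceeds 70% of the estimated index-scan cost (2284.55),
 * i.e. the index scan is not even 30% more expensive than the
 * sequential scan, which is what he finds surprising. */
static int seqscan_exceeds_discounted_indexscan(double seq_cost,
                                                double idx_cost)
{
    return seq_cost > idx_cost * 0.7;
}
```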
[
{
"msg_contents": ">> > Shot, Leon. The patch removes the #define YY_USES_REJECT from scan.c,\nwhich\n>> > means we now have expandable tokens. Of course, it also removes the\n>> > scanning of \"embedded minuses\", which apparently causes the optimizer\nto\n>> > unoptimize a little. \n>> \n>> Oh, no. Unary minus gets to grammar parser and there is recognized as\n>> such. Then for numeric constants it becomes an *embedded* minus in\n>> function doNegate. So unary minus after parser in numeric constants\n>> is embedded minus, as it was earlier before patch. In other words,\n>> I can see no change in representation of grammar after patching.\nGreat.\n>> \n>> > However, the next step is attacking the limit on the\n>> > size of string literals. These seemed to be wired to YY_BUF_SIZE, or\n>> > something. Is there any reason for this?\n>> \n>> Hmm. There is something going on to remove fixed length limits \n>> entirely, maybe someone is already doing something to lexer in\n>> that respect? If no, I could look at what can be done there.\nYes, me. I've removed the query string limit from psql, libpq, and as much\nof the backend as I can see. I have done some (very) preliminary testing,\nand managed to get a 95kB query to execute. However, the two remaining\nproblems that I have run into so far are token size (which you have just\nremoved, many thanks ;-), and string literals, which are limited, it seems\nto YY_BUF_SIZE (I think).\n\nYou see, if I can get the query string limited removed, perhaps someone who\nknows a bit more than I do will do something like, hmmm, say, remove the\nblock size limit from tuple size... hint, hint... anybody...\n\nMikeA\n\n\n>> \n>> -- \n>> Leon.\n>> \n",
"msg_date": "Tue, 24 Aug 1999 12:56:33 +0200",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Lex and things..."
},
{
"msg_contents": "Ansley, Michael wrote:\n\n> >> Hmm. There is something going on to remove fixed length limits\n> >> entirely, maybe someone is already doing something to lexer in\n> >> that respect? If no, I could look at what can be done there.\n> Yes, me. I've removed the query string limit from psql, libpq, and as much\n> of the backend as I can see. I have done some (very) preliminary testing,\n> and managed to get a 95kB query to execute. However, the two remaining\n> problems that I have run into so far are token size (which you have just\n> removed, many thanks ;-), \n\nI'm afraid not. There is arbitrary limit (named NAMEDATALEN) in lexer.\nIf identifier exeeds it, it gets '\\0' at that limit, so truncated\neffectively. Strings are also limited by MAX_PARSE_BUFFER which is\nfinally something like QUERY_BUF_SIZE = 8k*2.\n\nSeems that string literals are the primary target, because it is\nreal-life constraint here now. This is not the case with supposed\nhuge identifiers. Should I work on it, or will you do it yourself?\n\n> and string literals, which are limited, it seems\n> to YY_BUF_SIZE (I think).\n\n-- \nLeon.\n\n",
"msg_date": "Tue, 24 Aug 1999 17:27:00 +0500",
"msg_from": "Leon <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Lex and things..."
},
{
"msg_contents": "> I'm afraid not. There is arbitrary limit (named NAMEDATALEN) in lexer.\n> If identifier exeeds it, it gets '\\0' at that limit, so truncated\n> effectively. Strings are also limited by MAX_PARSE_BUFFER which is\n> finally something like QUERY_BUF_SIZE = 8k*2.\n\nI think NAMEDATALEN refers to the size of a NAME field in the database,\nwhich is used to store attribute names etc. So you cannot exceed\nNAMEDATALEN, or the identifier won't fit into the system tables.\n\nAdriaan\n",
"msg_date": "Tue, 24 Aug 1999 16:35:02 +0300",
"msg_from": "Adriaan Joubert <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Lex and things..."
},
{
"msg_contents": "Adriaan Joubert wrote:\n\n> I think NAMEDATALEN referes to the size of a NAME field in the database,\n> which is used to store attribute names etc. So you cannot exceed\n> NAMEDATALEN, or the identifier won't fit into the system tables.\n\nOk. Let's leave identifiers alone.\n\n-- \nLeon.\n\n",
"msg_date": "Tue, 24 Aug 1999 19:09:56 +0500",
"msg_from": "Leon <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Lex and things..."
}
] |
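The silent truncation Leon describes in this thread can be sketched as follows. NAMEDATALEN defaulted to 32 in PostgreSQL at the time (31 usable characters plus the terminating '\0'); the helper name here is illustrative, not the actual lexer code:

```c
#include <string.h>

#define NAMEDATALEN 32          /* name length limit, incl. trailing '\0' */

/* Sketch of the truncation Leon describes: an identifier longer
 * than NAMEDATALEN-1 characters gets a '\0' written at the limit,
 * so anything beyond it is silently lost. */
static void truncate_identifier(char *ident)
{
    if (strlen(ident) >= NAMEDATALEN)
        ident[NAMEDATALEN - 1] = '\0';
}
```

This is why, as Adriaan points out, identifiers cannot simply be made unlimited: the truncated form is what has to fit into the fixed-width NAME columns of the system tables.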
[
{
"msg_contents": "Sorry, I forgot to mention in the previous mail: I sent patches to the\npatches mailing list (available from the web server), which patch psql, and\nlibpq, and scan.l (except for you patch). The were sent at the beginning of\nthis month, so maybe get them, and see how they work for you.\n\n>> -----Original Message-----\n>> From: Ansley, Michael [mailto:[email protected]]\n>> Sent: Tuesday, August 24, 1999 12:57 PM\n>> To: 'Leon'; '[email protected]'\n>> Subject: RE: [HACKERS] Lex and things...\n>> \n>> \n>> >> > Shot, Leon. The patch removes the #define \n>> YY_USES_REJECT from scan.c,\n>> which\n>> >> > means we now have expandable tokens. Of course, it \n>> also removes the\n>> >> > scanning of \"embedded minuses\", which apparently causes \n>> the optimizer\n>> to\n>> >> > unoptimize a little. \n>> >> \n>> >> Oh, no. Unary minus gets to grammar parser and there is \n>> recognized as\n>> >> such. Then for numeric constants it becomes an *embedded* minus in\n>> >> function doNegate. So unary minus after parser in numeric \n>> constants\n>> >> is embedded minus, as it was earlier before patch. In other words,\n>> >> I can see no change in representation of grammar after patching.\n>> Great.\n>> >> \n>> >> > However, the next step is attacking the limit on the\n>> >> > size of string literals. These seemed to be wired to \n>> YY_BUF_SIZE, or\n>> >> > something. Is there any reason for this?\n>> >> \n>> >> Hmm. There is something going on to remove fixed length limits \n>> >> entirely, maybe someone is already doing something to lexer in\n>> >> that respect? If no, I could look at what can be done there.\n>> Yes, me. I've removed the query string limit from psql, \n>> libpq, and as much\n>> of the backend as I can see. I have done some (very) \n>> preliminary testing,\n>> and managed to get a 95kB query to execute. 
However, the \n>> two remaining\n>> problems that I have run into so far are token size (which \n>> you have just\n>> removed, many thanks ;-), and string literals, which are \n>> limited, it seems\n>> to YY_BUF_SIZE (I think).\n>> \n>> You see, if I can get the query string limited removed, \n>> perhaps someone who\n>> knows a bit more than I do will do something like, hmmm, \n>> say, remove the\n>> block size limit from tuple size... hint, hint... anybody...\n>> \n>> MikeA\n>> \n>> \n>> >> \n>> >> -- \n>> >> Leon.\n>> >> \n>> \n>> ************\n>> \n",
"msg_date": "Tue, 24 Aug 1999 13:14:40 +0200",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Lex and things..."
},
{
"msg_contents": "Ansley, Michael wrote:\n> \n> Sorry, I forgot to mention in the previous mail: I sent patches to the\n> patches mailing list (available from the web server), which patch psql, and\n> libpq, and scan.l (except for you patch). The were sent at the beginning of\n> this month, so maybe get them, and see how they work for you.\n\nHmm. This is beta - testing? I'm afraid there aren't many resources\nwith me for it (time, experience etc.). What I can do now is make\nle-e-etlle changes (improvements, I hope) to the code :)\n-- \nLeon.\n\n\n",
"msg_date": "Tue, 24 Aug 1999 17:16:44 +0500",
"msg_from": "Leon <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Lex and things..."
}
] |
[
{
"msg_contents": "As far as I understand it, the MAX_PARSE_BUFFER limit only applies if char\nparsestring[] is used, not if char *parsestring is used. This is the whole\nreason for using flex. And scan.l is set up to compile using char\n*parsestring, not char parsestring[].\n\nThe NAMEDATALEN limit is imposed by the db structure, and is the limit of an\nidentifier. Because this is not actual data, I'm not too concerned with\nthis at the moment. As long as we can get pretty much unlimited data into\nthe tuples, I don't care what I have to call my tables, views, procedures,\netc.\n\n>> \n>> Ansley, Michael wrote:\n>> \n>> > >> Hmm. There is something going on to remove fixed length limits\n>> > >> entirely, maybe someone is already doing something to lexer in\n>> > >> that respect? If no, I could look at what can be done there.\n>> > Yes, me. I've removed the query string limit from psql, libpq, and as\nmuch\n>> > of the backend as I can see. I have done some (very) preliminary\ntesting,\n>> > and managed to get a 95kB query to execute. However, the two remaining\n>> > problems that I have run into so far are token size (which you have\njust\n>> > removed, many thanks ;-), \n>> \n>> I'm afraid not. There is arbitrary limit (named NAMEDATALEN) \n>> in lexer.\n>> If identifier exeeds it, it gets '\\0' at that limit, so truncated\n>> effectively. Strings are also limited by MAX_PARSE_BUFFER which is\n>> finally something like QUERY_BUF_SIZE = 8k*2.\n>> \n>> Seems that string literals are the primary target, because it is\n>> real-life constraint here now. This is not the case with supposed\n>> huge identifiers. Should I work on it, or will you do it yourself?\n>> \n>> > and string literals, which are limited, it seems\n>> > to YY_BUF_SIZE (I think).\n>> \n>> -- \n>> Leon.\n>> \n",
"msg_date": "Tue, 24 Aug 1999 15:13:15 +0200",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Lex and things..."
},
{
"msg_contents": "Ansley, Michael wrote:\n> \n> As far as I understand it, the MAX_PARSE_BUFFER limit only applies if char\n> parsestring[] is used, not if char *parsestring is used. This is the whole\n> reason for using flex. And scan.l is set up to compile using char\n> *parsestring, not char parsestring[].\n> \n\nWhat is defined explicitly:\n\n#ifdef YY_READ_BUF_SIZE\n#undef YY_READ_BUF_SIZE\n#endif\n#define YY_READ_BUF_SIZE\tMAX_PARSE_BUFFER\n\n(these strings are repeated twice :)\n\n...\nchar literal[MAX_PARSE_BUFFER];\n\n...\n<xq>{xqliteral} {\n\t\t\t\t\tif ((llen+yyleng) > (MAX_PARSE_BUFFER - 1))\n\t\t\t\t\t\telog(ERROR,\"quoted string parse buffer of %d chars\nexceeded\",MAX_PARSE_BUFFER);\n\t\t\t\t\tmemcpy(literal+llen, yytext, yyleng+1);\n\t\t\t\t\tllen += yyleng;\n\t\t\t\t}\n\nSeems that limits are everywhere ;)\n\n-- \nLeon.\n\n\n",
"msg_date": "Tue, 24 Aug 1999 19:09:06 +0500",
"msg_from": "Leon <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Lex and things..."
}
] |
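The fixed `char literal[MAX_PARSE_BUFFER]` Leon quotes above is exactly the kind of limit that can be lifted with a dynamically grown buffer, the approach PostgreSQL's scan.l eventually took. A minimal sketch with hypothetical names (`addlit`, `literalbuf`) — this is not the actual patch, and error handling for a failed realloc is omitted for brevity:

```c
#include <stdlib.h>
#include <string.h>

/* Dynamically grown replacement for the fixed-size literal buffer:
 * doubles capacity as needed instead of elog()ing when a string
 * literal outgrows MAX_PARSE_BUFFER. */
static char  *literalbuf = NULL;
static size_t literallen = 0;     /* bytes used, excluding '\0' */
static size_t literalalloc = 0;   /* bytes allocated */

static void addlit(const char *ytext, size_t yleng)
{
    if (literallen + yleng + 1 > literalalloc)
    {
        size_t want = literallen + yleng + 1;
        literalalloc = literalalloc ? literalalloc : 1024;
        while (literalalloc < want)
            literalalloc *= 2;
        literalbuf = realloc(literalbuf, literalalloc);  /* no error check */
    }
    memcpy(literalbuf + literallen, ytext, yleng);
    literallen += yleng;
    literalbuf[literallen] = '\0';
}
```

Each `<xq>{xqliteral}` action would then call `addlit(yytext, yyleng)` instead of bounds-checking against MAX_PARSE_BUFFER.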
[
{
"msg_contents": "Adam Ulmer <[email protected]> writes:\n>> can anyone tell me where I need to look so that I can calculate the\n>> amount of shmem req'd by postmaster and N backends?\n\nEasiest way is to try it and see ;-). AFAIR the space allocated per\nbackend is miniscule compared to the space per disk buffer, so you could\nuse \"8K per buffer plus some constant\" as a good first approximation.\nIf you want to know what the delta per backend is, then try a few\ndifferent -N values with fixed -B and look at what ipcs says...\n\n> different invocations of postmaster with -N options set at 16, 32, and\n> 1024 (which I gather means -B options of 32, 64, and 2048).\n\nWe require a *minimum* of 2 buffers per backend --- if you have too\nfew buffers then you'd lose performance due to contention for buffers.\nMore is probably a good idea. The ideal function is probably some fixed\nnumber like a few dozen (to cache the system tables) plus X per backend,\nwhere I suspect X should be more like 5 to 10. But I don't know that\nanyone has really tried to measure what a good choice is for -B versus\n-N. I've cc'd this to pghackers in case anyone there has results to\nshare.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 24 Aug 1999 16:36:04 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: memory requirements question "
}
] |
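Tom's rule of thumb above ("8K per buffer plus some constant") can be written down as a toy estimator. The fixed and per-backend overheads below are placeholder guesses, not measured values — as he says, `ipcs` on a running installation with varying -N and -B is the real authority:

```c
#define BLCKSZ 8192                          /* disk buffer (block) size */

/* First-approximation shared memory requirement, per Tom's rule:
 * dominated by 8K per -B buffer. The constant and per-backend
 * overheads here are illustrative placeholders only. */
static long estimate_shmem(long n_buffers, long n_backends)
{
    const long fixed_overhead = 256 * 1024;  /* placeholder guess */
    const long per_backend    = 8 * 1024;    /* placeholder guess */
    return n_buffers * BLCKSZ + n_backends * per_backend + fixed_overhead;
}
```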
[
{
"msg_contents": "PLEASE does anybody know any solution to avoid the pathological growth of\npg_log in 6.5.1?\nI am intensively using begin transaction - end transaction and every 15\nminutes regenerating some tables inside a transaction block.\nBut 1 GB of bloat in 10 days is unacceptable.\n\nThanks for ANY response.\n\nAre you considering it a bug, or is this only a feature?\n\nIs there any chance that the behaviour changes if I downgrade to 6.4?\n \nRichard Bouska\[email protected]\n\n",
"msg_date": "Wed, 25 Aug 1999 08:36:10 +0200 (CEST)",
"msg_from": "Richard Bouska <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_log 100MB per day !!! on 5MB of data"
}
] |
[
{
"msg_contents": "Yes, I'll go with that.\n\n>> \n>> Adriaan Joubert wrote:\n>> \n>> > I think NAMEDATALEN referes to the size of a NAME field in \n>> the database,\n>> > which is used to store attribute names etc. So you cannot exceed\n>> > NAMEDATALEN, or the identifier won't fit into the system tables.\n>> \n>> Ok. Let's leave identifiers alone.\n>> \n>> -- \n>> Leon.\n>> \n>> \n>> ************\n>> \n",
"msg_date": "Wed, 25 Aug 1999 09:45:47 +0200",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Lex and things..."
}
] |
[
{
"msg_contents": ">> \n>> Ansley, Michael wrote:\n>> > \n>> > As far as I understand it, the MAX_PARSE_BUFFER limit only applies if\nchar\n>> > parsestring[] is used, not if char *parsestring is used. This is the\nwhole\n>> > reason for using flex. And scan.l is set up to compile using char\n>> > *parsestring, not char parsestring[].\n>> > \n>> \n>> What is defined explicitly:\n>> \n>> #ifdef YY_READ_BUF_SIZE\n>> #undef YY_READ_BUF_SIZE\n>> #endif\n>> #define YY_READ_BUF_SIZE\tMAX_PARSE_BUFFER\n>> \n>> (these strings are repeated twice :)\nI noticed that, but hey, who am I to argue.\n\n>> \n>> ...\n>> char literal[MAX_PARSE_BUFFER];\n>> \n>> ...\n>> <xq>{xqliteral} {\n>> \t\t\t\t\tif ((llen+yyleng) > \n>> (MAX_PARSE_BUFFER - 1))\n>> \t\t\t\t\t\t\n>> elog(ERROR,\"quoted string parse buffer of %d chars\n>> exceeded\",MAX_PARSE_BUFFER);\n>> \t\t\t\t\tmemcpy(literal+llen, \n>> yytext, yyleng+1);\n>> \t\t\t\t\tllen += yyleng;\n>> \t\t\t\t}\n>> \n>> Seems that limits are everywhere ;)\n>> \n>> -- \n>> Leon.\nI think we can turn literal into a char *, if we change the code for\n<xq>{xqliteral}. This doesn't look like it will be too much of a mission,\nbut the outer limit is going to be close to the block size, because tuples\ncan't expand past the end of a block. I think that it would be wise to\nleave this limit in place until such time as the tuple size limit is fixed.\nThen we can remove it.\n\nSo, for the moment, I think we can consider the job pretty much done, apart\nfrom bug-fixes. We can revisit the MAX_PARSE_BUFFER limit when tuple size\nis delinked from block size. My aim with this work was to remove the\ngeneral limit on the length of a query string, and that has basically been\nachieved. We have, as a result of the work, come across other limits, but\nthose have dependencies, and will have to wait.\n\n\n\nCheers...\n\n\nMikeA\n",
"msg_date": "Wed, 25 Aug 1999 09:55:15 +0200",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Lex and things..."
}
] |
[
{
"msg_contents": "I attach the patch for Statement Triggers (StmtTrig) in PostgreSQL.\nStatement triggers are executed only once per command used in the\ndefinition, regardless of the number of tuples affected. With this patch\nthe following statement is valid:\n\n \"CREATE TRIGGER disp1 BEFORE INSERT ON tbtest FOR EACH STATEMENT EXECUTE\nPROCEDURE FUNTEST();\"\n\nThanks.\n\n Notes of use:\n\n Keep in mind that when creating a StmtTrig, the functions executed get the\ntuples (NEW/OLD if PL, tg_trigtuple and tg_newtuple in C) set to NULL.\n\nIf there are statement and row triggers defined for the same table and the\nsame event:\na) if the event is BEFORE, the statement trigger is executed prior to any row\ntriggers\nb) if the event is AFTER, all row triggers are executed prior to the statement\ntrigger\n\nTODO triggers list:\n ->Order triggers following the recommendations of SQL3.\n ->Modify PL/SQL to access the NEW/OLD table.",
"msg_date": "Wed, 25 Aug 1999 13:28:22 +0200",
"msg_from": "\"F.J. Cuberos\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "PATCH for Statement Triggers Support"
}
] |
[
{
"msg_contents": "The reason for the tag was to be able to return to the 6.5 release source\ncode. It's production code, and should be accessible at least for the next\ncouple of months.\n\nWas a tag created for 6.5.1? The object is to be able to check out any\nparticular release, bugs and all, whenever we feel like it.\n\nMikeA\n\n\n\n>> \n>> \n>> Tatsuo Ishii <[email protected]> writes:\n>> > Just for a confirmation: I see REL6_5_PATCHES and REL6_5 Tag in the\n>> > CVS respository. I thought that REL6_5_PATCHES is the Tag \n>> for the 6.5\n>> > statble tree and would eventually become 6.5.2. If so, what is the\n>> > REL6_5 Tag? Or I totally miss the point?\n>> \n>> Right, REL6_5_PATCHES is the 6.5.* branch. REL6_5 is just a tag ---\n>> that is, it's effectively a frozen snapshot of the 6.5 release,\n>> not an evolvable branch.\n>> \n>> I am not sure if Marc intends to continue this naming convention\n>> in future, or if it was just a mistake to create REL6_5 as a tag\n>> not a branch. I don't see a whole lot of use for the frozen tag\n>> myself...\n>> \n>> \t\t\tregards, tom lane\n>> \n>> ************\n>> \n",
"msg_date": "Wed, 25 Aug 1999 15:56:30 +0200",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] vacuum process size "
},
{
"msg_contents": "On Wed, 25 Aug 1999, Ansley, Michael wrote:\n\n> The reason for the tag as to be able to return to the 6.5 release source\n> code. It's production code, and should be accessible at least for the next\n> couple of months.\n> \n> Was a tag created for 6.5.1? The object is to be able to check out any\n> particular release, bugs and all, whenever we feel like it.\n\nNever did v6.5.1...but I have no problem with starting to do this on minor\nreleases to, since...\n\nCould someone try out the following patch? \n\nftp://ftp.postgresql.org/pub/postgresql-6.5-6.5.x.patch.gz\n\nIt is a patch against v6.5 that will bring it up to the most stable\nversion *if* it worked right. Reading through the patch, everything looks\ngood, but...\n\nIf this actually works, we just might have a way of saving ppl downloading\n5Meg files on minor releases, as the above is <100k :)\n\n\n > \n> MikeA\n> \n> \n> \n> >> \n> >> \n> >> Tatsuo Ishii <[email protected]> writes:\n> >> > Just for a confirmation: I see REL6_5_PATCHES and REL6_5 Tag in the\n> >> > CVS respository. I thought that REL6_5_PATCHES is the Tag \n> >> for the 6.5\n> >> > statble tree and would eventually become 6.5.2. If so, what is the\n> >> > REL6_5 Tag? Or I totally miss the point?\n> >> \n> >> Right, REL6_5_PATCHES is the 6.5.* branch. REL6_5 is just a tag ---\n> >> that is, it's effectively a frozen snapshot of the 6.5 release,\n> >> not an evolvable branch.\n> >> \n> >> I am not sure if Marc intends to continue this naming convention\n> >> in future, or if it was just a mistake to create REL6_5 as a tag\n> >> not a branch. I don't see a whole lot of use for the frozen tag\n> >> myself...\n> >> \n> >> \t\t\tregards, tom lane\n> >> \n> >> ************\n> >> \n> \n> ************\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Wed, 25 Aug 1999 11:19:21 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] vacuum process size "
},
{
"msg_contents": "\"Ansley, Michael\" <[email protected]> writes:\n> The reason for the tag as to be able to return to the 6.5 release source\n> code. It's production code, and should be accessible at least for the next\n> couple of months.\n> Was a tag created for 6.5.1? The object is to be able to check out any\n> particular release, bugs and all, whenever we feel like it.\n\nYou can always do a checkout by date if you need to capture the state of\nthe cvs tree at some particular past time. Frozen tags are just a (very\ninefficient) way of remembering specific past times that you think are\nlikely to be of interest.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 25 Aug 1999 10:32:31 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] vacuum process size "
},
{
"msg_contents": "On Wed, 25 Aug 1999, Tom Lane wrote:\n\n> \"Ansley, Michael\" <[email protected]> writes:\n> > The reason for the tag as to be able to return to the 6.5 release source\n> > code. It's production code, and should be accessible at least for the next\n> > couple of months.\n> > Was a tag created for 6.5.1? The object is to be able to check out any\n> > particular release, bugs and all, whenever we feel like it.\n> \n> You can always do a checkout by date if you need to capture the state of\n> the cvs tree at some particular past time. Frozen tags are just a (very\n> inefficient) way of remembering specific past times that you think are\n> likely to be of interest.\n\nOkay, you lost me on this one...why is it inefficient to tag the tree on\nthe date of a release vs trying to remember that date? *raised eyebrow*\nIn fact, vs trying to remember the exact date *and* time of a release?\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Wed, 25 Aug 1999 11:55:32 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] vacuum process size "
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> Okay, you lost me on this one...why is it inefficient to tag the tree on\n> the date of a release vs trying to remember that date? *raised eyebrow*\n> In fact, vs trying to remember the exact date *and* time of a release?\n\nBecause you make an entry \"REL6_5 => something or other\" in *every*\n*single* *file* of the CVS tree. It'd be more logical to store\n\"REL6_5 => 25 Aug 1999 11:55:32 -0300 (ADT)\", or some such, in one\nplace. Dunno why the CVS people didn't think of that.\n\nInefficient though it be, I agree it's better than trying to remember\nthe release timestamps manually.\n\nI'd suggest, though, that from here on out we use the short strings\nlike \"REL6_6\" for the branches, since people have much more need to\nrefer to the branches than specific release points. Tags for releases\ncould maybe be called \"REL6_6_0\", \"REL6_6_1\", etc.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 25 Aug 1999 11:02:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] vacuum process size "
},
{
"msg_contents": "The Hermit Hacker wrote:\n\n> Never did v6.5.1...but I have no problem with starting to do this on minor\n> releases to, since...\n> \n> Could someone try out the following patch?\n> \n> ftp://ftp.postgresql.org/pub/postgresql-6.5-6.5.x.patch.gz\n> \n> It is a patch against v6.5 that will bring it up to the most stable\n> version *if* it worked right. Reading through the patch, everything looks\n> good, but...\n\nGreat idea! It will be good practice - to have simply patches for\nminor versions. But this is definitely not a patch for 6.5.0, but\nsome other version. Unfortunately I lost .tar.gz 6.5.0 distribution,\nbut I am pretty sure that my sources were intact. There were a lot of \nhunks failed, and patched version failed to compile. \nIsn't is the right way to do a patch: take old distribution \nand simply make a diff against new tree? Seems that current \npatch isn't done that way. Included here is patch log file \nfor your reference.\n \n\n-- \nLeon.",
"msg_date": "Wed, 25 Aug 1999 21:35:10 +0500",
"msg_from": "Leon <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] vacuum process size"
}
] |
[
{
"msg_contents": "Yes, all that too ;-)\n>> \n>> On Wed, 25 Aug 1999, Ansley, Michael wrote:\n>> \n>> > The reason for the tag as to be able to return to the 6.5 release\nsource\n>> > code. It's production code, and should be accessible at least for the\nnext\n>> > couple of months.\n>> > \n>> > Was a tag created for 6.5.1? The object is to be able to check out any\n>> > particular release, bugs and all, whenever we feel like it.\n>> \n>> Never did v6.5.1...but I have no problem with starting to do \n>> this on minor\n>> releases to, since...\n>> \n>> Could someone try out the following patch? \n>> \n>> ftp://ftp.postgresql.org/pub/postgresql-6.5-6.5.x.patch.gz\n>> \n>> It is a patch against v6.5 that will bring it up to the most stable\n>> version *if* it worked right. Reading through the patch, \n>> everything looks\n>> good, but...\n>> \n>> If this actually works, we just might have a way of saving \n>> ppl downloading\n>> 5Meg files on minor releases, as the above is <100k :)\n>> \n>> \n>> > \n>> > MikeA\n>> > \n>> > \n>> > \n>> > >> \n>> > >> \n>> > >> Tatsuo Ishii <[email protected]> writes:\n>> > >> > Just for a confirmation: I see REL6_5_PATCHES and REL6_5 Tag in\nthe\n>> > >> > CVS respository. I thought that REL6_5_PATCHES is the Tag for the\n6.5\n>> > >> > statble tree and would eventually become 6.5.2. If so, what is the\n>> > >> > REL6_5 Tag? Or I totally miss the point?\n>> > >> \n>> > >> Right, REL6_5_PATCHES is the 6.5.* branch. REL6_5 is just a tag ---\n>> > >> that is, it's effectively a frozen snapshot of the 6.5 release,\n>> > >> not an evolvable branch.\n>> > >> \n>> > >> I am not sure if Marc intends to continue this naming convention\n>> > >> in future, or if it was just a mistake to create REL6_5 as a tag\n>> > >> not a branch. I don't see a whole lot of use for the frozen tag\n>> > >> myself...\n>> > >> \n>> > >> \t\t\tregards, tom lane\n>> > >> \n>> > >> ************\n>> > >> \n>> > \n>> > ************\n>> > \n>> \n>> Marc G. Fournier ICQ#7615664 \n>> IRC Nick: Scrappy\n>> Systems Administrator @ hub.org \n>> primary: [email protected] secondary: \n>> scrappy@{freebsd|postgresql}.org \n>> \n",
"msg_date": "Wed, 25 Aug 1999 16:17:57 +0200",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] vacuum process size "
}
] |
[
{
"msg_contents": "What I remember REL6_5 tags the version 6.5.1,\nso probably the tags should read REL6_5_0, REL6_5_1 ....\nto show that fact.\n\n> I like the frozen tag myself, since, in the future, if we need to create a\n> quick tar ball of what things looked like at that release (ie.\n> v6.5->v6.5.2 patch?), its easy to generate...\n> \nYes, very handy.\n\nAndreas\n",
"msg_date": "Wed, 25 Aug 1999 16:33:02 +0200",
"msg_from": "Zeugswetter Andreas IZ5 <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] vacuum process size "
}
] |
[
{
"msg_contents": "\nHello hackers,\n\nI'm sorry to clutter email but I've been delving into mail archives enough\nto know that a stuck spinlock is a problem but I haven't been able to\ndetermine what the solution is.\n\nI'm running postgresql 6.5 beta 1 on WindowsNT4.0 under the Cygwin\nenvironment. I've actually tried this on two different machines with two\ndifferent results. Since this is under a Cygwin environment I realize\nthis could very well be a cygwin problem but since I'm having an error\nmessage similar to what I found in the archives I thought I would try\nhere first.\n\n---------------------------------- \nOn a Dell notebook with 96MB of memory things seem to be okay but I do get\nthe following error message:\n\nError semaphore semaphore not equal 0\n\n\n----------------------------------\nOn a generic Intel-based PC with 96MB of memory I get this error:\n\nFATAL: s_lock(14190065) at spin.c:125, stuck spinlock. Aborting.\n\n\nI'm not very intimate with database code so I'm not sure if the semaphore\nerror will give me problems down the road. Right now it prints the error\nmessage out about 4 times but keeps on going with the test case I have.\n\nI can't get beyond the spin lock problem on the other machine.\n\nI'm not sure why I get two different behaviors on the two machines. I've\ndiff'ed all the configure and compilation outputs and everything seems to\nbe the same on these machines so I'm guessing this might be a system\nresource problem.\n\nAny help on this is greatly appreciated.\nThanks,\nLori Allen\n\n\n-----------------------------\nLori Allen\t\t\t\nSystem Administrator\t\nComputing Research Laboratory\nNew Mexico State University\nBox 30001, Dept. 3CRL\t\nLas Cruces, NM 88003\nemail: [email protected]\n\n",
"msg_date": "Wed, 25 Aug 1999 11:40:26 -0600 (MDT)",
"msg_from": "Lori Allen <[email protected]>",
"msg_from_op": true,
"msg_subject": "stuck spinlock"
}
] |
[
{
"msg_contents": "i try to dump records out , just two colums using\n\npsql emq_frontend -c \"select 1column,2column from table;\">twocolumnsout\n\nfrom a 1.2 mill db\n\nand it gives me either out of memory\nor backend close or\nsent d b4 t errors\nalso a setallocmem()\nsorry for psuedo quoting\n\nhas anyone ever seen this\n\n\n\n",
"msg_date": "Wed, 25 Aug 1999 22:55:41 +0000",
"msg_from": "Clayton Cottingham <[email protected]>",
"msg_from_op": true,
"msg_subject": "out of memory errors"
}
] |
[
{
"msg_contents": "This is three files IMHO useful for community\n\n1. SysV start script for postgres, also can be used under freebsd\n (see below)\n2. simple utility su_postgres, it make su to user postgres whether or not\n shell for it specified, is better than su for security reason \n3. configure/configure.in checking for ps style \n BSD ps -ax or SysV ps -e\n\n\nProbably, there is the reason to include it (after some improvement0 into\ndistribution.\n\n\n\n---\nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n* There will come soft rains ...\n\n============================================================================\n#ident \"@(#)/etc/init.d/pgsql 1.0 26/08/99 dms\"\n\nPG_HOME=\"/usr/local/pgsql\"\nPG_DATA=\"$PG_HOME/data\"\nUDS=\"/tmp/.s.PGSQL.5432\"\n\nPS=\"@PS@\"\nGREP=\"@GREP@\"\n\ncase \"$1\" in\n'start')\n # If no postgres run, remove UDS and start postgres.\n pid=\n set -- `$PS | $GREP postmaster | $GREP -v grep`\n [ $? -eq 0 ] && pid=$1\n\n if [ -z \"$pid\" ]; then\n rm -f \"$UDS\"\n $PG_HOME/bin/su_postgres \"$PG_HOME/bin/postmaster -D $PG_DATA\n-b $PG_HOME/bin/postgres -i -S -o -F &\"\n echo \"Postgres started\"\n else\n echo \"Postmaster already run with pid $pid\"\n fi\n ;;\n'stop')\n pid=\n set -- `$PS | $GREP postmaster | $GREP -v grep`\n [ $? -eq 0 ] && pid=$1\n\n if [ -z \"$pid\" ]; then\n echo \"Postgres not run\"\n else\n echo \"Stoping postmaster with pid $pid\" \n kill $pid\n fi\n\n ;;\n*)\n echo \"USAGE: $0 {start | stop}\"\n ;;\nesac",
"msg_date": "Thu, 26 Aug 1999 20:43:14 +0400 (MSD)",
"msg_from": "Dmitry Samersoff <[email protected]>",
"msg_from_op": true,
"msg_subject": "Files ..."
},
{
"msg_contents": "These look awful custom to me. Not sure how to integrate them.\n\n\n[Charset KOI8-R unsupported, filtering to ASCII...]\n> This is three files IMHO useful for community\n> \n> 1. SysV start script for postgres, also can be used under freebsd\n> (see below)\n> 2. simple utility su_postgres, it make su to user postgres whether or not\n> shell for it specified, is better than su for security reason \n> 3. configure/configure.in checking for ps style \n> BSD ps -ax or SysV ps -e\n> \n> \n> Probably, there is the reason to include it (after some improvement0 into\n> distribution.\n> \n> \n> \n> ---\n> Dmitry Samersoff, [email protected], ICQ:3161705\n> http://devnull.wplus.net\n> * There will come soft rains ...\n> \n> ============================================================================\n> #ident \"@(#)/etc/init.d/pgsql 1.0 26/08/99 dms\"\n> \n> PG_HOME=\"/usr/local/pgsql\"\n> PG_DATA=\"$PG_HOME/data\"\n> UDS=\"/tmp/.s.PGSQL.5432\"\n> \n> PS=\"@PS@\"\n> GREP=\"@GREP@\"\n> \n> case \"$1\" in\n> 'start')\n> # If no postgres run, remove UDS and start postgres.\n> pid=\n> set -- `$PS | $GREP postmaster | $GREP -v grep`\n> [ $? -eq 0 ] && pid=$1\n> \n> if [ -z \"$pid\" ]; then\n> rm -f \"$UDS\"\n> $PG_HOME/bin/su_postgres \"$PG_HOME/bin/postmaster -D $PG_DATA\n> -b $PG_HOME/bin/postgres -i -S -o -F &\"\n> echo \"Postgres started\"\n> else\n> echo \"Postmaster already run with pid $pid\"\n> fi\n> ;;\n> 'stop')\n> pid=\n> set -- `$PS | $GREP postmaster | $GREP -v grep`\n> [ $? -eq 0 ] && pid=$1\n> \n> if [ -z \"$pid\" ]; then\n> echo \"Postgres not run\"\n> else\n> echo \"Stoping postmaster with pid $pid\" \n> kill $pid\n> fi\n> \n> ;;\n> *)\n> echo \"USAGE: $0 {start | stop}\"\n> ;;\n> esac\n> \n> \nContent-Description: pg_run.tar.gz\n\n[Attachment, skipping...]\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 27 Sep 1999 15:41:43 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Files ..."
}
] |
[
{
"msg_contents": "I have written a program to convert HTML to troff, so I am attaching a\nPDF version of our FAQ.\n\nThe output looks pretty good, and required no hand-tuning.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026",
"msg_date": "Thu, 26 Aug 1999 14:39:07 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "PG FAQ in PDF format"
}
] |
[
{
"msg_contents": "PGSQL 6.5.0, FreeBSD 3.2, Intel Pentium II 366MHz, 128 MB\n\nThe table below was filled with about 1,000,000 records. Then a bunch of\nindexes was created and a VACUUM executed.\n\nI was under impression that when max(<primary key>) is called, it should\njust take the value from the index. I believe it should not do any kind of\nscan. But, in fact, it scans the table.\n\nselect max(id) from ItemBars\n\ntakes well over 10 seconds to complete. It's almost instantaneous on MSSQL\nor on Interbase. Something is clearly wrong. MAX() on the primary key should\nnot take so much time. Am I doing something wrong? Is it a known bug? If\nit's a bug then it's a show stopper for us. How can I help fixing it?\n\nCREATE TABLE ItemBars (\n ID SERIAL PRIMARY KEY ,\n ItemID INT NOT NULL ,\n Interv INT NOT NULL ,\n StaTS DATETIME NOT NULL ,\n EndTS DATETIME NOT NULL ,\n IsActive BOOL NOT NULL ,\n Opn FLOAT(7) NOT NULL ,\n High FLOAT(7) NOT NULL ,\n Low FLOAT(7) NOT NULL ,\n Cls FLOAT(7) NOT NULL ,\n Vol INT NOT NULL\n);\n\nGene Sokolov.\n\n\n",
"msg_date": "Fri, 27 Aug 1999 18:45:47 +0400",
"msg_from": "\"Gene Sokolov\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance of MIN() and MAX()"
},
{
"msg_contents": "\"Gene Sokolov\" <[email protected]> writes:\n> I was under impression that when max(<primary key>) is called, it should\n> just take the value from the index. I believe it should not do any kind of\n> scan. But, in fact, it scans the table.\n\nYou are mistaken. Postgres has no idea that min() and max() have any\nsemantics that have anything to do with indexes. I would like to see\nthat optimization myself, but it's not a particularly easy thing to add\ngiven the system structure and the emphasis on datatype extensibility.\n\n> it's a show stopper for us.\n\nYou might be able to hack around the issue with queries like\n\n\tSELECT x FROM table ORDER BY x LIMIT 1;\n\n\tSELECT x FROM table ORDER BY x DESC LIMIT 1;\n\nto get the min and max respectively. The current 6.6 code will\nimplement these with indexscans, although I think 6.5 would not\nunless given an additional cue, like a WHERE clause involving x...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 27 Aug 1999 11:29:44 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Performance of MIN() and MAX() "
},
{
"msg_contents": "From: Tom Lane <[email protected]>\n> > I was under impression that when max(<primary key>) is called, it should\n> > just take the value from the index. I believe it should not do any kind\nof\n> > scan. But, in fact, it scans the table.\n>\n> You are mistaken. Postgres has no idea that min() and max() have any\n> semantics that have anything to do with indexes. I would like to see\n> that optimization myself, but it's not a particularly easy thing to add\n> given the system structure and the emphasis on datatype extensibility.\n>\n> > it's a show stopper for us.\n>\n> You might be able to hack around the issue with queries like\n>\n> SELECT x FROM table ORDER BY x LIMIT 1;\n> SELECT x FROM table ORDER BY x DESC LIMIT 1;\n\nIt is a real show stopper. No luck completely, the indexes are ignored:\n\n*************************************************************\n[PostgreSQL 6.5.0 on i386-unknown-freebsd3.2, compiled by gcc 2.7.2.1]\n\nbars=> create index bars_id on itemsbars(id);\nCREATE\nbars=> explain select id from itemsbars order by id limit 1;\nNOTICE: QUERY PLAN:\n\nSort (cost=44404.41 rows=969073 width=4)\n -> Seq Scan on itemsbars (cost=44404.41 rows=969073 width=4)\n\nEXPLAIN\nbars=> \\d itemsbars\nTable = itemsbars\n+--------------------+----------------------------------+-------+\n| Field | Type | Length|\n+--------------------+----------------------------------+-------+\n| id | int4 not null default nextval('\" | 4 |\n| itemid | int4 not null | 4 |\n| interv | int4 not null | 4 |\n| stats | datetime not null | 8 |\n| endts | datetime not null | 8 |\n| isactive | bool not null | 1 |\n| opn | float8 not null | 8 |\n| high | float8 not null | 8 |\n| low | float8 not null | 8 |\n| cls | float8 not null | 8 |\n| vol | int4 not null | 4 |\n+--------------------+----------------------------------+-------+\nIndices: bars_complex2\n bars_endts\n bars_id\n bars_interv\n bars_itemid\n bars_stats\n itemsbars_pkey\n\n\n\n\n\n",
"msg_date": "Tue, 14 Sep 1999 15:44:18 +0400",
"msg_from": "\"Gene Sokolov\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Performance of MIN() and MAX() "
},
{
"msg_contents": "\"Gene Sokolov\" <[email protected]> writes:\n> From: Tom Lane <[email protected]>\n>> You might be able to hack around the issue with queries like\n>> SELECT x FROM table ORDER BY x LIMIT 1;\n>> SELECT x FROM table ORDER BY x DESC LIMIT 1;\n\n> It is a real show stopper. No luck completely, the indexes are ignored:\n\n> bars=> explain select id from itemsbars order by id limit 1;\n> NOTICE: QUERY PLAN:\n> Sort (cost=44404.41 rows=969073 width=4)\n> -> Seq Scan on itemsbars (cost=44404.41 rows=969073 width=4)\n\nYes, you missed my comment that 6.5.* needs some help or it won't\nconsider an index scan at all. This gives the right sort of plan:\n\nregression=> explain select id from itemsbars where id > 0 order by id limit 1;\nNOTICE: QUERY PLAN:\nIndex Scan using itemsbars_id_key on itemsbars (cost=21.67 rows=334 width=4)\n\nThe WHERE clause can be chosen so that it won't actually exclude\nanything, but there has to be a WHERE clause that looks useful with\nan index or an indexscan plan won't even get generated. (Also,\nthe DESC case doesn't work in 6.5.*, unless you apply the backwards-\nindex-scan patch that Hiroshi posted a few weeks ago.)\n\nThis is fixed in current sources, BTW:\n\nregression=> explain select id from itemsbars order by id limit 1;\nNOTICE: QUERY PLAN:\nIndex Scan using bars_id on itemsbars (cost=62.00 rows=1000 width=4)\nregression=> explain select id from itemsbars order by id desc ;\nNOTICE: QUERY PLAN:\nIndex Scan Backward using bars_id on itemsbars (cost=62.00 rows=1000 width=4)\n\nalthough we still need to do some rejiggering of the cost estimation\nrules; current sources are probably *too* eager to use an indexscan.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 14 Sep 1999 10:25:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Performance of MIN() and MAX() "
},
{
"msg_contents": "\nTom Lane wrote in message <[email protected]>...\n\n>although we still need to do some rejiggering of the cost estimation\n>rules; current sources are probably *too* eager to use an indexscan.\n>\n\n\n I did some testing today on a 1.6 million row table of random integers\nin the range of 0..32767. Using explain I could get a \"select max(f1)...\"\ndown to a cost of about 30000 using a where clause of \"f1 > 0\"...\n\n After running the queries I achieved the following results (dual P133,\nw/ 128 megs ram, IDE)...\n\n select max(f1) from t1 [68 seconds] [explain cost 60644.00]\n select max(f1) from t1 where f1 > 0 [148 seconds] [explain cost\n30416.67]\n\n Knowing my data does have at least one value above 30000 I can apply a\nbetter heuristic other than f1 > 0\n\n select max(f1) from t1 where f1 > 30000 [12.43 seconds] [explain cost\n30416.67]\n\n Until more agg. function optimizations are implemented, programmers\nmight have to use the old melon to speed things up.\n\n Damond\n\n\n\n\n",
"msg_date": "Wed, 15 Sep 1999 01:40:47 GMT",
"msg_from": "\"Damond Walker\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Performance of MIN() and MAX()"
}
] |
[
{
"msg_contents": "I posted this message in pgsql-novice, but got no answer.\nI dare forward it to pgsql-hackers, though I fear it could be a misuse\nof that list.\n\nCould you post an answer on novice since I've not subscribed to hackers.\n(Too high level for me, I think)\n\nSpirou wrote:\n> \n> I can't find my classes anymore (cf below ...) !\n> \n> I send you a few SQL commands, you'll understand what I mean :\n> \n> ---------------------------\n> INSERT INTO pg_group VALUES ('http_user')\n> CREATE USER \"www-data\" IN GROUP http_user;\n> CREATE USER nobody IN GROUP http_user;\n> \n> mybase=> select * from pg_group;\n> groname |grosysid|grolist\n> ---------+--------+-------\n> http_user| |\n> (1 row)\n> \n> mybase=> \\z\n> NOTICE: get_groname: group 0 not found\n> pqReadData() -- backend closed the channel unexpectedly.\n> This probably means the backend terminated abnormally\n> before or while processing the request.\n> \n> mybase=> select * from pg_class;\n> NOTICE: get_groname: group 0 not found\n> pqReadData() -- backend closed the channel unexpectedly.\n> This probably means the backend terminated abnormally\n> before or while processing the request.\n> We have lost the connection to the backend, so further processing is\n> impossible. Terminating.\n> \n> ----------------------------\n> \n> Perhaps is it worth to mention I had created a first group,\n> whose name was 'http' then I deleted this one and created\n> the new one called 'http_user'.\n> \n> Is it a bug or a bad use ?\n> Could you tell what you think of it ?\n> \n> Thanks\n> \n\n-- \nSpirou\n",
"msg_date": "Sat, 28 Aug 1999 22:26:13 +0200",
"msg_from": "Spirou <[email protected]>",
"msg_from_op": true,
"msg_subject": "[Fwd: bug ? get_groname: group 0 not found]"
},
{
"msg_contents": "Spirou <[email protected]> writes:\n>> I can't find my classes anymore (cf below ...) !\n>> mybase=> \\z\n>> NOTICE: get_groname: group 0 not found\n>> pqReadData() -- backend closed the channel unexpectedly.\n>> This probably means the backend terminated abnormally\n>> before or while processing the request.\n\nYup, that's a bug alright. I recall having heard of these symptoms\nbefore --- try searching the pghackers archives for mention of\n'get_groname'. I think this may be fixed in the latest release\n(what version are you running, anyway?)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 28 Aug 1999 20:49:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [Fwd: bug ? get_groname: group 0 not found] "
}
] |
[
{
"msg_contents": "The syntax needed for defining functions is quite annoying because of the\nneeded quoting. Example:\n\nCREATE FUNCTION bla (int4) RETURNS char AS '\n BEGIN\n RETURN ''Two quotes needed''\n END;\n' LANGUAGE 'plpgsql';\n\nHow about allowing something like in shell here-documents:\n\nCREATE FUNCTION bla (int4) RETURNS char LANGUAGE 'plpgsql' UNTIL '__EOF__';\n BEGIN\n RETURN 'Only one quote needed'\n END;\n__EOF__\n\nOr similar. The __EOF__ is of course totally arbitrary and every string can\nbe used, so this will work with any language used.\n\njochen\n\n",
"msg_date": "Sat, 28 Aug 1999 23:19:30 +0200",
"msg_from": "Jochen Topf <[email protected]>",
"msg_from_op": true,
"msg_subject": "Quoting in stored procedures"
}
] |
[
{
"msg_contents": "The syntax needed for defining functions is quite annoying because of the\nneeded quoting. Example:\n\nCREATE FUNCTION bla (int4) RETURNS char AS '\n BEGIN\n RETURN ''Two quotes needed''\n END;\n' LANGUAGE 'plpgsql';\n \nHow about allowing something like in shell here-documents:\n \nCREATE FUNCTION bla (int4) RETURNS char LANGUAGE 'plpgsql' UNTIL '__EOF__';\n BEGIN\n RETURN 'Only one quote needed'\n END;\n__EOF__\n\nOr similar. The __EOF__ is of course totally arbitrary and every string can\nbe used, so this will work with any language used.\n\njochen\n",
"msg_date": "Sat, 28 Aug 1999 17:27:46 -0400 (EDT)",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Quoting in stored procedures"
},
{
"msg_contents": "[email protected] writes:\n> The syntax needed for defining functions is quite annoying because of the\n> needed quoting.\n\nActually, considering that you probably don't want to be typing (and\nretyping) procedure definitions right into psql, it seems to me that\nwhat's really wanted is a procedure editor. Once you've got that,\nit could mask annoying little details like doubling quotes.\n\nAnother thing that I'd like to see along these same lines is an \"UPDATE\nFUNCTION\" command that allows the stored text for a PL function\ndefinition to be replaced, without dropping/recreating the function\ndefinition. Drop/recreate changes the function OID, thereby breaking\ntriggers that use it, and is generally a real pain when you're trying\nto debug a bit of plpgsql...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 28 Aug 1999 20:54:23 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Quoting in stored procedures "
}
] |
[
{
"msg_contents": "Hi.\n\nThe entries entered in pg_shadow haven't ever worked for me. I've tried a\nnumber of times without success. If I update a user in there and set a\npassword for them:\npostgres=> select * from pg_shadow;\nusename |usesysid|usecreatedb|usetrace|usesuper|usecatupd|passwd|valuntil \n---------+--------+-----------+--------+--------+---------+-------+----------------------------\npostgres | 100|t |t |t |t | |Sat Jan31 01:00:00 2037 EST\nuser1 | 1001|f |t |f |t | | \nequipment| 1004|f |t |f |t | MYPASS| \n(3 rows)\n\nThis example assumes I've set my password to 'MYPASS'.\nNow I change pg_hba.conf to have a:\nhost equipment 123.123.123.123 255.255.0.0 password \n\nAssuming my IP is 123.123.123.123 and the database I need to connect to is\ncalled equipment and the user is of course equipment...\n\nI've restarted the server and...\n\nNow I run off to my remote machine and try to connect...\n\npsql -u -h test.mypostgresserverdomain.com equipment\nUsername: equipment\nPassword: \n\nConnection to database 'equipment' failed.\nPassword authentication failed for user 'equipment'\n\nAny ideas on what the heck I might be forgetting to do or not doing\nproperly?\n\nI'm starting postgres up as:\n su -l postgres -c 'exec /usr/local/pgsql/bin/postmaster\n-D/dr/raid0/postgres/pgdata -d 1 -i -o \"-E -F -S 16384 -o\n/usr/local/pgsql/home/logfile\" -s >> /usr/local/pgsql/home/errlog 2>&1\n/usr/local/pgsql/home/errlog1 &' \n\nIn the server's errlog file I find:\nPassword authentication failed for user 'equipment'\n\nIt would be really nice if I'd see something like:\nSat Aug 28 21:43:39 EDT 1999 - Password authentication failed from\n123.123.123.123 on database 'equipment'\n\n-Michael\n\n",
"msg_date": "Sat, 28 Aug 1999 21:40:29 -0300 (ADT)",
"msg_from": "Michael Richards <[email protected]>",
"msg_from_op": true,
"msg_subject": "entries in pg_shadow"
},
{
"msg_contents": "Michael Richards <[email protected]> writes:\n> The entries entered in pg_shadow haven't ever worked for me. I've tried a\n> number of times without success. If I update a user in there and set a\n> password for them:\n\nIIRC, the only way to set a password that actually works is ALTER USER.\n\nThe reason direct SQL hacking on pg_shadow doesn't work is that\npg_shadow isn't what the postmaster looks at (the PM itself can't do\ndatabase operations without getting into possible deadlock situations).\nThere's a flat text file somewhere that contains the Real Info. ALTER\nUSER and friends know to rewrite the flat file after updating pg_shadow.\n\nThis is documented somewhere, I think, but not nearly prominently\nenough...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 28 Aug 1999 21:00:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] entries in pg_shadow "
},
{
"msg_contents": "> Hi.\n> \n> The entries entered in pg_shadow haven't ever worked for me. I've tried a\n> number of times without success. If I update a user in there and set a\n> password for them:\n> postgres=> select * from pg_shadow;\n> usename |usesysid|usecreatedb|usetrace|usesuper|usecatupd|passwd|valuntil \n> ---------+--------+-----------+--------+--------+---------+-------+----------------------------\n> postgres | 100|t |t |t |t | |Sat Jan31 01:00:00 2037 EST\n> user1 | 1001|f |t |f |t | | \n> equipment| 1004|f |t |f |t | MYPASS| \n> (3 rows)\n> \n> This example assumes I've set my password to 'MYPASS'.\n> Now I change pg_hba.conf to have a:\n> host equipment 123.123.123.123 255.255.0.0 password \n> \n> Assuming my IP is 123.123.123.123 and the database I need to connect to is\n> called equipment and the user is of course equipment...\n> \n> I've restarted the server and...\n\nYou may need to restart the postmaster, or do a dummy change to a user. \nThere is a flat file that contains the pg_shadow contents that gets\nupdated with normal USER commands, but SQL commands don't update it. It\nis on our TODO list.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 1 Sep 1999 19:23:28 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] entries in pg_shadow"
}
] |
[
{
"msg_contents": "Hello all,\n\ntrying the new version of PgAccess (hope tomorrow will be available) I\ndiscovered that clustering a table on an index loose also the NOT NULL\nattributes from the original table. I know that the permissions are also\nlost but didn't read anywhere about the NOT NULL.\n\nBest regards,\n\nConstantin Teodorescu\nFLEX Consulting Braila, ROMANIA\n",
"msg_date": "Sun, 29 Aug 1999 07:36:30 +0000",
"msg_from": "Constantin Teodorescu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Cluster on (index-name) loose NOT NULL properties"
}
] |
[
{
"msg_contents": "Hello,\n\nis there any rough estimate of when Postgres ANSI SQL compliance\nis planned to be implemented? Will it be in 6.6, 6.7, or later?\nAny info would be appreciated.\n\nNejc\n\n-- \n''Share and Enjoy.''\n\n",
"msg_date": "Sun, 29 Aug 1999 14:43:16 +0200",
"msg_from": "Jernej Zajc <[email protected]>",
"msg_from_op": true,
"msg_subject": "ANSI SQL compliance"
},
{
"msg_contents": "Jernej Zajc <[email protected]> writes:\n> is there any rough estimate of when Postgres ANSI SQL compliance\n> is planned to be implemented? Will it be in 6.6, 6.7, or later?\n\nWhat do you define as \"ANSI compliance\"?\n\nThere isn't any master plan that says \"we will have every single\nSQL92 feature implemented by release N\". (In fact, as far as I\ncan tell there's no master plan at all ;-).) If there are particular\nfeatures you're in need of, mentioning what they are might help\npush up their priorities in the minds of the developers.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 29 Aug 1999 12:48:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ANSI SQL compliance "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Jernej Zajc <[email protected]> writes:\n> > is there any rough estimate of when Postgres ANSI SQL compliance\n> > is planned to be implemented? Will it be in 6.6, 6.7, or later?\n> \n> What do you define as \"ANSI compliance\"?\n\nWell you asked the wrong person - people at ANSI should know\nbetter :-)\nI meant adherence to the ANSI SQL standard.\n\n> There isn't any master plan that says \"we will have every single\n> SQL92 feature implemented by release N\". (In fact, as far as I\n> can tell there's no master plan at all ;-).) If there are particular\n> features you're in need of, mentioning what they are might help\n> push up their priorities in the minds of the developers.\n\nI'm not interested in any particular feature yet. I'm in\nprocess of migrating to some serious Unix RDBMS as need\nforces me to and a friend of mine advised Postgres as\ntechnicaly brilliant. I asked merely of curiosity, not need.\n\nNejc\n\n-- \n''Share and Enjoy.''\n\n",
"msg_date": "Sun, 29 Aug 1999 21:49:01 +0200",
"msg_from": "Jernej Zajc <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] ANSI SQL compliance"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Jernej Zajc <[email protected]> writes:\n> > is there any rough estimate of when Postgres ANSI SQL compliance\n> > is planned to be implemented? Will it be in 6.6, 6.7, or later?\n> \n> What do you define as \"ANSI compliance\"?\n\nMaybe he means stripping of all extensions to ANSI :)\n\n> There isn't any master plan that says \"we will have every single\n> SQL92 feature implemented by release N\". (In fact, as far as I\n> can tell there's no master plan at all ;-).)\n\nAFAIK there is no single database, commercial or free, that has \nevery single feature of SQL92 implemented.\n\n-----------\nHannu\n",
"msg_date": "Sun, 29 Aug 1999 23:23:54 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ANSI SQL compliance"
},
{
"msg_contents": "> > > is there any rough estimate of when Postgres ANSI SQL compliance\n> > > is planned to be implemented? Will it be in 6.6, 6.7, or later?\n> > There isn't any master plan that says \"we will have every single\n> > SQL92 feature implemented by release N\". (In fact, as far as I\n> > can tell there's no master plan at all ;-).)\n> AFAIK there is no single database, commercial or free, that has\n> every single feature of SQL92 implemented.\n\nMost commercial databases claim SQL92 compliance based on compliance\nwith the simplest, lowest level defined in the standard. We have many\nfeatures of the two higher levels, as well as strong compliance with\nthe lowest level. We also have significant extensions, some of which are\nnow appearing in the SQL3 draft standard (and pioneered by Postgres).\nWe claim to be an \"extended subset\" of the SQL92 standard, which seems\naccurate.\n\nAs was suggested earlier, you must be more specific about which\nfeatures you feel are missing. Some may be coming soon, some may be so\nill-conceived that we would be foolish to damage Postgres by\nimplementing them, and some may be reasonable to do but farther off in\ntime.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 31 Aug 1999 06:46:04 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ANSI SQL compliance"
}
] |
[
{
"msg_contents": "Are we going to release 6.5.2? If yes, then when?\n---\nTatsuo Ishii\n",
"msg_date": "Sun, 29 Aug 1999 21:45:24 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 6.5.2"
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> Are we going to release 6.5.2? If yes, then when?\n\nMarc proposed Sept 1 (back on 8/15), and there were no objections...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 29 Aug 1999 12:50:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.5.2 "
},
{
"msg_contents": "On Sun, 29 Aug 1999, Tom Lane wrote:\n\n> Tatsuo Ishii <[email protected]> writes:\n> > Are we going to release 6.5.2? If yes, then when?\n> \n> Marc proposed Sept 1 (back on 8/15), and there were no objections...\n\nAnd its still the date I'm planning around...So Wednesday this week :)\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sun, 29 Aug 1999 16:39:29 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.5.2 "
},
{
"msg_contents": ">On Sun, 29 Aug 1999, Tom Lane wrote:\n>\n>> Tatsuo Ishii <[email protected]> writes:\n>> > Are we going to release 6.5.2? If yes, then when?\n>> \n>> Marc proposed Sept 1 (back on 8/15), and there were no objections...\n>\n>And its still the date I'm planning around...So Wednesday this week :)\n\nMarc,\n\nCould you make a tarball of 6.5.2-beta or 6.5.2-release-candidate or\nwhatever so that volunteers could get it by anon ftp for testing?\n--\nTatsuo Ishii\n",
"msg_date": "Mon, 30 Aug 1999 11:02:59 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.5.2 "
},
{
"msg_contents": "\nTry her out...just put up a release candidate now...\n\nOn Mon, 30 Aug 1999, Tatsuo Ishii wrote:\n\n> >On Sun, 29 Aug 1999, Tom Lane wrote:\n> >\n> >> Tatsuo Ishii <[email protected]> writes:\n> >> > Are we going to release 6.5.2? If yes, then when?\n> >> \n> >> Marc proposed Sept 1 (back on 8/15), and there were no objections...\n> >\n> >And its still the date I'm planning around...So Wednesday this week :)\n> \n> Marc,\n> \n> Could you make a tarball of 6.5.2-beta or 6.5.2-release-candidate or\n> whatever so that volunteers could get it by anon ftp for testing?\n> --\n> Tatsuo Ishii\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 30 Aug 1999 14:03:09 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.5.2 "
},
{
"msg_contents": "> On Sun, 29 Aug 1999, Tom Lane wrote:\n> \n> > Tatsuo Ishii <[email protected]> writes:\n> > > Are we going to release 6.5.2? If yes, then when?\n> > \n> > Marc proposed Sept 1 (back on 8/15), and there were no objections...\n> \n> And its still the date I'm planning around...So Wednesday this week :)\n> \n\nMay I ask that the patches I submitted two months ago for 6.5.0 are applied\nat least to 6.5.2?\n\nHere is the 6.5.1 version of my patches.\n \n-- \nMassimo Dal Zotto\n\n+----------------------------------------------------------------------+\n| Massimo Dal Zotto email: [email protected] |\n| Via Marconi, 141 phone: ++39-0461534251 |\n| 38057 Pergine Valsugana (TN) www: http://www.cs.unitn.it/~dz/ |\n| Italy pgp: finger [email protected] |\n+----------------------------------------------------------------------+",
"msg_date": "Mon, 30 Aug 1999 22:42:37 +0200 (MEST)",
"msg_from": "Massimo Dal Zotto <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.5.2"
},
{
"msg_contents": "Massimo Dal Zotto <[email protected]> writes:\n> May I ask that the patches I submitted two months ago for 6.5.0 are applied\n> at least to 6.5.2?\n> Here is the 6.5.1 version of my patches.\n\nI don't much care for QueryLimit (we got rid of that for a reason!)\nnor for the FREE_TUPLE_MEMORY patch, but the rest of this looks safe\nenough... but are we in the business of adding features to 6.5.*,\neven little ones? Maybe it should only go in current.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 31 Aug 1999 13:51:11 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.5.2 "
},
{
"msg_contents": "On Tue, 31 Aug 1999, Tom Lane wrote:\n\n> Massimo Dal Zotto <[email protected]> writes:\n> > May I ask that the patches I submitted two months ago for 6.5.0 are applied\n> > at least to 6.5.2?\n> > Here is the 6.5.1 version of my patches.\n> \n> I don't much care for QueryLimit (we got rid of that for a reason!)\n> nor for the FREE_TUPLE_MEMORY patch, but the rest of this looks safe\n> enough... but are we in the business of adding features to 6.5.*,\n> even little ones? Maybe it should only go in current.\n\n6.5.x is supposed to be *only* fixes, no new features...but I'm curious as\nto why these never got into v6.5.0 in the first place...\n\nMassimo? \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 31 Aug 1999 16:51:01 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.5.2 "
},
{
"msg_contents": "> Massimo Dal Zotto <[email protected]> writes:\n> > May I ask that the patches I submitted two months ago for 6.5.0 are applied\n> > at least to 6.5.2?\n> > Here is the 6.5.1 version of my patches.\n> \n> I don't much care for QueryLimit (we got rid of that for a reason!)\n> nor for the FREE_TUPLE_MEMORY patch, but the rest of this looks safe\n> enough... but are we in the business of adding features to 6.5.*,\n> even little ones? Maybe it should only go in current.\n\nThe QueryLimit has been reintroduced because it can be used to set a global\ndefault limit for all queries instead of hacking manually some hundred\nqueries. I admit that the LIMIT...OFFSET is a cleaner way to do it, but\nhaving the possibility to specify a global default doesn't hurt. The default\ncan always be overridden with an explicit LIMIT on a single query.\nThe patch uses the same mechanism as the LIMIT clause, so it's safe. It is\nonly a different way to set the limit value.\n\nThe FREE_TUPLE_MEMORY is a temporary fix to avoid huge memory growth in\nsome common situations, until the memory management is rewritten in a\nbetter way. Being a little conditional code in a very few places of the\nsources it can be safely applied and left disabled. Those few people who\nneed the feature, like me, can enable it at their own risk. I admit that\nthis is a kludge but the alternative is in some cases a machine with some\ngigabytes of memory.\n\n-- \nMassimo Dal Zotto\n\n+----------------------------------------------------------------------+\n| Massimo Dal Zotto email: [email protected] |\n| Via Marconi, 141 phone: ++39-0461534251 |\n| 38057 Pergine Valsugana (TN) www: http://www.cs.unitn.it/~dz/ |\n| Italy pgp: finger [email protected] |\n+----------------------------------------------------------------------+\n",
"msg_date": "Tue, 31 Aug 1999 22:36:41 +0200 (MEST)",
"msg_from": "Massimo Dal Zotto <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.5.2"
},
{
"msg_contents": "> On Tue, 31 Aug 1999, Tom Lane wrote:\n> \n> > Massimo Dal Zotto <[email protected]> writes:\n> > > May I ask that the patches I submitted two months ago for 6.5.0 are applied\n> > > at least to 6.5.2?\n> > > Here is the 6.5.1 version of my patches.\n> > \n> > I don't much care for QueryLimit (we got rid of that for a reason!)\n> > nor for the FREE_TUPLE_MEMORY patch, but the rest of this looks safe\n> > enough... but are we in the business of adding features to 6.5.*,\n> > even little ones? Maybe it should only go in current.\n> \n> 6.5.x is supposed to be *only* fixes, no new features...but I'm curious as\n> to why these never got into v6.5.0 in the first place...\n\nBecause they were submitted a few days before the release date. Bruce told\nme they would go in 6.5.1 but apparently he has forgotten them. I hope to see\nthem in 6.5.2.\n\n-- \nMassimo Dal Zotto\n\n+----------------------------------------------------------------------+\n| Massimo Dal Zotto email: [email protected] |\n| Via Marconi, 141 phone: ++39-0461534251 |\n| 38057 Pergine Valsugana (TN) www: http://www.cs.unitn.it/~dz/ |\n| Italy pgp: finger [email protected] |\n+----------------------------------------------------------------------+\n",
"msg_date": "Tue, 31 Aug 1999 22:43:53 +0200 (MEST)",
"msg_from": "Massimo Dal Zotto <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.5.2"
},
{
"msg_contents": "Massimo Dal Zotto <[email protected]> writes:\n>> I don't much care for QueryLimit (we got rid of that for a reason!)\n\n> The QueryLimit has been reintroduced because it can be used to set a global\n> default limit for all queries instead of hacking manually some hundred\n> queries. I admit that the LIMIT...OFFSET is a cleaner way to do it, but\n> having the possibility to specify a global default doesn't hurt.\n\nYes it does: it creates the possibility of breaking (returning\nincomplete answers to) queries inside rules, triggers, procedures, etc.\nIn the worst case it could be used by an unprivileged user to subvert\nsecurity checks built into a database by means of rules.\n\nI think this \"feature\" is far too dangerous to put into the general\ndistribution.\n\nWhat would be reasonably safe is a limit that applies *only* to data\nbeing returned to the interactive user, but that would be a different\nmechanism than the LIMIT clause; I'm not sure where it would need to\nbe implemented.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 31 Aug 1999 17:52:50 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.5.2 "
},
{
"msg_contents": "> >On Sun, 29 Aug 1999, Tom Lane wrote:\n> >\n> >> Tatsuo Ishii <[email protected]> writes:\n> >> > Are we going to release 6.5.2? If yes, then when?\n> >> \n> >> Marc proposed Sept 1 (back on 8/15), and there were no objections...\n> >\n> >And its still the date I'm planning around...So Wednesday this week :)\n> \n> Marc,\n> \n> Could you make a tarball of 6.5.2-beta or 6.5.2-release-candidate or\n> whatever so that volunteers could get it by anon ftp for testing?\n\nI am now back, and have not yet branded the release as 6.5.2. I can do\nit tonight.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 1 Sep 1999 19:30:26 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.5.2"
},
{
"msg_contents": "> On Tue, 31 Aug 1999, Tom Lane wrote:\n> \n> > Massimo Dal Zotto <[email protected]> writes:\n> > > May I ask that the patches I submitted two months ago for 6.5.0 are applied\n> > > at least to 6.5.2?\n> > > Here is the 6.5.1 version of my patches.\n> > \n> > I don't much care for QueryLimit (we got rid of that for a reason!)\n> > nor for the FREE_TUPLE_MEMORY patch, but the rest of this looks safe\n> > enough... but are we in the business of adding features to 6.5.*,\n> > even little ones? Maybe it should only go in current.\n> \n> 6.5.x is supposed to be *only* fixes, no new features...but I'm curious as\n> to why these never got into v6.5.0 in the first place...\n> \n\nI applied the safe ones, like the copy.c one. People objected to most\nof the others, for reasons I have forgotten.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 1 Sep 1999 19:56:06 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.5.2"
},
{
"msg_contents": "> > On Tue, 31 Aug 1999, Tom Lane wrote:\n> > \n> > > Massimo Dal Zotto <[email protected]> writes:\n> > > > May I ask that the patches I submitted two months ago for 6.5.0 are applied\n> > > > at least to 6.5.2?\n> > > > Here is the 6.5.1 version of my patches.\n> > > \n> > > I don't much care for QueryLimit (we got rid of that for a reason!)\n> > > nor for the FREE_TUPLE_MEMORY patch, but the rest of this looks safe\n> > > enough... but are we in the business of adding features to 6.5.*,\n> > > even little ones? Maybe it should only go in current.\n> > \n> > 6.5.x is supposed to be *only* fixes, no new features...but I'm curious as\n> > to why these never got into v6.5.0 in the first place...\n> \n> Because they were submitted a few days before the realease date. Bruce told\n> me they would go in 6.5.1 but apparently he has forgot them. I hope to see\n> them in 6.5.2.\n> \n\nOh, I did? I forgot. Let me try now.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 1 Sep 1999 19:57:18 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.5.2"
},
{
"msg_contents": "> > On Tue, 31 Aug 1999, Tom Lane wrote:\n> > \n> > > Massimo Dal Zotto <[email protected]> writes:\n> > > > May I ask that the patches I submitted two months ago for 6.5.0 are applied\n> > > > at least to 6.5.2?\n> > > > Here is the 6.5.1 version of my patches.\n> > > \n> > > I don't much care for QueryLimit (we got rid of that for a reason!)\n> > > nor for the FREE_TUPLE_MEMORY patch, but the rest of this looks safe\n> > > enough... but are we in the business of adding features to 6.5.*,\n> > > even little ones? Maybe it should only go in current.\n> > \n> > 6.5.x is supposed to be *only* fixes, no new features...but I'm curious as\n> > to why these never got into v6.5.0 in the first place...\n> > \n> \n> I applied the safe ones, like the copy.c one. People objected to most\n> of the others, for reasons I have forgotten.\n\nMost objections were because they were submitted just before the release\ndate of 6.5.0. Now three months have passed.\nWhich were the unsafe patches? 
If an unsafe patch is one that can break\nsome essential piece of code I would classify them in the following way:\n\n array\t\tsafe, important bug fix to my contrib\n\n contrib\t\tsafe, changes to makefiles in my contrib\n\n copy-cancel-query\tsafe, it can't break anything unless you hit ^C\n\n emacs-vars\t\tsafe, only cosmetic changes required by emacs20\n\n free-tuple-mem\tsafe, it is under #ifdef and disabled by default.\n\t\t\tI won't recommend enabling it in a production\n\t\t\tenvironment, but it could be the solution of many\n\t\t\theadaches for some people, like my old sinval\n\t\t\tpatch.\n\n psql-readline\tsafe, it just sets a readline documented variable\n\n set-variable\tmostly safe, except the queryLimit stuff which can\n\t\t\tbe removed if you don't trust it.\n\t\t\tThe pg_options variable can be set only by the\n\t\t\tsuperuser.\n\nIf you don't like the queryLimit stuff I can send you a new patch for the\npg_options variable only.\n\n-- \nMassimo Dal Zotto\n\n+----------------------------------------------------------------------+\n| Massimo Dal Zotto email: [email protected] |\n| Via Marconi, 141 phone: ++39-0461534251 |\n| 38057 Pergine Valsugana (TN) www: http://www.cs.unitn.it/~dz/ |\n| Italy pgp: finger [email protected] |\n+----------------------------------------------------------------------+\n",
"msg_date": "Fri, 3 Sep 1999 22:27:06 +0200 (MEST)",
"msg_from": "Massimo Dal Zotto <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.5.2"
},
{
"msg_contents": "> > On Sun, 29 Aug 1999, Tom Lane wrote:\n> > \n> > > Tatsuo Ishii <[email protected]> writes:\n> > > > Are we going to release 6.5.2? If yes, then when?\n> > > \n> > > Marc proposed Sept 1 (back on 8/15), and there were no objections...\n> > \n> > And its still the date I'm planning around...So Wednesday this week :)\n> > \n> \n> May I ask that the patches I submitted two months ago for 6.5.0 are applied\n> at least to 6.5.2?\n> \n> Here is the 6.5.1 version of my patches.\n\nI have applied these to 6.6. 6.5.* is only major bug fixes, not even\nminor fixes.\n\nI applied the contrib, copy-cancel, emacs-vars(for trace.* only), psql\nreadline(already applied), and set_variable (pg_options only,\nquery_limit was removed and I don't have agreement to re-add it.)\n\nI skipped the tuple-freemem because someone complained it was a hack. \nHopefully we can address it properly before 6.6.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 27 Sep 1999 16:36:05 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.5.2"
},
{
"msg_contents": "> Most objections were beacause they were submitted just before the release\n> date of 6.5.0. Now three months have passed.\n> Which were the unsafe pathes? If an unsafe patch is one that can break\n> some essential piece of code I would classify them in the following way:\n> \n> array\t\tsafe, important bug fix to my contrib\n\nApplied.\n\n> \n> contrib\t\tsafe, changes to makefiles in my contrib\n\nApplied.\n\n> \n> copy-cancel-query\tsafe, it can't break anything unless you hit ^C\n\nApplied.\n\n> \n> emacs-vars\t\tsafe, only cosmetic changes required by emacs20\n\nApplied.\n\n> \n> free-tuple-mem\tsafe, it is under #ifdef and disabled by default.\n> \t\t\tI won't recommend enabling it in a production\n> \t\t\tenvironment, but it could be the solution of many\n> \t\t\theadaches for some people, like my old sinval\n> \t\t\tpatch.\n\nWe would rather not add this code.\n\n> \n> psql-readline\tsafe, it just sets a readline documented variable\n\nApplied.\n\n> set-variable\tmostly safe, except the queryLimit stuff which can\n> \t\t\tbe removed if you don't trust it.\n> \t\t\tThe pg_options variable can be set only by the\n> \t\t\tsuperuser.\n\nApplied, except query limit.\n\n> \n> If you don't like the queryLimit stuff I can send you a new patch for the\n> pg_options variable only.\n\nRemoved manually. Thanks. I have been far behind in keeping up with\npatches.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 27 Sep 1999 16:41:08 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.5.2"
},
{
"msg_contents": "> Massimo Dal Zotto <[email protected]> writes:\n> > May I ask that the patches I submitted two months ago for 6.5.0 are applied\n> > at least to 6.5.2?\n> > Here is the 6.5.1 version of my patches.\n> \n> I don't much care for QueryLimit (we got rid of that for a reason!)\n> nor for the FREE_TUPLE_MEMORY patch, but the rest of this looks safe\n> enough... but are we in the business of adding features to 6.5.*,\n> even little ones? Maybe it should only go in current.\n\nApplied to 6.6 only, without tuple memory or query limit fix.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 27 Sep 1999 16:54:12 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.5.2"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n>\n> Removed manually. Thanks. I have been far behind in keeping up with\n> patches.\n\nLooks like :)\n\n Well, actually running the regression test emits a lot of\n\n NOTICE: Auto-creating query reference to table <table-name>\n\n from inside the parser - which make most of the regression\n tests fail. Not sure which of the patches introduced them\n and why. Could you please take a look at it? On the things\n I'm doing right now (adding fields + indices to system\n catalogs and modifying code that's invoked during heap_open()\n or the like) I feel much better if I get identical (+\n correct) regression results before'n'after.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 27 Sep 1999 23:10:12 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.5.2"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> \n> >\n> > Removed manually. Thanks. I have been far behind in keeping up with\n> > patches.\n> \n> Looks like :)\n> \n> Well, actually running regression test emits alot of\n> \n> NOTICE: Auto-creating query reference to table <table-name>\n> \n> from inside the parser - which make most of the regression\n> tests fail. Not sure which of the patches introduced them\n> and why. Could you please take a look at it? On the things\n> I'm doing right now (adding fields + indices to system\n> catalogs and modifying code that's invoked during heap_open()\n> or the like) I feel much better if I get identical (+\n> correct) regression results before'n'after.\n> \n\nThomas, can you re-generate the regression output so my new NOTICE is in\nthere? I tried, but there are too many errno messages differences to\nget just the new NOTICE stuff in there.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 27 Sep 1999 22:55:14 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.5.2"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> \n> >\n> > Removed manually. Thanks. I have been far behind in keeping up with\n> > patches.\n> \n> Looks like :)\n> \n> Well, actually running regression test emits alot of\n> \n> NOTICE: Auto-creating query reference to table <table-name>\n> \n> from inside the parser - which make most of the regression\n> tests fail. Not sure which of the patches introduced them\n> and why. Could you please take a look at it? On the things\n> I'm doing right now (adding fields + indices to system\n> catalogs and modifying code that's invoked during heap_open()\n> or the like) I feel much better if I get identical (+\n> correct) regression results before'n'after.\n\nI have backed this change out. I will re-enable it when things are\nquiet and the regression tests can be re-generated.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 29 Sep 1999 17:41:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.5.2"
},
{
"msg_contents": ">\n> > Bruce Momjian wrote:\n> >\n> > >\n> > > Removed manually. Thanks. I have been far behind in keeping up with\n> > > patches.\n> >\n> > Looks like :)\n> >\n> > Well, actually running regression test emits alot of\n> >\n> > NOTICE: Auto-creating query reference to table <table-name>\n> >\n> > from inside the parser - which make most of the regression\n> > tests fail. Not sure which of the patches introduced them\n> > and why. Could you please take a look at it? On the things\n> > I'm doing right now (adding fields + indices to system\n> > catalogs and modifying code that's invoked during heap_open()\n> > or the like) I feel much better if I get identical (+\n> > correct) regression results before'n'after.\n>\n> I have backed this change out. I will re-enable it when things are\n> quiet and the regression tests can be re-generated.\n\n\n[pgsql@hot] ~/devel/src/test/regress > ./checkresults\n====== int2 ======\n10c10\n< ERROR: pg_atoi: error reading \"100000\": Numerical result out of range\n---\n> ERROR: pg_atoi: error reading \"100000\": Math result not representable\n====== int4 ======\n10c10\n< ERROR: pg_atoi: error reading \"1000000000000\": Numerical result out of range\n---\n> ERROR: pg_atoi: error reading \"1000000000000\": Math result not representable\n[pgsql@hot] ~/devel/src/test/regress >\n\n\n\n Such a regression result while we're in the middle of feature\n development.\n\n I'm really impressed - if we only can keep it on this level!\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 30 Sep 1999 00:16:18 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.5.2"
}
] |
[
{
"msg_contents": "For some reason we currently support sub-SELECT expressions only\nin WHERE and HAVING clauses, not in the target list of a SELECT.\nDoes anyone know why this is?\n\nThere are a number of places in the planner/optimizer that would need\nto be fixed to make it happen, but the changes are utterly trivial\n(calling certain transformation routines on the targetlist as well as\nfor WHERE and HAVING ... probably about a dozen lines total ...).\nAnd a quick look at the executor doesn't show any reason why it would\nhave a problem, either. Is there something fundamental that I'm\nmissing? If not, why wasn't this done to begin with?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 29 Aug 1999 13:47:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Why not sub-selects in targetlists?"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> For some reason we currently support sub-SELECT expressions only\n> in WHERE and HAVING clauses, not in the target list of a SELECT.\n> Does anyone know why this is?\n> \n> There are a number of places in the planner/optimizer that would need\n> to be fixed to make it happen, but the changes are utterly trivial\n> (calling certain transformation routines on the targetlist as well as\n> for WHERE and HAVING ... probably about a dozen lines total ...).\n> And a quick look at the executor doesn't show any reason why it would\n> have a problem, either. Is there something fundamental that I'm\n> missing? If not, why wasn't this done to begin with?\n\nAs usual, I just hadn't time to do more than it's done for\n6.3.X -:) Subselects were not in my TODO list, I made a base\nimplementation because there were many requests for them.\n\nBTW, please don't forget subselects in FROM.\n\nVadim\n",
"msg_date": "Mon, 30 Aug 1999 09:39:50 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Why not sub-selects in targetlists?"
},
{
"msg_contents": "Vadim Mikheev <[email protected]> writes:\n> Tom Lane wrote:\n>> For some reason we currently support sub-SELECT expressions only\n>> in WHERE and HAVING clauses, not in the target list of a SELECT.\n>> Does anyone know why this is?\n\n> As usual, I just hadn't time to do more than it's done for\n> 6.3.X -:) Subselects were not in my TODO list, I made base\n> implementation because of there were many requests for them.\n\nOK, I'll see about adding the missing transformations in the\nplanner. Shouldn't be hard.\n\n> BTW, please don't forget subselects in FROM.\n\nThat seems to be a considerably bigger task :-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 30 Aug 1999 11:04:12 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Why not sub-selects in targetlists? "
}
] |
[
{
"msg_contents": "Hello,\n\nI have a very big table (valori) with the following columns:\n- data datetime\n- debitor float8\n- creditor float8\n\nIt has a btree index on data (non unique).\n\nThe following select is using the index:\n\nselect * from valori where data > '25-10-1999'\n\nNOTICE: QUERY PLAN:\nIndex Scan using valori_data on valori (cost=1550.17 rows=24324\nwidth=8)\n\n\nBut this one:\n\nselect data from valori order by desc limit 1\nNOTICE: QUERY PLAN:\n\nSort (cost=3216.01 rows=72970 width=8)\n -> Seq Scan on valori (cost=3216.01 rows=72970 width=8)\n\nI thought that if the 'order by' implies a column which has a btree\nindex, the sort would not actually be executed and the index would be\nused instead. But it seems that it won't.\n\nThen, the question is: how should I retrieve, extremely fast, the first\n'data' greater than a given value from that table?\n\nAlso, the following query:\n\nselect max(data) from valori where data<'2-3-1999'\n\nis not using the index optimally; it just limits the records for the\naggregate function instead of picking the first value from the left of\nthe index tree lower than '2-3-1999'.\n\n\nWaiting for some ideas,\nConstantin Teodorescu\nFLEX Consulting Braila, ROMANIA\n",
"msg_date": "Mon, 30 Aug 1999 06:50:18 +0000",
"msg_from": "Constantin Teodorescu <[email protected]>",
"msg_from_op": true,
"msg_subject": "An optimisation question"
},
{
"msg_contents": "Constantin Teodorescu <[email protected]> writes:\n> select data from valori order by desc limit 1\n> NOTICE: QUERY PLAN:\n> Sort (cost=3216.01 rows=72970 width=8)\n-> Seq Scan on valori (cost=3216.01 rows=72970 width=8)\n\n> I thought that if the 'order by' implies an column which have a btree\n> index, the sort would not be actually executed and the index will be\n> used instead. But it seems that it won't.\n\nThat's fixed for 6.6. A workaround that partially solves the problem\nfor 6.5 is to add a dummy WHERE clause referencing the ORDER-BY item:\n\tselect data from valori where data > '1/1/1800'\n\torder by data limit 1;\nThe WHERE is needed to get the 6.5 optimizer to consider the index\nat all. In a quick test it seems this works for normal order but not\nDESC order... you could try applying the backwards-index patch that\nsomeone (Hiroshi or Tatsuo, I think) posted recently.\n\n> Also, the following query :\n> select max(data) from valori where data<'2-3-1999'\n> is not using optimally the index, it just limit the records for the\n> aggregate function instead of picking the first value from the left of\n> the index tree lower than '2-3-1999'.\n\nThere's no prospect of that happening anytime soon, I fear; there is no\nconnection between aggregate functions and indexes in the system, and\nno easy way of making one. This workaround works in current sources:\n\nexplain select data from valori where data<'2-3-1999'\norder by data desc limit 1;\nNOTICE: QUERY PLAN:\n\nIndex Scan Backward using valori_i on valori (cost=21.67 rows=334 width=8)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 30 Aug 1999 10:33:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] An optimisation question "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> That's fixed for 6.6. A workaround that partially solves the problem\n> for 6.5 is to add a dummy WHERE clause referencing the ORDER-BY item:\n> select data from valori where data > '1/1/1800'\n> order by data limit 1;\n> The WHERE is needed to get the 6.5 optimizer to consider the index\n> at all. In a quick test it seems this works for normal order but not\n> DESC order... you could try applying the backwards-index patch that\n> someone (Hiroshi or Tatsuo, I think) posted recently.\n\nYeap , I will search for it.\n\n> There's no prospect of that happening anytime soon, I fear; there is no\n> connection between aggregate functions and indexes in the system, and\n> no easy way of making one.\n\nUnderstand that, but.\nSelects that deal ONLY with columns included in an index should operate\nexclusively on that index and return the results. Example : select\nsum(price) , price*1.2, max(price) from products , assuming that price\nis included in an index it would be less cost to scan the index rather\nthan the whole table.\n\nI remember that Paradox tables had indexes and the index was also a\nParadox table or some sort of that. Internally it's possible that a\nnumber of procedures related to tables could be applied to indexes. So,\na sum(price) from a_table could be easily switched to be done on any\nindex that contain the price field.\n\nWhat do you think?\n\nConstantin Teodorescu\nFLEX Consulting BRaila, ROMANIA\n",
"msg_date": "Mon, 30 Aug 1999 15:40:51 +0000",
"msg_from": "Constantin Teodorescu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] An optimisation question"
}
] |
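The rewrite Tom suggests in the thread above — expressing `max(col)` below a cutoff as an index-friendly `ORDER BY col DESC LIMIT 1` — can be sketched outside PostgreSQL. This is a minimal illustration using Python's bundled sqlite3 as a stand-in database (not the original PostgreSQL 6.5 setup; the table and column names just mirror the thread):

```python
# Sketch of the "max via ORDER BY ... LIMIT 1" rewrite from the thread,
# using sqlite3 as a stand-in engine. Table/column names follow the
# thread; the data here is invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE valori (data TEXT, debitor REAL, creditor REAL)")
cur.execute("CREATE INDEX valori_data ON valori (data)")
rows = [("1999-01-%02d" % d, 0.0, 0.0) for d in range(1, 29)]
cur.executemany("INSERT INTO valori VALUES (?, ?, ?)", rows)

# Aggregate form: max(data) below a cutoff. As discussed above, the
# planner cannot in general connect an aggregate to an index.
cur.execute("SELECT max(data) FROM valori WHERE data < '1999-01-15'")
via_max = cur.fetchone()[0]

# Equivalent rewrite: the same value as the first row of a descending,
# index-friendly scan bounded by the same WHERE clause.
cur.execute("SELECT data FROM valori WHERE data < '1999-01-15' "
            "ORDER BY data DESC LIMIT 1")
via_limit = cur.fetchone()[0]

print(via_max, via_limit)  # both '1999-01-14'
```

The two queries return the same row; the point of the rewrite is that the second form gives the optimizer an ordered, bounded scan it can satisfy directly from the index.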
[
{
"msg_contents": "\n> I posted this message in pgsql-novice, but got no answer.\n> I dare forward it to pgsql-hackers, though I fear it could be a misuse\n> of that list.\n> \nI think it is a good question for the hackers list.\n\n> Spirou wrote:\n> > \n> > I can't find my classes anymore (cf below ...) !\n> \n> > ---------------------------\n> > INSERT INTO pg_group VALUES ('http_user')\n> > CREATE USER \"www-data\" IN GROUP http_user;\n> > CREATE USER nobody IN GROUP http_user;\n> \nLast time I used groups successfully I had to create the group as\na unix group, then insert the unix group id into the grosysid column.\nActually any id that shows in /etc/group is ok (but imho a bug).\nI think it is also a requirement that the group id is not used as a\nuser id in pg_shadow.\nI am not sure that anyone knows the initially intended architecture.\nPlease speak up anybody, if you know the intended/wanted design.\n\nAndreas \n",
"msg_date": "Mon, 30 Aug 1999 11:58:47 +0200",
"msg_from": "Zeugswetter Andreas IZ5 <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] [Fwd: bug ? get_groname: group 0 not found]"
}
] |
[
{
"msg_contents": "Hi,\n\nplease add the file ipc.patch (patch for the cygipc library) into src/win32\ndirectory and apply the patch for README.NT (readme.patch). I think it\nshould go into both the 6.5.2 and current trees.\n\nI have no reaction from the author of the cygipc library yet, so it will be\nbetter to include the patch into the sources of PostgreSQL\n\n\t\t\tDan",
"msg_date": "Mon, 30 Aug 1999 14:15:09 +0200",
"msg_from": "Horak Daniel <[email protected]>",
"msg_from_op": true,
"msg_subject": "IPC on win32 - additions for 6.5.2 and current trees"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Horak Daniel\n> Sent: Monday, August 30, 1999 9:15 PM\n> To: '[email protected]'; '[email protected]'\n> Subject: [HACKERS] IPC on win32 - additions for 6.5.2 and current trees\n> \n> \n> Hi,\n> \n> please add the file ipc.patch (patch for the cygipc library) into \n> src/win32\n> directory and apply the patch for README.NT (readme.patch). I think it\n> should go into both the 6.5.2 and current trees.\n> \n> I have no reaction from the author of the cygipc library yet, so \n> it will be\n> better to include the patch into the sources of PostgreSQL\n>\n\nIt's me who made the patch. Yutaka Tanida also provided a patch\nfor cygipc library to prevent lock freezing by changing the\nimplementation of semaphore.\nThese patches are necessary to prevent freezing in cygwin port. \n \nIf there's no objection,I would add a new ipc.patch provided by \nYutaka into src/win32 and commit the patch for README.NT\nfor current tree.\n\nRegards.\n\nHiroshi Inoue\[email protected]",
"msg_date": "Thu, 23 Sep 1999 19:21:55 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] IPC on win32 - additions for 6.5.2 and current trees"
},
{
"msg_contents": "> Hi,\n> \n> please add the file ipc.patch (patch for the cygipc library) into src/win32\n> directory and apply the patch for README.NT (readme.patch). I think it\n> should go into both the 6.5.2 and current trees.\n> \n> I have no reaction from the author of the cygipc library yet, so it will be\n> better to include the patch into the sources of PostgreSQL\n> \n> \t\t\tDan\n> \n> \n\n\nI am attaching our current README.NT file. I have done some cleanups\nand appended the patch to the README file.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nFrom: \"Joost Kraaijeveld\" <[email protected]>\nTo: \"Pgsql-Ports@Postgresql. Org\" <[email protected]>\nSubject: RE: [PORTS] Re: psql under win32\nDate: Wed, 21 Apr 1999 07:07:47 +0200\nMessage-ID: <[email protected]>\nMIME-Version: 1.0\n\nInstalling PostgreSQL on NT:\n\n---------------------------------------------------------------------------\n\nIt can be done by done by typing configure, make and make install.\n\n1. Install the Cygwin package\n2. Update to EGCS 1.1.2\n (This may be optional.)\n\n---------------------------------------------------------------------------\n\n\t\t\t\tOPTIONAL\n\n1. Install the Andy Piper Tools (http://www.xemacs.freeserve.co.uk/)\n (This may be optional.)\n\n---------------------------------------------------------------------------\n\n\t\t\t CYGWIN32 INSTALLATION\n\n1. Download the Cygwin32 IPC Package by Ludovic LANGE \n http://www.multione.capgemini.fr:80/tools/pack_ipc/current.tar.gz\n2. Untar the package and follow the readme instructions.\n3. Apply the patch from the file.\n4. I tested 1.03.\n5. 
I used the \\cygwin-b20\\h-i568-cygwin32\\i586-cygwin32\\lib and\n\\cygwin-b20\\h-i568-cygwin32\\i586-cygwin32\\include\\sys instead of the\n/usr/local/lib and /usr/local/include/sys.\n\nNOTE:\nAlso, the cygnus-bindir has to be placed in the path before the\nNT-directories, because the sort.exe has to be taken for cygnus, not\nNT.\n\n---------------------------------------------------------------------------\n\n\t\t POSTGRESQL INSTALL WITH NT SPECIFICS\n\n1. Download the current version of PostgreSQL.\n2. Untar the package.\n3. Copy the files from \\pgsql\\src\\win32 according to the readme file.\n4. Edit \\pgsql\\src\\template\\cygwin32 if needed (I had to adjust the YFLAGS\npath).\n5. ./configure\n6. make\n7. create the directory /usr/local/pgsql manually: the mkdir cannot create a\ndirectory 2 levels deep in one step.\n8. make install\n9. cd /usr/local/pgsql/doc\n10. make install\n11. Set the environmental data\n12. Initdb --username=jkr (do not run this command as administrator)\n\n13. Open a new Cygwin command prompt\n14. Start \"ipc-deamon&\" (background process)\n15. Start \"postmaster -i 2>&1 > /tmp/postgres.log &\" (background process)\n16. Start \"tail -f /tmp/postgres.log\" to see the messages\n\n17. cd /usr/src/pgsql/src/test/regress\n18. make all runtest\n\nAll tests should run, although the latest snapshot I tested (18-4)\nappears to have some problems with locking.\n\nNOTE:\nBy default, PostgreSQL clients like psql communicate using unix domain\nsockets, which don't work on NT. 
Start the postmaster with -i, and \nwhen connecting to the database from a client, set the PGHOST\nenvironment variable to 'localhost' or supply the hostname on the\ncommand line.\n\nJoost\n\n\n---------------------------------------------------------------------------\n\nFIX FOR POSTGRESQL FREEZING ON NT MACHINES - EVERYONE SHOULD APPLY THIS PATCH\n\n\nFrom: \"Hiroshi Inoue\" <[email protected]>\nTo: \"Horak Daniel\" <[email protected]>, \"'Tom Lane'\" <[email protected]>\nCc: <[email protected]>\nSubject: RE: [HACKERS] backend freezeing on win32 fixed (I hope ;-) ) \nDate: Wed, 18 Aug 1999 08:45:28 +0900\nMessage-ID: <[email protected]>\nMIME-Version: 1.0\nContent-Type: text/plain;\n\tcharset=\"iso-8859-1\"\nContent-Transfer-Encoding: 7bit\nX-Priority: 3 (Normal)\nX-MSMail-Priority: Normal\nX-Mailer: Microsoft Outlook 8.5, Build 4.71.2173.0\nX-MimeOLE: Produced By Microsoft MimeOLE V4.72.2106.4\nIn-reply-to: <2E7F82FAC1FCD2118E1500A024B3BF907DED3F@exchange.mmp.plzen-city.cz>\nImportance: Normal\nSender: [email protected]\nPrecedence: bulk\nStatus: RO\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Horak Daniel\n> Sent: Tuesday, August 17, 1999 9:06 PM\n> To: 'Tom Lane'\n> Cc: '[email protected]'\n> Subject: RE: [HACKERS] backend freezeing on win32 fixed (I hope ;-) )\n\nYutaka Tanida [[email protected]] and I have examined IPC\nlibrary.\n\nWe found that postmaster doesn't call exec() after fork() since v6.4.\n\nThe value of static/extern variables which cygipc library holds may\nbe different from their initial values when postmaster fork()s child\nbackend processes.\n\nI made the following patch for cygipc library on trial.\nThis patch was effective for Yutaka's test case.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n*** sem.c.orig\tTue Dec 01 00:16:25 1998\n--- sem.c\tTue Aug 17 13:22:06 1999\n***************\n*** 58,63 ****\n--- 58,78 ----\n static int\t\t GFirstSem\t = 0;\t\t/*PCPC*/\n static int\t\t 
GFdSem\t ;\t\t/*PCPC*/\n\n+ static pid_t\tGProcessId = 0;\n+\n+ static void\tinit_globals(void)\n+ {\n+ \tpid_t pid;\n+\n+ \tif (pid=getpid(), pid != GProcessId)\n+ \t{\n+ \t\tGFirstSem = 0;\n+ \t\tused_sems = used_semids = max_semid = 0;\n+ \t\tsem_seq = 0;\n+ \t\tGProcessId = pid;\n+ \t}\n+ }\n+\n /************************************************************************/\n /* Demande d'acces a la zone partagee de gestion des semaphores\t\t*/\n /************************************************************************/\n***************\n*** 77,82 ****\n--- 92,98 ----\n {\n int LRet ;\n\n+ \tinit_globals();\n if( GFirstSem == 0 )\n {\n \tif( IsGSemSemExist() )\n*** shm.c.orig\tTue Dec 01 01:04:57 1998\n--- shm.c\tTue Aug 17 13:22:27 1999\n***************\n*** 59,64 ****\n--- 59,81 ----\n static int\t\t GFirstShm\t = 0;\t\t/*PCPC*/\n static int\t\t GFdShm\t ;\t\t/*PCPC*/\n\n+ /*****************************************/\n+ /*\tInitialization of static variables */\n+ /*****************************************/\n+ static pid_t GProcessId = 0;\n+ static void init_globals(void)\n+ {\n+ \tpid_t pid;\n+\n+ \tif (pid=getpid(), pid != GProcessId)\n+ \t{\n+ \t\tGFirstShm = 0;\n+ \t\tshm_rss = shm_swp = max_shmid = 0;\n+ \t\tshm_seq = 0;\n+ \t\tGProcessId = pid;\n+ \t}\n+ }\n+\n /************************************************************************/\n /* Demande d'acces a la zone partagee de gestion des shm\t\t*/\n /************************************************************************/\n***************\n*** 82,87 ****\n--- 99,105 ----\n {\n int LRet ;\n\n+ init_globals();\n if( GFirstShm == 0 )\n {\n if( IsGSemShmExist() )\n*** msg.c.orig\tTue Dec 01 00:16:09 1998\n--- msg.c\tTue Aug 17 13:20:04 1999\n***************\n*** 57,62 ****\n--- 57,77 ----\n static int\t\t GFirstMsg\t = 0;\t\t/*PCPC*/\n static int\t\t GFdMsg\t ;\t\t/*PCPC*/\n\n+ /*****************************************/\n+ /*\tInitialization of static variables */\n+ 
/*****************************************/\n+ static pid_t GProcessId = 0;\n+ static void init_globals(void)\n+ {\n+ \tpid_t pid;\n+\n+ \tif (pid=getpid(), pid != GProcessId)\n+ \t{\n+ \t\tGFirstMsg = 0;\n+ \t\tmsgbytes = msghdrs = msg_seq = used_queues = max_msqid = 0;\n+ \t\tGProcessId = pid;\n+ \t}\n+ }\n /************************************************************************/\n /* Demande d'acces a la zone partagee de gestion des semaphores\t\t*/\n /************************************************************************/\n***************\n*** 79,84 ****\n--- 94,100 ----\n {\n int LRet ;\n\n+ init_globals();\n if( GFirstMsg == 0 )\n {\n if( IsGSemMsgExist() )",
"msg_date": "Mon, 27 Sep 1999 15:55:00 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] IPC on win32 - additions for 6.5.2 and current trees"
}
] |
[
{
"msg_contents": "Hi Denny,\n\nI solved this problem (backend crashes when we delete a group without\nrevoking privileges) adding the group again with the same grosysid, revoking\nall privileges on all tables and deleting this group.\n\nBest Regards,\n\nRicardo Coelho.\n\n----- Original Message -----\nFrom: D Herssein <[email protected]>\nTo: Ricardo Coelho <[email protected]>\nSent: Monday, August 30, 1999 1:03 PM\nSubject: HELP Re: pg_group, etc..\n\n\n> I just read your post AFTER I sent the HELP request to the group.\n> I must have deleted the group/user in the wrong order while playing with\n> the db trying to learn how to gran group access to users.\n> How do I get myself back to normal?\n>\n>\n> --\n> Life is complicated. But the simpler alternatives are not very\n> desirable. (R' A. Kahn)\n>\n\n",
"msg_date": "Mon, 30 Aug 1999 14:54:35 -0300",
"msg_from": "\"Ricardo Coelho\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: HELP Re: pg_group, etc.."
}
] |
[
{
"msg_contents": "\ngrowing tired of the ever increasing \"User Unknown\" or \"Host Unreachable\"\nmessages, I've implemented 'bouncefilter2' on the majordomo lists, which\nacts as a sort of 'inbetween' agent to catch, record and act on these\nDSNs...\n\nThe goal is to reduce the overall processing required, and increase the\nresponsiveness of the lists by eliminating the queuing resulting from\nthese messages...\n\nLet me know if you notice any unusual problems...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 30 Aug 1999 14:55:44 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "bouncefilter added to majordomo lists ..."
}
] |
[
{
"msg_contents": "> and why not something like:\n>\n> CREATE FUNCTION first_word ( :x CHAR VARYING(1000) )\n> RETURNS CHAR VARYING(40)\n> LANGUAGE SQL\n> RETURN TRIM (SUBSTRING ((:X || ' ') FROM 1 FOR POSITION (' ' IN (:X || '')\n> )));\n>\n> as described in SQL/PSM (SQL Persistent Stored Modules) (see \"A Guide To The\n> SQL Standard\" apendix E)\n> instead of re-invent the wheel. ;)\n\nBecause the SQL parser has to parse the stored procedure to find the end. In\npostgres you can have embedded tcl and any arbitrary language, so it is quite\ndifficult to handle that properly.\n\njochen\n",
"msg_date": "Mon, 30 Aug 1999 22:54:30 +0200",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Quoting in stored procedures"
}
] |
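Jochen's point above — that the server cannot grammatically parse an arbitrary embedded language (Tcl, etc.) to find where a function body ends, which is why Postgres delimits bodies as quoted string literals — can be illustrated with a toy scanner. This is a hypothetical Python sketch, not PostgreSQL source:

```python
# Toy illustration: a parser that scans for a closing keyword breaks as
# soon as the embedded language uses that keyword itself, while a
# string-delimited body has an unambiguous end regardless of what
# language is inside. Hypothetical example statement and helpers.

stmt = ("CREATE FUNCTION f() RETURNS text LANGUAGE pltcl "
        "AS 'if {1} { return \"END\" }'")

def body_by_keyword(s):
    # SQL/PSM-style scan: assume the body runs up to the word END.
    start = s.index("AS ") + 3
    return s[start:s.index("END", start)]

def body_by_quotes(s):
    # What Postgres does instead: the body is just a quoted literal,
    # so its end is the closing quote, whatever the language inside.
    start = s.index("'") + 1
    return s[start:s.rindex("'")]

print(repr(body_by_keyword(stmt)))  # truncated mid-Tcl
print(repr(body_by_quotes(stmt)))   # the full Tcl body
```

The keyword-based scan truncates the body at the embedded `END`, while the quote-delimited scan recovers the whole Tcl fragment — the reason a grammar-level SQL/PSM syntax is hard to reconcile with arbitrary embedded languages.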
[
{
"msg_contents": "I have had a request to add multi-byte support to the Debian binary\npackages of PostgreSQL.\n\nSince I live in England, I have personally no need of this and therefore\nhave little understanding of the implications.\n\nIf I change the packages to use multi-byte support, (UNICODE (UTF-8) is\nsuggested as the default), will there be any detrimental effects on the\nfairly large parts of the world that don't need it? Should I try to\nprovide two different packages, one with and one without MB support?\n\n-- \n Vote against SPAM: http://www.politik-digital.de/spam/\n ========================================\nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"For what shall it profit a man, if he shall gain the \n whole world, and lose his own soul?\" Mark 8:36 \n\n\n",
"msg_date": "Mon, 30 Aug 1999 22:19:57 +0100",
"msg_from": "\"Oliver Elphick\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Implications of multi-byte support in a distribution"
},
{
"msg_contents": "> I have had a request to add multi-byte support to the Debian binary\n> packages of PostgreSQL.\n> Since I live in England, I have personally no need of this and therefore\n> have little understanding of the implications.\n> If I change the packages to use multi-byte support, (UNICODE (UTF-8) is\n> suggested as the default), will there be any detrimental effects on the\n> fairly large parts of the world that don't need it? Should I try to\n> provide two different packages, one with and one without MB support?\n\nProbably. The downside to having MB support is reduced performance and\nperhaps functionality. If you don't need it, don't build it...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 31 Aug 1999 07:04:26 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Implications of multi-byte support in a distribution"
},
{
"msg_contents": "On Mon, 30 Aug 1999, Oliver Elphick wrote:\n> I have had a request to add multi-byte support to the Debian binary\n> packages of PostgreSQL.\n> \n> Since I live in England, I have personally no need of this and therefore\n> have little understanding of the implications.\n> \n> If I change the packages to use multi-byte support, (UNICODE (UTF-8) is\n\n I consider Unicode as a compromise, and as such, it is the worst case. I\ndon't know anyone who need Unicode directly. Russian users need koi8 and\nwin1251, Chineese, Japaneese and other folks need their apropriate\nencodings (BIG5 and all that).\n Don't know what should be reasonable default; in any case installation\nscript should ask about user preference and run initdb -E with user\nencoding to set default.\n\n> suggested as the default), will there be any detrimental effects on the\n> fairly large parts of the world that don't need it? Should I try to\n> provide two different packages, one with and one without MB support?\n\n But of course. Many people do not want MB support out of distributive.\nSuspicious sysadmin should reject such package, if (s)he do not understand\nwhat/where/why MB - and it is right.\n Suporting two different packages is hard, but support only MB-enabled\npackage will led to many demands \"please provide smaller/better/faster\nPostgreSQL package\".\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n",
"msg_date": "Tue, 31 Aug 1999 12:36:04 +0400 (MSD)",
"msg_from": "Oleg Broytmann <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Implications of multi-byte support in a distribution"
},
{
"msg_contents": ">> I have had a request to add multi-byte support to the Debian binary\n>> packages of PostgreSQL.\n>> Since I live in England, I have personally no need of this and therefore\n>> have little understanding of the implications.\n>> If I change the packages to use multi-byte support, (UNICODE (UTF-8) is\n>> suggested as the default), will there be any detrimental effects on the\n>> fairly large parts of the world that don't need it? Should I try to\n>> provide two different packages, one with and one without MB support?\n>\n>Probably. The downside to having MB support is reduced performance and\n>perhaps functionality. If you don't need it, don't build it...\n\nNot really. I did the regression test with/without multi-byte enabled.\n\nwith MB:\t2:53:92 elapsed\nw/o MB:\t\t2:52.92 elapsed\n\nPerhaps the worst case for MB would be regex ops. If you do a lot of\nregex queries, performance degration might not be neglectable.\n\nLoad module size:\n\nwith MB:\t1208542\nw/o MB:\t\t1190925\n\n(difference is 17KB)\n\nTalking about the functionality, I don't see any missing feature with\nMB comparing w/o MB. (there are some features only MB has. for\nexample, SET NAMES).\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 31 Aug 1999 18:29:21 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Implications of multi-byte support in a distribution "
},
{
"msg_contents": "On Tue, 31 Aug 1999, Tatsuo Ishii wrote:\n\n> Date: Tue, 31 Aug 1999 18:29:21 +0900\n> From: Tatsuo Ishii <[email protected]>\n> To: Thomas Lockhart <[email protected]>\n> Cc: Oliver Elphick <[email protected]>, [email protected],\n> [email protected]\n> Subject: Re: [HACKERS] Implications of multi-byte support in a distribution \n> \n> >> I have had a request to add multi-byte support to the Debian binary\n> >> packages of PostgreSQL.\n> >> Since I live in England, I have personally no need of this and therefore\n> >> have little understanding of the implications.\n> >> If I change the packages to use multi-byte support, (UNICODE (UTF-8) is\n> >> suggested as the default), will there be any detrimental effects on the\n> >> fairly large parts of the world that don't need it? Should I try to\n> >> provide two different packages, one with and one without MB support?\n> >\n> >Probably. The downside to having MB support is reduced performance and\n> >perhaps functionality. If you don't need it, don't build it...\n> \n> Not really. I did the regression test with/without multi-byte enabled.\n> \n> with MB:\t2:53:92 elapsed\n> w/o MB:\t\t2:52.92 elapsed\n> \n> Perhaps the worst case for MB would be regex ops. If you do a lot of\n> regex queries, performance degration might not be neglectable.\n\nIt should be. What would be nice is to have a column-specific\nMB support. But I doubt if it's possible.\n\n> \n> Load module size:\n> \n> with MB:\t1208542\n> w/o MB:\t\t1190925\n> \n> (difference is 17KB)\n> \n> Talking about the functionality, I don't see any missing feature with\n> MB comparing w/o MB. (there are some features only MB has. 
for\n> example, SET NAMES).\n> --\n> Tatsuo Ishii\n> \n> ************\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Tue, 31 Aug 1999 15:56:34 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Implications of multi-byte support in a distribution "
},
{
"msg_contents": ">> Perhaps the worst case for MB would be regex ops. If you do a lot of\n>> regex queries, performance degration might not be neglectable.\n>\n>It should be. What would be nice is to have a column-specific\n>MB support. But I doubt if it's possible.\n\nThat shouldn't be too difficult, if we have an encoding infomation\nwith each text column or literal. Maybe now is the time to introuce\nNCHAR?\n\nBTW, it is interesting that people does not hesitate to enable\nwith-locale option even if they only use ASCII. I guess the\nperformance degration by enabling locale is not too small.\n--\nTatsuo Ishii\n",
"msg_date": "Wed, 01 Sep 1999 11:30:44 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Implications of multi-byte support in a distribution "
},
{
"msg_contents": "> That shouldn't be too difficult, if we have an encoding infomation\n> with each text column or literal. Maybe now is the time to introuce\n> NCHAR?\n\nI've been waiting for a go-ahead from folks who would use it. imho the\nway to do it is to use Postgres' type system to implement it, rather\nthan, for example, encoding \"type\" information into each string. We\ncan also define a \"default encoding\" for each database as a new column\nin pg_database...\n\n> BTW, it is interesting that people does not hesitate to enable\n> with-locale option even if they only use ASCII. I guess the\n> performance degration by enabling locale is not too small.\n\nRed Hat built their RPMs with locale enabled, and there is a\nsignificant performance hit. Implementing NCHAR would be a better\nsolution, since the user can choose whether to use SQL_TEXT or the\nlocale-specific character set at run time...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 01 Sep 1999 02:55:48 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Implications of multi-byte support in a distribution"
},
{
"msg_contents": "On Wed, 1 Sep 1999, Thomas Lockhart wrote:\n\n> Date: Wed, 01 Sep 1999 02:55:48 +0000\n> From: Thomas Lockhart <[email protected]>\n> To: [email protected]\n> Cc: Oleg Bartunov <[email protected]>, Oliver Elphick <[email protected]>,\n> [email protected], [email protected]\n> Subject: Re: [HACKERS] Implications of multi-byte support in a distribution\n> \n> > That shouldn't be too difficult, if we have an encoding infomation\n> > with each text column or literal. Maybe now is the time to introuce\n> > NCHAR?\n\nYes, postgres after 6.5 and especially recent win becomes very popular\nand additional performance hit would be very in time. Does implementing\nof NCHAR only could solve all problem with text, varchar etc ?\n\n> \n> I've been waiting for a go-ahead from folks who would use it. imho the\n> way to do it is to use Postgres' type system to implement it, rather\n> than, for example, encoding \"type\" information into each string. We\n> can also define a \"default encoding\" for each database as a new column\n> in pg_database...\n\ngo-ahead, Tom :-) I would use it.\n\n\n> \n> > BTW, it is interesting that people does not hesitate to enable\n> > with-locale option even if they only use ASCII. I guess the\n> > performance degration by enabling locale is not too small.\n> \n> Red Hat built their RPMs with locale enabled, and there is a\n> significant performance hit. Implementing NCHAR would be a better\n> solution, since the user can choose whether to use SQL_TEXT or the\n> locale-specific character set at run time...\n> \n> - Thomas\n> \n> -- \n> Thomas Lockhart\t\t\t\[email protected]\n> South Pasadena, California\n> \n> ************\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Wed, 1 Sep 1999 09:48:59 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Implications of multi-byte support in a distribution"
},
{
"msg_contents": ">>>>> \"TL\" == Thomas Lockhart <[email protected]> writes:\n\n >> That shouldn't be too difficult, if we have an encoding\n >> infomation with each text column or literal. Maybe now is the\n >> time to introuce NCHAR?\n\n TL> I've been waiting for a go-ahead from folks who would use\n TL> it. imho the way to do it is to use Postgres' type system to\n TL> implement it, rather than, for example, encoding \"type\"\n TL> information into each string. We can also define a \"default\n TL> encoding\" for each database as a new column in pg_database...\n\nWhat about sorting? Would it be possible to solve it in similar way?\nIf I'm not mistaken, there is currently no good way to use two different\nkinds of sorting for one postmaster instance?\n\nMilan Zamazal\n",
"msg_date": "01 Sep 1999 13:12:49 +0200",
"msg_from": "Milan Zamazal <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Implications of multi-byte support in a distribution"
},
{
"msg_contents": ">>>>> \"TL\" == Thomas Lockhart <[email protected]> writes:\n\n >> That shouldn't be too difficult, if we have an encoding\n >> infomation with each text column or literal. Maybe now is the\n >> time to introuce NCHAR?\n\n TL> I've been waiting for a go-ahead from folks who would use\n TL> it. imho the way to do it is to use Postgres' type system to\n TL> implement it, rather than, for example, encoding \"type\"\n TL> information into each string. We can also define a \"default\n TL> encoding\" for each database as a new column in pg_database...\n\nWhat about sorting? Would it be possible to solve it in similar way?\nIf I'm not mistaken, there is currently no good way to use two different\nkinds of sorting for one postmaster instance?\n\nMilan Zamazal\n",
"msg_date": "01 Sep 1999 13:17:45 +0200",
"msg_from": "Milan Zamazal <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Implications of multi-byte support in a distribution"
},
{
"msg_contents": "> >> That shouldn't be too difficult, if we have an encoding\n> >> infomation with each text column or literal. Maybe now is the\n> >> time to introuce NCHAR?\n> TL> I've been waiting for a go-ahead from folks who would use\n> TL> it. imho the way to do it is to use Postgres' type system to\n> TL> implement it, rather than, for example, encoding \"type\"\n> TL> information into each string. We can also define a \"default\n> TL> encoding\" for each database as a new column in pg_database...\n> What about sorting? Would it be possible to solve it in similar way?\n> If I'm not mistaken, there is currently no good way to use two different\n> kinds of sorting for one postmaster instance?\n\nEach encoding/character set can behave however you want. You can reuse\ncollation and sorting code from another character set, or define a\nunique one.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 02 Sep 1999 05:25:01 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Implications of multi-byte support in a distribution"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> \n> > >> That shouldn't be too difficult, if we have an encoding\n> > >> infomation with each text column or literal. Maybe now is the\n> > >> time to introuce NCHAR?\n> > TL> I've been waiting for a go-ahead from folks who would use\n> > TL> it. imho the way to do it is to use Postgres' type system to\n> > TL> implement it, rather than, for example, encoding \"type\"\n> > TL> information into each string. We can also define a \"default\n> > TL> encoding\" for each database as a new column in pg_database...\n> > What about sorting? Would it be possible to solve it in similar way?\n> > If I'm not mistaken, there is currently no good way to use two different\n> > kinds of sorting for one postmaster instance?\n> \n> Each encoding/character set can behave however you want. You can reuse\n> collation and sorting code from another character set, or define a\n> unique one.\n\nIs it really inside one postmaster instance ?\n\nIf so, then is the character encoding defined at the create table /\ncreate index \nprocess (maybe even separately for each field ?) or can I specify it\nwhen sort'ing ?\n\n-----------------\nHannu\n",
"msg_date": "Thu, 02 Sep 1999 09:52:27 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Implications of multi-byte support in a distribution"
},
{
"msg_contents": "> > Each encoding/character set can behave however you want. You can reuse\n> > collation and sorting code from another character set, or define a\n> > unique one.\n> Is it really inside one postmaster instance ?\n> If so, then is the character encoding defined at the create table /\n> create index process (maybe even separately for each field ?) or can I \n> specify it when sort'ing ?\n\nYes, yes, and yes ;)\n\nI would propose that we implement the explicit collation features of\nSQL92 using implicit type conversion. So if you want to use a\ndifferent sorting order on a *compatible* character set, then (looking\nup in Date and Darwen for the syntax...):\n\n 'test string' COLLATE CASE_INSENSITIVITY\n\nbecomes internally\n\n case_insensitivity('test string'::text)\n\nand\n\n c1 < c2 COLLATE CASE_INSENSITIVITY\n\nbecomes\n\n case_insensitivity(c1) < case_insensitivity(c2)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 02 Sep 1999 15:03:00 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Implications of multi-byte support in a distribution"
},
{
"msg_contents": "> > > Each encoding/character set can behave however you want. You can reuse\n> > > collation and sorting code from another character set, or define a\n> > > unique one.\n> > Is it really inside one postmaster instance ?\n> > If so, then is the character encoding defined at the create table /\n> > create index process (maybe even separately for each field ?) or can I \n> > specify it when sort'ing ?\n> \n> Yes, yes, and yes ;)\n\nBut we can't avoid calling strcoll() and some other code surrounded\nby #ifdef LOCALE? I think what he actually wants is to define his own\ncollation *and* not to use locale if the column is ASCII only.\n\n> I would propose that we implement the explicit collation features of\n> SQL92 using implicit type conversion. So if you want to use a\n> different sorting order on a *compatible* character set, then (looking\n> up in Date and Darwen for the syntax...):\n> \n> 'test string' COLLATE CASE_INSENSITIVITY\n> \n> becomes internally\n> \n> case_insensitivity('test string'::text)\n> \n> and\n> \n> c1 < c2 COLLATE CASE_INSENSITIVITY\n> \n> becomes\n> \n> case_insensitivity(c1) < case_insensitivity(c2)\n\nThis idea seems great and elegant. Ok, what about throwing away #ifdef\nLOCALE? The same thing can be obtained by defining a special collation\nLOCALE_AWARE. This seems much more consistent to me. Or even better,\nwe could explicitly have a predefined COLLATION for each language (these\ncan be automatically generated from existing locale data). This would\navoid some platform-specific locale problems.\n---\nTatsuo Ishii\n",
"msg_date": "Fri, 03 Sep 1999 09:55:17 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Implications of multi-byte support in a distribution "
},
{
"msg_contents": "> But we can't avoid calling strcoll() and some other codes surrounded\n> by #ifdef LOCALE? I think he actually wants is to define his own\n> collation *and* not to use locale if the column is ASCII only.\n\nRight. But there would be a fundamental character type which is *not*\nlocale-aware, and there is another type (perhaps/probably NCHAR?)\nwhich is...\n\n> Ok, what about throwing away #ifdef\n> LOCALE? Same thing can be obtained by defining a special callation\n> LOCALE_AWARE.\n\nOr moving the locale-aware stuff to a formal NCHAR implementation.\nistm (and to Date and Darwen ;) that there is a tighter relationship\nbetween collations, character repertoires, and character sets than\nmight be inferred from the SQL92-defined capabilities.\n\n> This seems much more consistent for me. Or even better,\n> we could explicitly have predefined COLLATION for each language (these\n> can be automatically generated from existing locale data). This would\n> avoid some platform specific locale problems.\n\nRight. We may already have some of this with the \"implicit type\ncoercion\" conventions I introduced in the v6.4 release.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 03 Sep 1999 01:45:53 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Implications of multi-byte support in a distribution"
}
] |
[
{
"msg_contents": "Has anyone seen a problem with postgresql-6.5.1 leaking file descriptors? I\nam not sure what exact activity causes the problem, but e.g. using pgaccess\nto inspect the database causes the following:\n\ncr@photox% ps ax|grep postgres\n 425 ?? Ss 1:58.01 /usr/local/pgsql/bin/postmaster -i -S -o -F (postgres\n78404 ?? I 0:00.96 /usr/local/pgsql/bin/postgres cr 127.0.0.1 cr idle\n\ncr@photox% fstat -p 78404\nUSER CMD PID FD MOUNT INUM MODE SZ|DV R/W\npgsql postgres 78404 root / 2 drwxr-xr-x 512 r\npgsql postgres 78404 wd /usr 389050 drwx------ 4096 r\npgsql postgres 78404 text /usr 334856 -r-xr-xr-x 1050936 r\npgsql postgres 78404 0 / 967 crw-rw-rw- null rw\npgsql postgres 78404 1 / 967 crw-rw-rw- null rw\npgsql postgres 78404 2 / 967 crw-rw-rw- null rw\npgsql postgres 78404 3 /usr 366283 -rw------- 245760 rw\npgsql postgres 78404 4 /usr 389846 -rw------- 24576 rw\npgsql postgres 78404 5* internet stream tcp ca0ba960\npgsql postgres 78404 6 /usr 389850 -rw------- 139264 rw\npgsql postgres 78404 7 /usr 389856 -rw------- 8192 rw\npgsql postgres 78404 8 /usr 389841 -rw------- 16384 rw\npgsql postgres 78404 9 /usr 389854 -rw------- 8192 rw\npgsql postgres 78404 10 /usr 389855 -rw------- 16384 rw\npgsql postgres 78404 11 /usr 389830 -rw------- 65536 rw\npgsql postgres 78404 12 /usr 389847 -rw------- 147456 rw\npgsql postgres 78404 13 /usr 389844 -rw------- 40960 rw\npgsql postgres 78404 14 /usr 389845 -rw------- 16384 rw\npgsql postgres 78404 15 /usr 366236 -rw------- 8192 rw\npgsql postgres 78404 16 /usr 366281 -rw------- 8192 rw\npgsql postgres 78404 17 /usr 389823 -rw------- 24576 rw\npgsql postgres 78404 18 /usr 389848 -rw------- 385024 rw\npgsql postgres 78404 19 /usr 389817 -rw------- 24576 rw\npgsql postgres 78404 20 /usr 389815 -rw------- 16384 rw\npgsql postgres 78404 21 /usr 389857 -rw------- 8192 rw\npgsql postgres 78404 22 /usr 389829 -rw------- 172032 rw\npgsql postgres 78404 23 /usr 389828 -rw------- 40960 rw\npgsql postgres 78404 24 /usr 
389919 -rw------- 0 rw\npgsql postgres 78404 25 /usr 366280 -rw------- 8192 rw\npgsql postgres 78404 26 /usr 389821 -rw------- 32768 rw\npgsql postgres 78404 27 /usr 389827 -rw------- 139264 rw\npgsql postgres 78404 28 /usr 389826 -rw------- 57344 rw\npgsql postgres 78404 29 /usr 390476 -rw------- 8192 rw\npgsql postgres 78404 30 /usr 390488 -rw------- 8192 rw\npgsql postgres 78404 31 /usr 390176 -rw------- 8192 rw\npgsql postgres 78404 32 /usr 390547 -rw------- 8192 rw\npgsql postgres 78404 33 /usr 389913 -rw------- 8192 rw\npgsql postgres 78404 34 /usr 389929 -rw------- 8192 rw\npgsql postgres 78404 35 /usr 389853 -rw------- 16384 rw\npgsql postgres 78404 36 /usr 389852 -rw------- 16384 rw\npgsql postgres 78404 37 /usr 389819 -rw------- 8192 rw\npgsql postgres 78404 38 /usr 389818 -rw------- 16384 rw\npgsql postgres 78404 39 /usr 390217 -rw------- 8192 rw\npgsql postgres 78404 40 /usr 390360 -rw------- 8192 rw\npgsql postgres 78404 41 /usr 390361 -rw------- 16384 rw\npgsql postgres 78404 42 /usr 390379 -rw------- 32768 rw\npgsql postgres 78404 43 /usr 390218 -rw------- 16384 rw\npgsql postgres 78404 44 /usr 390234 -rw------- 16384 rw\npgsql postgres 78404 45 /usr 390255 -rw------- 16384 rw\npgsql postgres 78404 46 /usr 390276 -rw------- 16384 rw\npgsql postgres 78404 47 /usr 390297 -rw------- 32768 rw\npgsql postgres 78404 48 /usr 389678 -rw------- 32768 rw\npgsql postgres 78404 49 /usr 389823 -rw------- 24576 rw\npgsql postgres 78404 50 /usr 389856 -rw------- 8192 rw\npgsql postgres 78404 51 /usr 389841 -rw------- 16384 rw\npgsql postgres 78404 52 /usr 389854 -rw------- 8192 rw\npgsql postgres 78404 53 /usr 389855 -rw------- 16384 rw\npgsql postgres 78404 54 /usr 389830 -rw------- 65536 rw\npgsql postgres 78404 55 /usr 389848 -rw------- 385024 rw\npgsql postgres 78404 56 /usr 389815 -rw------- 16384 rw\npgsql postgres 78404 57 /usr 389857 -rw------- 8192 rw\npgsql postgres 78404 58 /usr 366281 -rw------- 8192 rw\npgsql postgres 78404 59 /usr 389828 -rw------- 
40960 rw\npgsql postgres 78404 60 /usr 389929 -rw------- 8192 rw\npgsql postgres 78404 61 /usr 389853 -rw------- 16384 rw\npgsql postgres 78404 62 /usr 389852 -rw------- 16384 rw\npgsql postgres 78404 63 /usr 389819 -rw------- 8192 rw\npgsql postgres 78404 64 /usr 389818 -rw------- 16384 rw\npgsql postgres 78404 65 /usr 390360 -rw------- 8192 rw\npgsql postgres 78404 66 /usr 390361 -rw------- 16384 rw\npgsql postgres 78404 67 /usr 390379 -rw------- 32768 rw\npgsql postgres 78404 68 /usr 389832 -rw------- 0 rw\npgsql postgres 78404 69 /usr 389832 -rw------- 0 rw\npgsql postgres 78404 70 /usr 389832 -rw------- 0 rw\npgsql postgres 78404 71 /usr 389832 -rw------- 0 rw\npgsql postgres 78404 72 /usr 389832 -rw------- 0 rw\npgsql postgres 78404 73 /usr 389832 -rw------- 0 rw\npgsql postgres 78404 74 /usr 389832 -rw------- 0 rw\npgsql postgres 78404 75 /usr 389832 -rw------- 0 rw\npgsql postgres 78404 76 /usr 389832 -rw------- 0 rw\npgsql postgres 78404 77 /usr 389832 -rw------- 0 rw\npgsql postgres 78404 78 /usr 389832 -rw------- 0 rw\npgsql postgres 78404 79 /usr 389832 -rw------- 0 rw\npgsql postgres 78404 80 /usr 389832 -rw------- 0 rw\npgsql postgres 78404 81 /usr 389832 -rw------- 0 rw\npgsql postgres 78404 82 /usr 389832 -rw------- 0 rw\npgsql postgres 78404 83 /usr 389832 -rw------- 0 rw\npgsql postgres 78404 84 /usr 389832 -rw------- 0 rw\npgsql postgres 78404 85 /usr 389929 -rw------- 8192 rw\npgsql postgres 78404 86 /usr 389856 -rw------- 8192 rw\npgsql postgres 78404 87 /usr 389841 -rw------- 16384 rw\npgsql postgres 78404 88 /usr 389854 -rw------- 8192 rw\npgsql postgres 78404 89 /usr 389855 -rw------- 16384 rw\npgsql postgres 78404 90 /usr 389830 -rw------- 65536 rw\npgsql postgres 78404 91 /usr 389848 -rw------- 385024 rw\npgsql postgres 78404 92 /usr 389815 -rw------- 16384 rw\npgsql postgres 78404 93 /usr 366281 -rw------- 8192 rw\npgsql postgres 78404 94 /usr 389828 -rw------- 40960 rw\npgsql postgres 78404 95 /usr 389853 -rw------- 16384 rw\npgsql 
postgres 78404 96 /usr 389852 -rw------- 16384 rw\npgsql postgres 78404 97 /usr 389819 -rw------- 8192 rw\npgsql postgres 78404 98 /usr 389818 -rw------- 16384 rw\npgsql postgres 78404 99 /usr 390360 -rw------- 8192 rw\npgsql postgres 78404 100 /usr 390361 -rw------- 16384 rw\npgsql postgres 78404 101 /usr 390379 -rw------- 32768 rw\npgsql postgres 78404 102 /usr 389913 -rw------- 8192 rw\npgsql postgres 78404 103 /usr 389832 -rw------- 0 rw\n\n\ncr@photox% foreach i (389855 389854 389848 389841 389830 390360)\n? echo ${i}: `ls -i /usr/local/pgsql/data/base/cr | grep $i`\n? echo ${i}: `fstat -p 78404 |grep $i | wc -l`\n? end\n389855: 389855 pg_amop\n389855: 3\n389854: 389854 pg_amproc\n389854: 3\n389848: 389848 pg_attribute_relid_attnam_index\n389848: 3\n389841: 389841 pg_index\n389841: 3\n389830: 389830 pg_operator\n389830: 3\n390360: 390360 users\n390360: 3\n\n\nWhat I see is that the backend opens many of the files associated with tables\n(both user and system) many times. It is not so bad in this example with\npgaccess, but for long running processes I have backends with each table file\nopen dozens of times, consuming hundreds of file descriptors per hour of\nrun-time.\n\nThis is on FreeBSD 3.2-STABLE, using the 6.5.1 release of postgresql from the\nports collection.\n\nAnyway, is it a known problem? Aside from being careful to not use a\nparticular copy of the backend too long, are there any fixes?\n\nThanks,\n\nCyrus Rahman\n",
"msg_date": "Mon, 30 Aug 1999 18:03:01 -0400 (EDT)",
"msg_from": "Cyrus Rahman <[email protected]>",
"msg_from_op": true,
"msg_subject": "File descriptor leakage?"
}
] |
[
{
"msg_contents": "Has anyone seen a problem with postgresql-6.5.1 leaking file descriptors? I\nam not sure what exact activity causes the problem, but e.g. using pgaccess\nto inspect the database causes the following:\n\ncr@photox% ps ax|grep postgres\n 425 ?? Ss 1:58.01 /usr/local/pgsql/bin/postmaster -i -S -o -F (postgres\n78404 ?? I 0:00.96 /usr/local/pgsql/bin/postgres cr 127.0.0.1 cr idle\n\ncr@photox% fstat -p 78404\nUSER CMD PID FD MOUNT INUM MODE SZ|DV R/W\npgsql postgres 78404 root / 2 drwxr-xr-x 512 r\npgsql postgres 78404 wd /usr 389050 drwx------ 4096 r\npgsql postgres 78404 text /usr 334856 -r-xr-xr-x 1050936 r\npgsql postgres 78404 0 / 967 crw-rw-rw- null rw\npgsql postgres 78404 1 / 967 crw-rw-rw- null rw\npgsql postgres 78404 2 / 967 crw-rw-rw- null rw\npgsql postgres 78404 3 /usr 366283 -rw------- 245760 rw\npgsql postgres 78404 4 /usr 389846 -rw------- 24576 rw\npgsql postgres 78404 5* internet stream tcp ca0ba960\npgsql postgres 78404 6 /usr 389850 -rw------- 139264 rw\npgsql postgres 78404 7 /usr 389856 -rw------- 8192 rw\npgsql postgres 78404 8 /usr 389841 -rw------- 16384 rw\npgsql postgres 78404 9 /usr 389854 -rw------- 8192 rw\npgsql postgres 78404 10 /usr 389855 -rw------- 16384 rw\npgsql postgres 78404 11 /usr 389830 -rw------- 65536 rw\npgsql postgres 78404 12 /usr 389847 -rw------- 147456 rw\npgsql postgres 78404 13 /usr 389844 -rw------- 40960 rw\npgsql postgres 78404 14 /usr 389845 -rw------- 16384 rw\npgsql postgres 78404 15 /usr 366236 -rw------- 8192 rw\npgsql postgres 78404 16 /usr 366281 -rw------- 8192 rw\npgsql postgres 78404 17 /usr 389823 -rw------- 24576 rw\npgsql postgres 78404 18 /usr 389848 -rw------- 385024 rw\npgsql postgres 78404 19 /usr 389817 -rw------- 24576 rw\npgsql postgres 78404 20 /usr 389815 -rw------- 16384 rw\npgsql postgres 78404 21 /usr 389857 -rw------- 8192 rw\npgsql postgres 78404 22 /usr 389829 -rw------- 172032 rw\npgsql postgres 78404 23 /usr 389828 -rw------- 40960 rw\npgsql postgres 78404 24 /usr 
389919 -rw------- 0 rw\npgsql postgres 78404 25 /usr 366280 -rw------- 8192 rw\npgsql postgres 78404 26 /usr 389821 -rw------- 32768 rw\npgsql postgres 78404 27 /usr 389827 -rw------- 139264 rw\npgsql postgres 78404 28 /usr 389826 -rw------- 57344 rw\npgsql postgres 78404 29 /usr 390476 -rw------- 8192 rw\npgsql postgres 78404 30 /usr 390488 -rw------- 8192 rw\npgsql postgres 78404 31 /usr 390176 -rw------- 8192 rw\npgsql postgres 78404 32 /usr 390547 -rw------- 8192 rw\npgsql postgres 78404 33 /usr 389913 -rw------- 8192 rw\npgsql postgres 78404 34 /usr 389929 -rw------- 8192 rw\npgsql postgres 78404 35 /usr 389853 -rw------- 16384 rw\npgsql postgres 78404 36 /usr 389852 -rw------- 16384 rw\npgsql postgres 78404 37 /usr 389819 -rw------- 8192 rw\npgsql postgres 78404 38 /usr 389818 -rw------- 16384 rw\npgsql postgres 78404 39 /usr 390217 -rw------- 8192 rw\npgsql postgres 78404 40 /usr 390360 -rw------- 8192 rw\npgsql postgres 78404 41 /usr 390361 -rw------- 16384 rw\npgsql postgres 78404 42 /usr 390379 -rw------- 32768 rw\npgsql postgres 78404 43 /usr 390218 -rw------- 16384 rw\npgsql postgres 78404 44 /usr 390234 -rw------- 16384 rw\npgsql postgres 78404 45 /usr 390255 -rw------- 16384 rw\npgsql postgres 78404 46 /usr 390276 -rw------- 16384 rw\npgsql postgres 78404 47 /usr 390297 -rw------- 32768 rw\npgsql postgres 78404 48 /usr 389678 -rw------- 32768 rw\npgsql postgres 78404 49 /usr 389823 -rw------- 24576 rw\npgsql postgres 78404 50 /usr 389856 -rw------- 8192 rw\npgsql postgres 78404 51 /usr 389841 -rw------- 16384 rw\npgsql postgres 78404 52 /usr 389854 -rw------- 8192 rw\npgsql postgres 78404 53 /usr 389855 -rw------- 16384 rw\npgsql postgres 78404 54 /usr 389830 -rw------- 65536 rw\npgsql postgres 78404 55 /usr 389848 -rw------- 385024 rw\npgsql postgres 78404 56 /usr 389815 -rw------- 16384 rw\npgsql postgres 78404 57 /usr 389857 -rw------- 8192 rw\npgsql postgres 78404 58 /usr 366281 -rw------- 8192 rw\npgsql postgres 78404 59 /usr 389828 -rw------- 
40960 rw\npgsql postgres 78404 60 /usr 389929 -rw------- 8192 rw\npgsql postgres 78404 61 /usr 389853 -rw------- 16384 rw\npgsql postgres 78404 62 /usr 389852 -rw------- 16384 rw\npgsql postgres 78404 63 /usr 389819 -rw------- 8192 rw\npgsql postgres 78404 64 /usr 389818 -rw------- 16384 rw\npgsql postgres 78404 65 /usr 390360 -rw------- 8192 rw\npgsql postgres 78404 66 /usr 390361 -rw------- 16384 rw\npgsql postgres 78404 67 /usr 390379 -rw------- 32768 rw\npgsql postgres 78404 68 /usr 389832 -rw------- 0 rw\npgsql postgres 78404 69 /usr 389832 -rw------- 0 rw\npgsql postgres 78404 70 /usr 389832 -rw------- 0 rw\npgsql postgres 78404 71 /usr 389832 -rw------- 0 rw\npgsql postgres 78404 72 /usr 389832 -rw------- 0 rw\npgsql postgres 78404 73 /usr 389832 -rw------- 0 rw\npgsql postgres 78404 74 /usr 389832 -rw------- 0 rw\npgsql postgres 78404 75 /usr 389832 -rw------- 0 rw\npgsql postgres 78404 76 /usr 389832 -rw------- 0 rw\npgsql postgres 78404 77 /usr 389832 -rw------- 0 rw\npgsql postgres 78404 78 /usr 389832 -rw------- 0 rw\npgsql postgres 78404 79 /usr 389832 -rw------- 0 rw\npgsql postgres 78404 80 /usr 389832 -rw------- 0 rw\npgsql postgres 78404 81 /usr 389832 -rw------- 0 rw\npgsql postgres 78404 82 /usr 389832 -rw------- 0 rw\npgsql postgres 78404 83 /usr 389832 -rw------- 0 rw\npgsql postgres 78404 84 /usr 389832 -rw------- 0 rw\npgsql postgres 78404 85 /usr 389929 -rw------- 8192 rw\npgsql postgres 78404 86 /usr 389856 -rw------- 8192 rw\npgsql postgres 78404 87 /usr 389841 -rw------- 16384 rw\npgsql postgres 78404 88 /usr 389854 -rw------- 8192 rw\npgsql postgres 78404 89 /usr 389855 -rw------- 16384 rw\npgsql postgres 78404 90 /usr 389830 -rw------- 65536 rw\npgsql postgres 78404 91 /usr 389848 -rw------- 385024 rw\npgsql postgres 78404 92 /usr 389815 -rw------- 16384 rw\npgsql postgres 78404 93 /usr 366281 -rw------- 8192 rw\npgsql postgres 78404 94 /usr 389828 -rw------- 40960 rw\npgsql postgres 78404 95 /usr 389853 -rw------- 16384 rw\npgsql 
postgres 78404 96 /usr 389852 -rw------- 16384 rw\npgsql postgres 78404 97 /usr 389819 -rw------- 8192 rw\npgsql postgres 78404 98 /usr 389818 -rw------- 16384 rw\npgsql postgres 78404 99 /usr 390360 -rw------- 8192 rw\npgsql postgres 78404 100 /usr 390361 -rw------- 16384 rw\npgsql postgres 78404 101 /usr 390379 -rw------- 32768 rw\npgsql postgres 78404 102 /usr 389913 -rw------- 8192 rw\npgsql postgres 78404 103 /usr 389832 -rw------- 0 rw\n\n\ncr@photox% foreach i (389855 389854 389848 389841 389830 390360)\n? echo ${i}: `ls -i /usr/local/pgsql/data/base/cr | grep $i`\n? echo ${i}: `fstat -p 78404 |grep $i | wc -l`\n? end\n389855: 389855 pg_amop\n389855: 3\n389854: 389854 pg_amproc\n389854: 3\n389848: 389848 pg_attribute_relid_attnam_index\n389848: 3\n389841: 389841 pg_index\n389841: 3\n389830: 389830 pg_operator\n389830: 3\n390360: 390360 users\n390360: 3\n\n\nWhat I see is that the backend opens many of the files associated with tables\n(both user and system) many times. It is not so bad in this example with\npgaccess, but for long running processes I have backends with each table file\nopen dozens of times, consuming hundreds of file descriptors per hour of\nrun-time.\n\nThis is on FreeBSD 3.2-STABLE, using the 6.5.1 release of postgresql from the\nports collection.\n\nAnyway, is it a known problem? Aside from being careful to not use a\nparticular instance of the backend too long, are there any fixes?\n\nThanks,\n\nCyrus Rahman\n",
"msg_date": "Mon, 30 Aug 1999 20:03:15 -0400 (EDT)",
"msg_from": "Cyrus Rahman <[email protected]>",
"msg_from_op": true,
"msg_subject": "File descriptor leakage?"
},
{
"msg_contents": "Cyrus Rahman <[email protected]> writes:\n> Has anyone seen a problem with postgresql-6.5.1 leaking file\n> descriptors?\n\nThat's interesting, I thought I'd fixed all the file-descriptor-leakage\nproblems. Guess not :-(\n\nIn addition to the files you list, there seem to be a whole bunch of\ndescriptors for 389832; can you find out what that is? (Check in\nthe top-level data directory as well as data/base/xxx.)\n\nCan you generate a repeatable script that causes a particular file\nto be opened more than once? This is going to be tough to track\ndown without a test case...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 30 Aug 1999 21:50:25 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] File descriptor leakage? "
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Tom Lane\n> Sent: Tuesday, August 31, 1999 10:50 AM\n> To: Cyrus Rahman\n> Cc: [email protected]\n> Subject: Re: [HACKERS] File descriptor leakage? \n> \n> \n> Cyrus Rahman <[email protected]> writes:\n> > Has anyone seen a problem with postgresql-6.5.1 leaking file\n> > descriptors?\n> \n> That's interesting, I thought I'd fixed all the file-descriptor-leakage\n> problems. Guess not :-(\n> \n\nThe following may be one of the causes.\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Vadim Mikheev\n> Sent: Monday, June 07, 1999 7:49 PM\n> To: Hiroshi Inoue\n> Cc: The Hermit Hacker; [email protected]\n> Subject: Re: [HACKERS] postgresql-v6.5beta2.tar.gz ...\n>\n\n[snip] \n\n> \n> 1. bug in cache invalidation code: when we invalidate relcache\n> we forget to free MdfdVec in md.c!\n> \n> Vacuum invalidates a relation tuple in pg_class and concurrent\n> xactions invalidate corresponding relcache entry, but don't\n> free MdfdVec and so allocate new one for the same relation\n> more and more. Each MdfdVed requires own fd.c:Vfd entry -> below\n> \n> 2. fd.c:pg_nofile()->sysconf(_SC_OPEN_MAX) returns in FreeBSD \n> near total number of files that can be opened in system\n> (by _all_ users/procs). With total number of opened files\n> ~ 2000 I can run your test with 10-20 simultaneous\n> xactions for very short time, -:)\n> \n> Should we limit fd.c:no_files to ~ 256?\n> This is port-specific, of course...\n\nI posted a patch about a month ago ([HACKERS] double opens).\nBut yutaka tanida [[email protected]] reported a bug caused\nby the patch. It turned out to be caused by calling smgrclose() after\nsmgrclose()/smgrunlink() for the same relation.\n\nIt seems my old patch has not been applied yet.\nHere is a new patch.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n*** utils/cache/relcache.c.orig\tMon Jul 26 12:45:15 1999\n--- utils/cache/relcache.c\tMon Aug 30 15:37:10 1999\n***************\n*** 1259,1264 ****\n--- 1259,1265 ----\n \n \t\toldcxt = MemoryContextSwitchTo((MemoryContext) CacheCxt);\n \n+ \t\tsmgrclose(DEFAULT_SMGR, relation);\n \t\tRelationCacheDelete(relation);\n \n \t\tFreeTupleDesc(relation->rd_att);\n*** storage/smgr/md.c.orig\tMon Jul 26 12:45:09 1999\n--- storage/smgr/md.c\tTue Aug 31 13:44:28 1999\n***************\n*** 190,195 ****\n--- 190,197 ----\n \n \t/* finally, clean out the mdfd vector */\n \tfd = RelationGetFile(reln);\n+ \tif (fd < 0)\n+ \t\treturn SM_SUCCESS;\n \tMd_fdvec[fd].mdfd_flags = (uint16) 0;\n \n \toldcxt = MemoryContextSwitchTo(MdCxt);\n***************\n*** 211,216 ****\n--- 213,219 ----\n \tMemoryContextSwitchTo(oldcxt);\n \n \t_fdvec_free(fd);\n+ \treln->rd_fd = -1;\n \n \treturn SM_SUCCESS;\n }\n***************\n*** 319,324 ****\n--- 322,329 ----\n \tMemoryContext oldcxt;\n \n \tfd = RelationGetFile(reln);\n+ \tif (fd < 0)\n+ \t\treturn SM_SUCCESS;\n \n \toldcxt = MemoryContextSwitchTo(MdCxt);\n #ifndef LET_OS_MANAGE_FILESIZE\n***************\n*** 370,375 ****\n--- 375,381 ----\n \tMemoryContextSwitchTo(oldcxt);\n \n \t_fdvec_free(fd);\n+ \treln->rd_fd = -1;\n \n \treturn SM_SUCCESS;\n }\n***************\n*** 895,900 ****\n--- 901,907 ----\n {\n \n \tAssert(Md_Free < 0 || Md_fdvec[Md_Free].mdfd_flags == MDFD_FREE);\n+ \tAssert(Md_fdvec[fdvec].mdfd_flags != MDFD_FREE);\n \tMd_fdvec[fdvec].mdfd_nextFree = Md_Free;\n \tMd_fdvec[fdvec].mdfd_flags = MDFD_FREE;\n \tMd_Free = fdvec;\n\n\n",
"msg_date": "Tue, 31 Aug 1999 13:49:32 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] File descriptor leakage? "
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> I posted a patch about a month ago([HACKERS] double opens).\n> But yutaka tanida [[email protected]] reported a bug caused \n> by the patch. I found it's because of calling smgrclose() after \n> smgrclose()/smgrunlink() for the same relation.\n> It seems my old patch has not been appiled yet.\n> Here is a new patch.\n\nI think we ought to hold up 6.5.2 long enough to cram this patch in, but\nI'm hesitant to stick it in the stable branch without some more testing.\nCyrus, can you try it and see if it fixes your problem?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 31 Aug 1999 09:51:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] File descriptor leakage? "
},
{
"msg_contents": "On Tue, 31 Aug 1999, Tom Lane wrote:\n\n> I think we ought to hold up 6.5.2 long enough to cram this patch in, but\n\nLet me know when you are ready then...the only one that I want to keep to\na relatively fixed date on (or as close to one as possible) are the minor\nreleases (6.5, 6.6, etc)...the minor-minor releases I have no problems\nwith shifting around as is required...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 31 Aug 1999 12:22:31 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] File descriptor leakage? "
}
] |
[
{
"msg_contents": "> Modified Files:\n> parse_node.h parse_oper.h\n> Remove bogus code in oper_exact --- if it didn't find an exact\n> match then it tried for a self-commutative operator with the reversed input\n> data types. This is pretty silly; there could never be such an operator,\n> except maybe in binary-compatible-type scenarios, and we have oper_inexact\n> for that. Besides which, the oprsanity regress test would complain about\n> such an operator. Remove nonfunctional code and simplify routine calling\n> convention accordingly.\n\nOoh! That code sounds familiar. What I was trying for was to cover\nthe case that, for example, (int4 < float4) was not implemented, but\nthat (float4 >= int4) was. If this is already handled elsewhere, or if\nthis goal is nonsensical, then cutting the defective code is the right\nthing. But if the code just needed repairing, we should put it back in\nand get it right next time...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 31 Aug 1999 05:11:06 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [COMMITTERS] pgsql/src/include/parser (parse_node.h parse_oper.h)"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> Remove bogus code in oper_exact --- if it didn't find an exact\n>> match then it tried for a self-commutative operator with the reversed input\n>> data types. This is pretty silly;\n\n> Ooh! That codes sounds familiar. What I was trying for was to cover\n> the case that, for example, (int4 < float4) was not implemented, but\n> that (float4 >= int4) was. If this is already handled elsewhere, or if\n> this goal is nonsensical, then cutting the defective code is the right\n> thing. But if the code just needed repairing, we should put it back in\n> and get it right next time...\n\nWell, what it was actually looking for was not a commuted operator but\nthe *same* operator name with the reversed data types; and then\ndemanding that this operator link to itself as its own commutator.\nI don't believe such a case can ever arise in practice --- it certainly\ndoes not now, since the opr_sanity regress test would complain if it\ndid.\n\nI don't see any really good way for operator lookup to substitute\ncommutative operators, since it has only an operator name and not (yet)\nany pg_operator entry to check the commutator link of. Surely you don't\nwant to hardwire in knowledge that, say, '<' and '>=' are likely to be\nnames of commutators.\n\nIn any case, failing to provide a full set of commutable comparison\noperators will hobble the optimizer, so an implementor of a new data\ntype would be pretty foolish not to provide both operators. So I don't\nthink it's worth providing code in operator lookup to handle this\nscenario.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 31 Aug 1999 09:46:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [COMMITTERS] pgsql/src/include/parser (parse_node.h\n\tparse_oper.h)"
}
] |
[
{
"msg_contents": "Hi,\n\nDaniel Horak wrote:\n\n> Hi,\n> \n> please add the file ipc.patch (patch for the cygipc library) into src/win32\n> directory and apply the patch for README.NT (readme.patch). I think it\n> should go into both the 6.5.2 and current trees.\n> \n> I have no reaction from the author of the cygipc library yet, so it will be\n> better to include the patch into the sources of PostgreSQL\n\nI propose an additional patch against cygipc.\n\nHiroshi Inoue ([email protected]) found another backend freezing problem.\nHe also found that semop() in cygipc can't decrement semaphore values\ncorrectly (only -1 is supported).\n\nI created the following patch to fix these issues.\n\n\nI'm sorry for my poor English.\n\n*** sem.c.orig_\tTue Aug 17 14:19:37 1999\n--- sem.c\tTue Aug 31 16:59:49 1999\n***************\n*** 204,210 ****\n {\n \tCloseHandle ( LHandle ) ;\n }\n! LHandle = CreateSemaphore(NULL, 0, 0x7FFFFFFF, LBuff) ;\n if( LHandle == NULL )\n {\n \tprintf( \"Creation de Semaphore \\\"Sem\\\" impossible\\n\" ) ;\n--- 204,210 ----\n {\n \tCloseHandle ( LHandle ) ;\n }\n! LHandle = CreateSemaphore(NULL, 0, 1, LBuff) ;\n if( LHandle == NULL )\n {\n \tprintf( \"Creation de Semaphore \\\"Sem\\\" impossible\\n\" ) ;\n***************\n*** 374,388 ****\n debug_printf(\"do_semop : return -EACCES\\n\");\n \t\t\tCYGWIN32_IPCNT_RETURN (-EACCES) ;\n \t\t }\n! \t\t ReleaseSemaphore(LHandle, sop->sem_op, &LVal) ;\n! \t \t shareadrsem->current_nb[id].current_nb[sop->sem_num] +=\n! \t\t\t\t\tsop->sem_op ;\n \t\t sem_deconnect() ;\n \t\t} else {\n \t\t if( sop->sem_flg == IPC_NOWAIT )\n \t\t {\n! \t\t\tLRet = WaitForSingleObject(LHandle, 0) ;\n! \t\t\tif( LRet == WAIT_TIMEOUT )\n \t\t\t{\n debug_printf(\"do_semop : return -EAGAIN\\n\");\n \t\t\t CYGWIN32_IPCNT_RETURN (-EAGAIN) ;\n--- 374,387 ----\n debug_printf(\"do_semop : return -EACCES\\n\");\n \t\t\tCYGWIN32_IPCNT_RETURN (-EACCES) ;\n \t\t }\n! \t shareadrsem->current_nb[id].current_nb[sop->sem_num] +=\n! \t\t\t\tsop->sem_op ;\n \t\t sem_deconnect() ;\n+ \t\t ReleaseSemaphore(LHandle, 1 , &LVal) ;\n \t\t} else {\n \t\t if( sop->sem_flg == IPC_NOWAIT )\n \t\t {\n! \t\t\tif( sop->sem_op + shareadrsem->current_nb[id].current_nb[sop->sem_num] <0 )\n \t\t\t{\n debug_printf(\"do_semop : return -EAGAIN\\n\");\n \t\t\t CYGWIN32_IPCNT_RETURN (-EAGAIN) ;\n***************\n*** 392,407 ****\n debug_printf(\"do_semop : return -EACCES\\n\");\n \t\t\t CYGWIN32_IPCNT_RETURN (-EACCES) ;\n \t\t\t}\n! \t \t\tshareadrsem->current_nb[id].current_nb[sop->sem_num] -= 1 ;\n \t\t\tsem_deconnect() ;\n \t\t } else {\n! \t\t\tLRet = WaitForSingleObject(LHandle, INFINITE) ;\n \t\t\tif (sem_connect() == 0)\n \t\t\t{\n debug_printf(\"do_semop : return -EACCES\\n\");\n \t\t\t CYGWIN32_IPCNT_RETURN (-EACCES) ;\n \t\t\t}\n! \t\t\t shareadrsem->current_nb[id].current_nb[sop->sem_num] -= 1 ;\n \t\t\t sem_deconnect() ;\n \t\t }\n \t\t}\n--- 391,408 ----\n debug_printf(\"do_semop : return -EACCES\\n\");\n \t\t\t CYGWIN32_IPCNT_RETURN (-EACCES) ;\n \t\t\t}\n! \t \t\tshareadrsem->current_nb[id].current_nb[sop->sem_num] += sop->sem_op;\n \t\t\tsem_deconnect() ;\n \t\t } else {\n! \t\t while(sop->sem_op + shareadrsem->current_nb[id].current_nb[sop->sem_num] <0)\n! \t\t\t\tLRet = WaitForSingleObject(LHandle, INFINITE) ;\n! \t\t \n \t\t\tif (sem_connect() == 0)\n \t\t\t{\n debug_printf(\"do_semop : return -EACCES\\n\");\n \t\t\t CYGWIN32_IPCNT_RETURN (-EACCES) ;\n \t\t\t}\n! \t\t\t shareadrsem->current_nb[id].current_nb[sop->sem_num] += sop->sem_op ;\n \t\t\t sem_deconnect() ;\n \t\t }\n \t\t}\n***************\n*** 452,458 ****\n \tchar LBuff[100] ;\n \tHANDLE LHandle ;\n \tlong LPrevious ;\n- \tint LIndex;\n \n debug_printf(\"semctl : semid=%X semnum=%X cmd=0x%02X arg=%p\\n\",semid,semnum,cmd,arg);\n \tif (semid < 0 || semnum < 0 || cmd < 0)\n--- 453,458 ----\n***************\n*** 585,606 ****\n \t\tif( LHandle != NULL )\n \t\t{\n \t\t if( arg.val > shareadrsem->current_nb[id].current_nb[semnum] )\n! \t\t {\n! \t\t\tReleaseSemaphore(LHandle,\n! \t\t\targ.val-shareadrsem->current_nb[id].current_nb[semnum],\n! \t\t\t&LPrevious) ;\n! \t\t }\n! \t\t else if (arg.val <\n! \t\t shareadrsem->current_nb[id].current_nb[semnum] )\n! \t\t {\n! \t\t\tfor( LIndex = arg.val;\n! \t\t\tLIndex < shareadrsem->current_nb[id].current_nb[semnum];\n! \t\t\tLIndex++ )\n! \t\t\t{\n! \t\t\t WaitForSingleObject(LHandle, 0) ;\n! \t\t\t}\n! \t\t }\n! \t shareadrsem->current_nb[id].current_nb[semnum] = arg.val ;\n \t\t}\n debug_printf(\"semctl : SETVAL : return 0\\n\");\n \t\tCYGWIN32_IPCNT_RETURN_DECONNECT (0);\n--- 585,592 ----\n \t\tif( LHandle != NULL )\n \t\t{\n \t\t if( arg.val > shareadrsem->current_nb[id].current_nb[semnum] )\n! \t\t\t\tReleaseSemaphore(LHandle,1,&LPrevious) ;\n! shareadrsem->current_nb[id].current_nb[semnum] = arg.val ;\n \t\t}\n debug_printf(\"semctl : SETVAL : return 0\\n\");\n \t\tCYGWIN32_IPCNT_RETURN_DECONNECT (0);\n\n\n\n--\nYutaka tanida / S34 Co., Ltd.\[email protected] (Office)\[email protected](Private, or if you *HATE* Microsoft Outlook)\n\n",
"msg_date": "Tue, 31 Aug 1999 17:38:59 +0900",
"msg_from": "yutaka tanida <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: IPC on win32 - additions for 6.5.2 and current trees"
},
{
"msg_contents": "NT folks, I assume this patch is no longer needed.\n\n\n> Hi,\n> \n> Daniel Horak wrote:\n> \n> > Hi,\n> > \n> > please add the file ipc.patch (patch for the cygipc library) into src/win32\n> > directory and apply the patch for README.NT (readme.patch). I think it\n> > should go into both the 6.5.2 and current trees.\n> > \n> > I have no reaction from the author of the cygipc library yet, so it will be\n> > better to include the patch into the sources of PostgreSQL\n> \n> I propose more patch against cygipc. \n> \n> Hiroshi Inoue ([email protected]) found another backend freezing problem.\n> He also found semop() in cygipc can't decrement semaphore value\n> correctly (Only -1 is supported).\n> \n> I create follwing patch fixes these issues.\n> \n> \n> I'm sorry for my poor English.\n> \n> *** sem.c.orig_\tTue Aug 17 14:19:37 1999\n> --- sem.c\tTue Aug 31 16:59:49 1999\n> ***************\n> *** 204,210 ****\n> {\n> \tCloseHandle ( LHandle ) ;\n> }\n> ! LHandle = CreateSemaphore(NULL, 0, 0x7FFFFFFF, LBuff) ;\n> if( LHandle == NULL )\n> {\n> \tprintf( \"Creation de Semaphore \\\"Sem\\\" impossible\\n\" ) ;\n> --- 204,210 ----\n> {\n> \tCloseHandle ( LHandle ) ;\n> }\n> ! LHandle = CreateSemaphore(NULL, 0, 1, LBuff) ;\n> if( LHandle == NULL )\n> {\n> \tprintf( \"Creation de Semaphore \\\"Sem\\\" impossible\\n\" ) ;\n> ***************\n> *** 374,388 ****\n> debug_printf(\"do_semop : return -EACCES\\n\");\n> \t\t\tCYGWIN32_IPCNT_RETURN (-EACCES) ;\n> \t\t }\n> ! \t\t ReleaseSemaphore(LHandle, sop->sem_op, &LVal) ;\n> ! \t \t shareadrsem->current_nb[id].current_nb[sop->sem_num] +=\n> ! \t\t\t\t\tsop->sem_op ;\n> \t\t sem_deconnect() ;\n> \t\t} else {\n> \t\t if( sop->sem_flg == IPC_NOWAIT )\n> \t\t {\n> ! \t\t\tLRet = WaitForSingleObject(LHandle, 0) ;\n> ! 
\t\t\tif( LRet == WAIT_TIMEOUT )\n> \t\t\t{\n> debug_printf(\"do_semop : return -EAGAIN\\n\");\n> \t\t\t CYGWIN32_IPCNT_RETURN (-EAGAIN) ;\n> --- 374,387 ----\n> debug_printf(\"do_semop : return -EACCES\\n\");\n> \t\t\tCYGWIN32_IPCNT_RETURN (-EACCES) ;\n> \t\t }\n> ! \t shareadrsem->current_nb[id].current_nb[sop->sem_num] +=\n> ! \t\t\t\tsop->sem_op ;\n> \t\t sem_deconnect() ;\n> + \t\t ReleaseSemaphore(LHandle, 1 , &LVal) ;\n> \t\t} else {\n> \t\t if( sop->sem_flg == IPC_NOWAIT )\n> \t\t {\n> ! \t\t\tif( sop->sem_op + shareadrsem->current_nb[id].current_nb[sop->sem_num] <0 )\n> \t\t\t{\n> debug_printf(\"do_semop : return -EAGAIN\\n\");\n> \t\t\t CYGWIN32_IPCNT_RETURN (-EAGAIN) ;\n> ***************\n> *** 392,407 ****\n> debug_printf(\"do_semop : return -EACCES\\n\");\n> \t\t\t CYGWIN32_IPCNT_RETURN (-EACCES) ;\n> \t\t\t}\n> ! \t \t\tshareadrsem->current_nb[id].current_nb[sop->sem_num] -= 1 ;\n> \t\t\tsem_deconnect() ;\n> \t\t } else {\n> ! \t\t\tLRet = WaitForSingleObject(LHandle, INFINITE) ;\n> \t\t\tif (sem_connect() == 0)\n> \t\t\t{\n> debug_printf(\"do_semop : return -EACCES\\n\");\n> \t\t\t CYGWIN32_IPCNT_RETURN (-EACCES) ;\n> \t\t\t}\n> ! \t\t\t shareadrsem->current_nb[id].current_nb[sop->sem_num] -= 1 ;\n> \t\t\t sem_deconnect() ;\n> \t\t }\n> \t\t}\n> --- 391,408 ----\n> debug_printf(\"do_semop : return -EACCES\\n\");\n> \t\t\t CYGWIN32_IPCNT_RETURN (-EACCES) ;\n> \t\t\t}\n> ! \t \t\tshareadrsem->current_nb[id].current_nb[sop->sem_num] += sop->sem_op;\n> \t\t\tsem_deconnect() ;\n> \t\t } else {\n> ! \t\t while(sop->sem_op + shareadrsem->current_nb[id].current_nb[sop->sem_num] <0)\n> ! \t\t\t\tLRet = WaitForSingleObject(LHandle, INFINITE) ;\n> ! \t\t \n> \t\t\tif (sem_connect() == 0)\n> \t\t\t{\n> debug_printf(\"do_semop : return -EACCES\\n\");\n> \t\t\t CYGWIN32_IPCNT_RETURN (-EACCES) ;\n> \t\t\t}\n> ! 
\t\t\t shareadrsem->current_nb[id].current_nb[sop->sem_num] += sop->sem_op ;\n> \t\t\t sem_deconnect() ;\n> \t\t }\n> \t\t}\n> ***************\n> *** 452,458 ****\n> \tchar LBuff[100] ;\n> \tHANDLE LHandle ;\n> \tlong LPrevious ;\n> - \tint LIndex;\n> \n> debug_printf(\"semctl : semid=%X semnum=%X cmd=0x%02X arg=%p\\n\",semid,semnum,cmd,arg);\n> \tif (semid < 0 || semnum < 0 || cmd < 0)\n> --- 453,458 ----\n> ***************\n> *** 585,606 ****\n> \t\tif( LHandle != NULL )\n> \t\t{\n> \t\t if( arg.val > shareadrsem->current_nb[id].current_nb[semnum] )\n> ! \t\t {\n> ! \t\t\tReleaseSemaphore(LHandle,\n> ! \t\t\targ.val-shareadrsem->current_nb[id].current_nb[semnum],\n> ! \t\t\t&LPrevious) ;\n> ! \t\t }\n> ! \t\t else if (arg.val <\n> ! \t\t shareadrsem->current_nb[id].current_nb[semnum] )\n> ! \t\t {\n> ! \t\t\tfor( LIndex = arg.val;\n> ! \t\t\tLIndex < shareadrsem->current_nb[id].current_nb[semnum];\n> ! \t\t\tLIndex++ )\n> ! \t\t\t{\n> ! \t\t\t WaitForSingleObject(LHandle, 0) ;\n> ! \t\t\t}\n> ! \t\t }\n> ! \t shareadrsem->current_nb[id].current_nb[semnum] = arg.val ;\n> \t\t}\n> debug_printf(\"semctl : SETVAL : return 0\\n\");\n> \t\tCYGWIN32_IPCNT_RETURN_DECONNECT (0);\n> --- 585,592 ----\n> \t\tif( LHandle != NULL )\n> \t\t{\n> \t\t if( arg.val > shareadrsem->current_nb[id].current_nb[semnum] )\n> ! \t\t\t\tReleaseSemaphore(LHandle,1,&LPrevious) ;\n> ! shareadrsem->current_nb[id].current_nb[semnum] = arg.val ;\n> \t\t}\n> debug_printf(\"semctl : SETVAL : return 0\\n\");\n> \t\tCYGWIN32_IPCNT_RETURN_DECONNECT (0);\n> \n> \n> \n> --\n> Yutaka tanida / S34 Co., Ltd.\n> [email protected] (Office)\n> [email protected](Private, or if you *HATE* Microsoft Outlook)\n> \n> \n> ************\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 27 Sep 1999 16:49:08 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: IPC on win32 - additions for 6.5.2 and current\n trees"
},
{
"msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n> Sent: Tuesday, September 28, 1999 5:49 AM\n> To: yutaka tanida\n> Cc: [email protected]; [email protected]\n> Subject: Re: [HACKERS] Re: IPC on win32 - additions for 6.5.2 and\n> current trees\n> \n> \n> NT folks, I assume this patch is no longer needed.\n>\n\nNo,his patch is needed.\nI have already committed a new patch which combines Yutaka's and\nmy patch to current tree under src/win32.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Tue, 28 Sep 1999 08:53:31 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Re: IPC on win32 - additions for 6.5.2 and current\n trees"
},
{
"msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> > -----Original Message-----\n> > From: Bruce Momjian [mailto:[email protected]]\n> > Sent: Tuesday, September 28, 1999 5:49 AM\n> > To: yutaka tanida\n> > Cc: [email protected]; [email protected]\n> > Subject: Re: [HACKERS] Re: IPC on win32 - additions for 6.5.2 and\n> > current trees\n> > \n> > \n> > NT folks, I assume this patch is no longer needed.\n> >\n> \n> No,his patch is needed.\n> I have already committed a new patch which combines Yutaka's and\n> my patch to current tree under src/win32.\n\nOK, patch removed from README, and user pointed to new file in win32\ndirectory.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 27 Sep 1999 21:42:17 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: IPC on win32 - additions for 6.5.2 and current\n trees"
}
] |
[
{
"msg_contents": "Just a month ago, we posted a bug-report on the BUGS mailing list\nconcerning the optimizer plan enumeration in the 6.5.0 release.\nWe haven't seen any comment about it and the problem is still unfixed in\nthe last release, so we wish to have a feedback from someone of the\ndevelopers.\nWe also posted a patch in the PATCHES mailing list.\nThe details of the bug can be found in the posted report still available\nin the archives of the mailing list, here we just briefly redescribe the\nproblem:\nthe problem is in the pruning algorithm (functions add_pathlist,\nbetter_path in optimizer/util/pathnode.c): it appens sometimes (see the\nreport) that in the path list of a RelOptInfo are kept more than one\npath with the same order insted of only the best one. This is not\ndangerous for the correctness of the algorithm, but it badly affects the\nperformance since the growth (exponential in the join number) in the\nenumeration space.\nBest regards\n\nRoberto Cornacchia ([email protected])\nAndrea Ghidini ([email protected])\n",
"msg_date": "Tue, 31 Aug 1999 14:05:15 +0200",
"msg_from": "Roberto Cornacchia <[email protected]>",
"msg_from_op": true,
"msg_subject": "optimizer pruning problem"
},
{
"msg_contents": "Roberto Cornacchia <[email protected]> writes:\n> Just a month ago, we posted a bug-report on the BUGS mailing list\n> concerning the optimizer plan enumeration in the 6.5.0 release.\n> We haven't seen any comment about it and the problem is still unfixed in\n> the last release, so we wish to have a feedback from someone of the\n> developers.\n\nI believe I have taken care of this problem as part of the optimizer\noverhaul I am doing for 6.6. I was not planning to back-patch any of\nthis work for 6.5.2, however --- too many changes, not yet enough\ntesting. If you pull down a current snapshot you should be able to\nsee what I have done with add_pathlist.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 31 Aug 1999 09:36:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] optimizer pruning problem "
}
] |
[
{
"msg_contents": "Leon took it out with a patch that he sent in about ten days ago. I did\nsome (very) basic testing, and it seemed to remove the problem of limiting\nthe token size, which is what I was after.\n\nMikeA\n\n>> -----Original Message-----\n>> From: Tom Lane [mailto:[email protected]]\n>> Sent: Tuesday, August 31, 1999 3:58 PM\n>> To: Thomas Lockhart\n>> Cc: Brook Milligan; [email protected]; [email protected];\n>> [email protected]\n>> Subject: Re: [HACKERS] Postgres' lexer \n>> \n>> \n>> Thomas Lockhart <[email protected]> writes:\n>> > I added the <xm> exclusive state to accomodate the possibility of a\n>> > unary minus. The change was provoked by Vadim's addition of CREATE\n>> > SEQUENCE, which should allow negative numbers for some \n>> arguments. But\n>> > this just uncovered the tip of the general problem...\n>> \n>> It seems awfully hard and dangerous to try to identify unary minus in\n>> the lexer. The grammar at least has enough knowledge to \n>> recognize that\n>> a minus *is* unary and not binary. Looking into gram.y, I \n>> find that the\n>> CREATE SEQUENCE productions handle collapsing unary minus all by\n>> themselves! So in that particular case, there is still no \n>> need for the\n>> lexer to do it. AFAICT in a quick look through gram.y, there are no\n>> places where unary minus is recognized that gram.y won't try \n>> to collapse\n>> it.\n>> \n>> In short, I still think that the whole mess ought to come out of the\n>> lexer...\n>> \n>> \t\t\tregards, tom lane\n>> \n",
"msg_date": "Tue, 31 Aug 1999 16:04:06 +0200",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Postgres' lexer "
},
{
"msg_contents": "\"Ansley, Michael\" <[email protected]> writes:\n> Leon took it out with a patch that he sent in about ten days ago. I did\n> some (very) basic testing, and it seemed to remove the problem of limiting\n> the token size, which is what I was after.\n\nHi Mike,\n I committed most of your long-query changes last night, along with\nsome work of my own, but ran out of steam before getting to psql.c.\nAlso I did not touch gram.y and scan.l because I was unsure that I\nhad the latest version of what you and Leon had done. Could you send\nme the latest and greatest?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 31 Aug 1999 10:14:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Postgres' lexer "
}
] |
[
{
"msg_contents": "I'll send it to you first thing in the morning. I'm afraid I'm not\nconnected at home. I've got a good scan.l, although I haven't touched\ngram.y.\n\n>> -----Original Message-----\n>> From: Tom Lane [mailto:[email protected]]\n>> Sent: Tuesday, August 31, 1999 4:15 PM\n>> To: Ansley, Michael\n>> Cc: Thomas Lockhart; Brook Milligan; [email protected];\n>> [email protected]\n>> Subject: Re: [HACKERS] Postgres' lexer \n>> \n>> \n>> \"Ansley, Michael\" <[email protected]> writes:\n>> > Leon took it out with a patch that he sent in about ten \n>> days ago. I did\n>> > some (very) basic testing, and it seemed to remove the \n>> problem of limiting\n>> > the token size, which is what I was after.\n>> \n>> Hi Mike,\n>> I committed most of your long-query changes last night, along with\n>> some work of my own, but ran out of steam before getting to psql.c.\n>> Also I did not touch gram.y and scan.l because I was unsure that I\n>> had the latest version of what you and Leon had done. Could you send\n>> me the latest and greatest?\n>> \n>> \t\t\tregards, tom lane\n>> \n",
"msg_date": "Tue, 31 Aug 1999 16:17:54 +0200",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Postgres' lexer "
}
] |
[
{
"msg_contents": "Hi, \n\nI just compiled and installed Postgres 6.5.1 on Slackware 4.0 (kernel 2.2.6,\ncpu: PIII@450) whithout changing anything on the code. The regression tests had\nsome strange failures, so i'm sending you the .out and .diff files.\n\nThanks for your efforts on Postgres,\n\nT.\n\n-- \nTheodore=J.=Soldatos=_\\_=\"There=is=always=a=bug=somewhere\",=said==HAL=to=the==\n= [email protected] =_/_==Ultimate=Programmer,=and=turned=off=the=air=supply.=\n= [email protected] =_\\_=\"Everybody=knows=the=war=is=over,====================\n==== Scientific =====_/_==everybody=knows=the=good=guys=lost\"===Leonard=Cohen=\n= Publications Ltd. =_\\_============ http://w4u.eexi.gr/~theodore ============\n==== Finger: [email protected] or @aurora.eexi.gr ====",
"msg_date": "Tue, 31 Aug 1999 17:37:06 +0300",
"msg_from": "\"Theodore J. Soldatos\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres 6.5.1 on Slackware 4.0"
},
{
"msg_contents": "There is a problem with libpq++.so on IRIX 6.5 (MIPSpro 7.2.1, -n32). \nIf you compile and link a simple program like the following:\n\n#include \"libpq++/pgdatabase.h\"\n \nmain(int argc, char **argv)\n{\n PgDatabase *db = new PgDatabase(\"dbname=mydb\");\n db->Exec(\"SELECT * FROM TEST\");\n db->PrintTuples();\n delete db;\n}\n\nand run it, you'll get the following error:\n\n46407:./a.out: rld: Error: unresolvable symbol in\n/5xxRoot/ALG/pgsql/lib/libpq++.so.3.0:\n__node_allocator_lock__Q2_3std45__default_alloc_template__pt__13_XCbL11XCiL10\n46407:./a.out: rld: Error: unresolvable symbol in\n/5xxRoot/ALG/pgsql/lib/libpq++.so.3.0:\nfree_list__Q2_3std45__default_alloc_template__pt__13_XCbL11XCiL10\n46407:./a.out: rld: Fatal Error: this executable has unresolvable symbols\n\nI think this has to do with some quirks of the SGI MIPSpro compiler when\ncreating libraries for C++. For shared library, instead of \"ld -shared\",\n\"CC -shared\" should be used to enable pre-linking (for template\ninstantiation). And for static library, \"CC -ar\" should be used instead\nof \"ar\" (although right now if I use the static library the run-time error\ndoes not occur).\n\nI'm not sure whether using \"CC\" to create libraries for the pure C\nmodules would work (my guess is it should and I'll try it). If it does\nthen it'll be easy to patch the IRIX makefile template. But then the\ndownside is people will be required to have the C++ compiler even if\nthey don't care about libpq++.\n\n--Yu Cao\n\n\n\n\n",
"msg_date": "Fri, 3 Sep 1999 22:41:49 -0700 (PDT)",
"msg_from": "Yu Cao <[email protected]>",
"msg_from_op": false,
"msg_subject": "PostgreSQL 6.5.1: libpq++ libraries on IRIX 6.5"
},
{
"msg_contents": "Yu Cao <[email protected]> writes:\n> I'm not sure whether using \"CC\" to create libraries for the pure C\n> modules would work (my guess is it should and I'll try it). If it does\n> then it'll be easy to patch the IRIX makefile template. But then the\n> downside is people will be required to have the C++ compiler even if\n> they don't care about libpq++.\n\nNot necessarily. Since 6.4 or so, the \"template\" files are not just\nstatic assignments of variable values --- they can actually contain\narbitrary fragments of shell script (see configure.in's code that\nreads them). So you could do something like\n\tif [ -x /usr/bin/CC ]\n\tthen\n\t\tCC= CC\n\telse\n\t\tCC= cc\n\tfi\nin the IRIX template.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 04 Sep 1999 11:18:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.5.1: libpq++ libraries on IRIX 6.5 "
},
{
"msg_contents": "Hi Tom, thanks for the tip. I ended up just adding a few lines to\nMakefile.shlib. Attached is the context diff. The patch has been\ntested on IRIX 6.5.2 with MIPSpro C and C++ compiler version 7.2.1\nusing -n32 ABI.\n\n--Yu Cao\n\nOn Sat, 4 Sep 1999, Tom Lane wrote:\n\n> Yu Cao <[email protected]> writes:\n> > I'm not sure whether using \"CC\" to create libraries for the pure C\n> > modules would work (my guess is it should and I'll try it). If it does\n> > then it'll be easy to patch the IRIX makefile template. But then the\n> > downside is people will be required to have the C++ compiler even if\n> > they don't care about libpq++.\n> \n> Not necessarily. Since 6.4 or so, the \"template\" files are not just\n> static assignments of variable values --- they can actually contain\n> arbitrary fragments of shell script (see configure.in's code that\n> reads them). So you could do something like\n> \tif [ -x /usr/bin/CC ]\n> \tthen\n> \t\tCC= CC\n> \telse\n> \t\tCC= cc\n> \tfi\n> in the IRIX template.\n> \n> \t\t\tregards, tom lane\n> \n> ************\n> \n> \n>",
"msg_date": "Sun, 5 Sep 1999 23:55:56 -0700 (PDT)",
"msg_from": "Yu Cao <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.5.1: libpq++ libraries on IRIX 6.5"
},
{
"msg_contents": "Yu Cao <[email protected]> writes:\n> Hi Tom, thanks for the tip. I ended up just adding a few lines to\n> Makefile.shlib. Attached is the context diff. The patch has been\n> tested on IRIX 6.5.2 with MIPSpro C and C++ compiler version 7.2.1\n> using -n32 ABI.\n\nThat fix bothers me, because it would interfere with someone trying\nto use gcc/g++, wouldn't it? Seems safer to just alter configure's\ndefault...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 06 Sep 1999 11:05:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.5.1: libpq++ libraries on IRIX 6.5 "
},
{
"msg_contents": "That's a good point (about interfering with gcc/g++). But I'm still\na bit hesitant about changing the default AR and LD for all other\nlibraries (although in theory it shouldn't do any harm). And if we\nchanged the default, gcc/g++ users (who happen to have CC installed\non their system) would again have to find a way to override it. So\nattached is another try: I put the mods in interfaces/libpq++/Makefile.in.\nThat file already had some checks on PORTNAME (for windows) and CXX (for\ng++), so it seems adding more checks (for irix5 and CC) doesn't make it\nuglier and also gets the job done with minimal impact on other things.\n\n--Yu Cao\n\nOn Mon, 6 Sep 1999, Tom Lane wrote:\n\n> Yu Cao <[email protected]> writes:\n> > Hi Tom, thanks for the tip. I ended up just adding a few lines to\n> > Makefile.shlib. Attached is the context diff. The patch has been\n> > tested on IRIX 6.5.2 with MIPSpro C and C++ compiler version 7.2.1\n> > using -n32 ABI.\n> \n> That fix bothers me, because it would interfere with someone trying\n> to use gcc/g++, wouldn't it? Seems safer to just alter configure's\n> default...",
"msg_date": "Mon, 6 Sep 1999 16:39:27 -0700 (PDT)",
"msg_from": "Yu Cao <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.5.1: libpq++ libraries on IRIX 6.5 "
},
{
"msg_contents": "Yu Cao <[email protected]> writes:\n> attached is another try: I put the mods in interfaces/libpq++/Makefile.in.\n> That file already had some checks on PORTNAME (for windows) and CXX (for\n> g++), so it seems adding more checks (for irix5 and CC) doesn't make it\n> uglier and also gets the job done with minimal impact on other things.\n\nThat looks good to me; will commit it. Thanks.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 07 Sep 1999 09:18:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL 6.5.1: libpq++ libraries on IRIX 6.5 "
}
] |
[
{
"msg_contents": "Hi,\n\nthere is a bug in my array contrib. The varchar and bpchar function don't\nwork correctly. The following patch (for 6.5.1) fixes the problem.\n\n*** contrib/array/array_iterator.c.orig\tSat Jun 5 21:09:35 1999\n--- contrib/array/array_iterator.c\tTue Aug 31 11:22:44 1999\n***************\n*** 6,14 ****\n * elements of the array and the value and compute a result as\n * the logical OR or AND of the iteration results.\n *\n! * Copyright (c) 1997, Massimo Dal Zotto <[email protected]>\n * ported to postgreSQL 6.3.2,added oid_functions, 18.1.1999,\n * Tobias Gabele <[email protected]>\n */\n \n #include <ctype.h>\n--- 6,17 ----\n * elements of the array and the value and compute a result as\n * the logical OR or AND of the iteration results.\n *\n! * Copyright (C) 1999, Massimo Dal Zotto <[email protected]>\n * ported to postgreSQL 6.3.2,added oid_functions, 18.1.1999,\n * Tobias Gabele <[email protected]>\n+ *\n+ * This software is distributed under the GNU General Public License\n+ * either version 2, or (at your option) any later version.\n */\n \n #include <ctype.h>\n***************\n*** 180,186 ****\n int32\n array_varchareq(ArrayType *array, char *value)\n {\n! \treturn array_iterator((Oid) 20,\t\t/* varchar */\n \t\t\t\t\t\t (Oid) 1070,\t/* varchareq */\n \t\t\t\t\t\t 0,\t\t\t/* logical or */\n \t\t\t\t\t\t array, (Datum) value);\n--- 183,189 ----\n int32\n array_varchareq(ArrayType *array, char *value)\n {\n! \treturn array_iterator((Oid) 1043,\t/* varchar */\n \t\t\t\t\t\t (Oid) 1070,\t/* varchareq */\n \t\t\t\t\t\t 0,\t\t\t/* logical or */\n \t\t\t\t\t\t array, (Datum) value);\n***************\n*** 189,195 ****\n int32\n array_all_varchareq(ArrayType *array, char *value)\n {\n! \treturn array_iterator((Oid) 20,\t\t/* varchar */\n \t\t\t\t\t\t (Oid) 1070,\t/* varchareq */\n \t\t\t\t\t\t 1,\t\t\t/* logical and */\n \t\t\t\t\t\t array, (Datum) value);\n--- 192,198 ----\n int32\n array_all_varchareq(ArrayType *array, char *value)\n {\n! 
\treturn array_iterator((Oid) 1043,\t/* varchar */\n \t\t\t\t\t\t (Oid) 1070,\t/* varchareq */\n \t\t\t\t\t\t 1,\t\t\t/* logical and */\n \t\t\t\t\t\t array, (Datum) value);\n***************\n*** 198,204 ****\n int32\n array_varcharregexeq(ArrayType *array, char *value)\n {\n! \treturn array_iterator((Oid) 20,\t\t/* varchar */\n \t\t\t\t\t\t (Oid) 1254,\t/* textregexeq */\n \t\t\t\t\t\t 0,\t\t\t/* logical or */\n \t\t\t\t\t\t array, (Datum) value);\n--- 201,207 ----\n int32\n array_varcharregexeq(ArrayType *array, char *value)\n {\n! \treturn array_iterator((Oid) 1043,\t/* varchar */\n \t\t\t\t\t\t (Oid) 1254,\t/* textregexeq */\n \t\t\t\t\t\t 0,\t\t\t/* logical or */\n \t\t\t\t\t\t array, (Datum) value);\n***************\n*** 207,213 ****\n int32\n array_all_varcharregexeq(ArrayType *array, char *value)\n {\n! \treturn array_iterator((Oid) 20,\t\t/* varchar */\n \t\t\t\t\t\t (Oid) 1254,\t/* textregexeq */\n \t\t\t\t\t\t 1,\t\t\t/* logical and */\n \t\t\t\t\t\t array, (Datum) value);\n--- 210,216 ----\n int32\n array_all_varcharregexeq(ArrayType *array, char *value)\n {\n! \treturn array_iterator((Oid) 1043,\t/* varchar */\n \t\t\t\t\t\t (Oid) 1254,\t/* textregexeq */\n \t\t\t\t\t\t 1,\t\t\t/* logical and */\n \t\t\t\t\t\t array, (Datum) value);\n***************\n*** 221,227 ****\n int32\n array_bpchareq(ArrayType *array, char *value)\n {\n! \treturn array_iterator((Oid) 20,\t\t/* bpchar */\n \t\t\t\t\t\t (Oid) 1048,\t/* bpchareq */\n \t\t\t\t\t\t 0,\t\t\t/* logical or */\n \t\t\t\t\t\t array, (Datum) value);\n--- 224,230 ----\n int32\n array_bpchareq(ArrayType *array, char *value)\n {\n! \treturn array_iterator((Oid) 1042,\t/* bpchar */\n \t\t\t\t\t\t (Oid) 1048,\t/* bpchareq */\n \t\t\t\t\t\t 0,\t\t\t/* logical or */\n \t\t\t\t\t\t array, (Datum) value);\n***************\n*** 230,236 ****\n int32\n array_all_bpchareq(ArrayType *array, char *value)\n {\n! 
\treturn array_iterator((Oid) 20,\t\t/* bpchar */\n \t\t\t\t\t\t (Oid) 1048,\t/* bpchareq */\n \t\t\t\t\t\t 1,\t\t\t/* logical and */\n \t\t\t\t\t\t array, (Datum) value);\n--- 233,239 ----\n int32\n array_all_bpchareq(ArrayType *array, char *value)\n {\n! \treturn array_iterator((Oid) 1042,\t/* bpchar */\n \t\t\t\t\t\t (Oid) 1048,\t/* bpchareq */\n \t\t\t\t\t\t 1,\t\t\t/* logical and */\n \t\t\t\t\t\t array, (Datum) value);\n***************\n*** 239,245 ****\n int32\n array_bpcharregexeq(ArrayType *array, char *value)\n {\n! \treturn array_iterator((Oid) 20,\t\t/* bpchar */\n \t\t\t\t\t\t (Oid) 1254,\t/* textregexeq */\n \t\t\t\t\t\t 0,\t\t\t/* logical or */\n \t\t\t\t\t\t array, (Datum) value);\n--- 242,248 ----\n int32\n array_bpcharregexeq(ArrayType *array, char *value)\n {\n! \treturn array_iterator((Oid) 1042,\t/* bpchar */\n \t\t\t\t\t\t (Oid) 1254,\t/* textregexeq */\n \t\t\t\t\t\t 0,\t\t\t/* logical or */\n \t\t\t\t\t\t array, (Datum) value);\n***************\n*** 248,254 ****\n int32\n array_all_bpcharregexeq(ArrayType *array, char *value)\n {\n! \treturn array_iterator((Oid) 20,\t\t/* bpchar */\n \t\t\t\t\t\t (Oid) 1254,\t/* textregexeq */\n \t\t\t\t\t\t 1,\t\t\t/* logical and */\n \t\t\t\t\t\t array, (Datum) value);\n--- 251,257 ----\n int32\n array_all_bpcharregexeq(ArrayType *array, char *value)\n {\n! \treturn array_iterator((Oid) 1042,\t/* bpchar */\n \t\t\t\t\t\t (Oid) 1254,\t/* textregexeq */\n \t\t\t\t\t\t 1,\t\t\t/* logical and */\n \t\t\t\t\t\t array, (Datum) value);\n\n-- \nMassimo Dal Zotto\n\n+----------------------------------------------------------------------+\n| Massimo Dal Zotto email: [email protected] |\n| Via Marconi, 141 phone: ++39-0461534251 |\n| 38057 Pergine Valsugana (TN) www: http://www.cs.unitn.it/~dz/ |\n| Italy pgp: finger [email protected] |\n+----------------------------------------------------------------------+\n",
"msg_date": "Tue, 31 Aug 1999 17:01:43 +0200 (MEST)",
"msg_from": "Massimo Dal Zotto <[email protected]>",
"msg_from_op": true,
"msg_subject": "bug in array contrib"
},
{
"msg_contents": "This shows as already applied.\n\n\n> Hi,\n> \n> there is a bug in my array contrib. The varchar and bpchar function don't\n> work correctly. The following patch (for 6.5.1) fixes the problem.\n> \n> *** contrib/array/array_iterator.c.orig\tSat Jun 5 21:09:35 1999\n> --- contrib/array/array_iterator.c\tTue Aug 31 11:22:44 1999\n> ***************\n> *** 6,14 ****\n> * elements of the array and the value and compute a result as\n> * the logical OR or AND of the iteration results.\n> *\n> ! * Copyright (c) 1997, Massimo Dal Zotto <[email protected]>\n> * ported to postgreSQL 6.3.2,added oid_functions, 18.1.1999,\n> * Tobias Gabele <[email protected]>\n> */\n> \n> #include <ctype.h>\n> --- 6,17 ----\n> * elements of the array and the value and compute a result as\n> * the logical OR or AND of the iteration results.\n> *\n> ! * Copyright (C) 1999, Massimo Dal Zotto <[email protected]>\n> * ported to postgreSQL 6.3.2,added oid_functions, 18.1.1999,\n> * Tobias Gabele <[email protected]>\n> + *\n> + * This software is distributed under the GNU General Public License\n> + * either version 2, or (at your option) any later version.\n> */\n> \n> #include <ctype.h>\n> ***************\n> *** 180,186 ****\n> int32\n> array_varchareq(ArrayType *array, char *value)\n> {\n> ! \treturn array_iterator((Oid) 20,\t\t/* varchar */\n> \t\t\t\t\t\t (Oid) 1070,\t/* varchareq */\n> \t\t\t\t\t\t 0,\t\t\t/* logical or */\n> \t\t\t\t\t\t array, (Datum) value);\n> --- 183,189 ----\n> int32\n> array_varchareq(ArrayType *array, char *value)\n> {\n> ! \treturn array_iterator((Oid) 1043,\t/* varchar */\n> \t\t\t\t\t\t (Oid) 1070,\t/* varchareq */\n> \t\t\t\t\t\t 0,\t\t\t/* logical or */\n> \t\t\t\t\t\t array, (Datum) value);\n> ***************\n> *** 189,195 ****\n> int32\n> array_all_varchareq(ArrayType *array, char *value)\n> {\n> ! 
\treturn array_iterator((Oid) 20,\t\t/* varchar */\n> \t\t\t\t\t\t (Oid) 1070,\t/* varchareq */\n> \t\t\t\t\t\t 1,\t\t\t/* logical and */\n> \t\t\t\t\t\t array, (Datum) value);\n> --- 192,198 ----\n> int32\n> array_all_varchareq(ArrayType *array, char *value)\n> {\n> ! \treturn array_iterator((Oid) 1043,\t/* varchar */\n> \t\t\t\t\t\t (Oid) 1070,\t/* varchareq */\n> \t\t\t\t\t\t 1,\t\t\t/* logical and */\n> \t\t\t\t\t\t array, (Datum) value);\n> ***************\n> *** 198,204 ****\n> int32\n> array_varcharregexeq(ArrayType *array, char *value)\n> {\n> ! \treturn array_iterator((Oid) 20,\t\t/* varchar */\n> \t\t\t\t\t\t (Oid) 1254,\t/* textregexeq */\n> \t\t\t\t\t\t 0,\t\t\t/* logical or */\n> \t\t\t\t\t\t array, (Datum) value);\n> --- 201,207 ----\n> int32\n> array_varcharregexeq(ArrayType *array, char *value)\n> {\n> ! \treturn array_iterator((Oid) 1043,\t/* varchar */\n> \t\t\t\t\t\t (Oid) 1254,\t/* textregexeq */\n> \t\t\t\t\t\t 0,\t\t\t/* logical or */\n> \t\t\t\t\t\t array, (Datum) value);\n> ***************\n> *** 207,213 ****\n> int32\n> array_all_varcharregexeq(ArrayType *array, char *value)\n> {\n> ! \treturn array_iterator((Oid) 20,\t\t/* varchar */\n> \t\t\t\t\t\t (Oid) 1254,\t/* textregexeq */\n> \t\t\t\t\t\t 1,\t\t\t/* logical and */\n> \t\t\t\t\t\t array, (Datum) value);\n> --- 210,216 ----\n> int32\n> array_all_varcharregexeq(ArrayType *array, char *value)\n> {\n> ! \treturn array_iterator((Oid) 1043,\t/* varchar */\n> \t\t\t\t\t\t (Oid) 1254,\t/* textregexeq */\n> \t\t\t\t\t\t 1,\t\t\t/* logical and */\n> \t\t\t\t\t\t array, (Datum) value);\n> ***************\n> *** 221,227 ****\n> int32\n> array_bpchareq(ArrayType *array, char *value)\n> {\n> ! \treturn array_iterator((Oid) 20,\t\t/* bpchar */\n> \t\t\t\t\t\t (Oid) 1048,\t/* bpchareq */\n> \t\t\t\t\t\t 0,\t\t\t/* logical or */\n> \t\t\t\t\t\t array, (Datum) value);\n> --- 224,230 ----\n> int32\n> array_bpchareq(ArrayType *array, char *value)\n> {\n> ! 
\treturn array_iterator((Oid) 1042,\t/* bpchar */\n> \t\t\t\t\t\t (Oid) 1048,\t/* bpchareq */\n> \t\t\t\t\t\t 0,\t\t\t/* logical or */\n> \t\t\t\t\t\t array, (Datum) value);\n> ***************\n> *** 230,236 ****\n> int32\n> array_all_bpchareq(ArrayType *array, char *value)\n> {\n> ! \treturn array_iterator((Oid) 20,\t\t/* bpchar */\n> \t\t\t\t\t\t (Oid) 1048,\t/* bpchareq */\n> \t\t\t\t\t\t 1,\t\t\t/* logical and */\n> \t\t\t\t\t\t array, (Datum) value);\n> --- 233,239 ----\n> int32\n> array_all_bpchareq(ArrayType *array, char *value)\n> {\n> ! \treturn array_iterator((Oid) 1042,\t/* bpchar */\n> \t\t\t\t\t\t (Oid) 1048,\t/* bpchareq */\n> \t\t\t\t\t\t 1,\t\t\t/* logical and */\n> \t\t\t\t\t\t array, (Datum) value);\n> ***************\n> *** 239,245 ****\n> int32\n> array_bpcharregexeq(ArrayType *array, char *value)\n> {\n> ! \treturn array_iterator((Oid) 20,\t\t/* bpchar */\n> \t\t\t\t\t\t (Oid) 1254,\t/* textregexeq */\n> \t\t\t\t\t\t 0,\t\t\t/* logical or */\n> \t\t\t\t\t\t array, (Datum) value);\n> --- 242,248 ----\n> int32\n> array_bpcharregexeq(ArrayType *array, char *value)\n> {\n> ! \treturn array_iterator((Oid) 1042,\t/* bpchar */\n> \t\t\t\t\t\t (Oid) 1254,\t/* textregexeq */\n> \t\t\t\t\t\t 0,\t\t\t/* logical or */\n> \t\t\t\t\t\t array, (Datum) value);\n> ***************\n> *** 248,254 ****\n> int32\n> array_all_bpcharregexeq(ArrayType *array, char *value)\n> {\n> ! \treturn array_iterator((Oid) 20,\t\t/* bpchar */\n> \t\t\t\t\t\t (Oid) 1254,\t/* textregexeq */\n> \t\t\t\t\t\t 1,\t\t\t/* logical and */\n> \t\t\t\t\t\t array, (Datum) value);\n> --- 251,257 ----\n> int32\n> array_all_bpcharregexeq(ArrayType *array, char *value)\n> {\n> ! 
\treturn array_iterator((Oid) 1042,\t/* bpchar */\n> \t\t\t\t\t\t (Oid) 1254,\t/* textregexeq */\n> \t\t\t\t\t\t 1,\t\t\t/* logical and */\n> \t\t\t\t\t\t array, (Datum) value);\n> \n> -- \n> Massimo Dal Zotto\n> \n> +----------------------------------------------------------------------+\n> | Massimo Dal Zotto email: [email protected] |\n> | Via Marconi, 141 phone: ++39-0461534251 |\n> | 38057 Pergine Valsugana (TN) www: http://www.cs.unitn.it/~dz/ |\n> | Italy pgp: finger [email protected] |\n> +----------------------------------------------------------------------+\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 27 Sep 1999 16:39:40 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bug in array contrib"
}
] |
[
{
"msg_contents": "> I think we ought to hold up 6.5.2 long enough to cram this patch in, but\n> I'm hesitant to stick it in the stable branch without some more testing.\n> Cyrus, can you try it and see if it fixes your problem?\n\nOk, I can't actually try the patch for another week or so, since my\ndevelopment machine has temporarily become a production machine, but thanks to\nHiroshi Inoue's patch I was able to figure out how to demonstrate the problem\nin an easily reproducable manner that anyone can test.\n\nAs you can see, a connection open through a vacuum does end up duplicating\nits open file descriptors. Here's a psql session demonstrating the problem:\n\ncr@photox% psql -d template1\nWelcome to the POSTGRESQL interactive sql monitor:\n Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n[PostgreSQL 6.5.1 on i386-unknown-freebsd3.2, compiled by cc ]\n\n type \\? for help on slash commands\n type \\q to quit\n type \\g or terminate with semicolon to execute query\n You are currently connected to the database: template1\n\ntemplate1=> select * from pg_user;\nusename|usesysid|usecreatedb|usetrace|usesuper|usecatupd|passwd |valuntil \n-------+--------+-----------+--------+--------+---------+--------+----------------------------\npgsql | 70|t |t |t |t |********|Sat Jan 31 01:00:00 2037 EST\ncr | 71|t |t |t |t |********| \npaxis | 72|f |t |t |t |********| \n(3 rows)\n\ntemplate1=> \nSuspended\n\ncr@photox% ps ax|grep postgres\n 425 ?? Ss 2:37.25 /usr/local/pgsql/bin/postmaster -i -S -o -F (postgres\n90608 ?? 
S 0:00.06 /usr/local/pgsql/bin/postgres cr localhost template1 \n\ncr@photox% fstat -p 90608\nUSER CMD PID FD MOUNT INUM MODE SZ|DV R/W\npgsql postgres 90608 root / 2 drwxr-xr-x 512 r\npgsql postgres 90608 wd /usr 366233 drwx------ 1536 r\npgsql postgres 90608 text /usr 334856 -r-xr-xr-x 1050936 r\npgsql postgres 90608 0 / 967 crw-rw-rw- null rw\npgsql postgres 90608 1 / 967 crw-rw-rw- null rw\npgsql postgres 90608 2 / 967 crw-rw-rw- null rw\npgsql postgres 90608 3 /usr 365266 -rw------- 1712 r\npgsql postgres 90608 4 /usr 366283 -rw------- 262144 rw\npgsql postgres 90608 5* local stream ca3f3b80 <-> ca3f3cc0\npgsql postgres 90608 6 /usr 366236 -rw------- 8192 rw\npgsql postgres 90608 7 /usr 366239 -rw------- 8192 rw\npgsql postgres 90608 8 /usr 366269 -rw------- 16384 rw\npgsql postgres 90608 9 /usr 366238 -rw------- 49152 rw\npgsql postgres 90608 10 /usr 366259 -rw------- 32768 rw\npgsql postgres 90608 11 /usr 366281 -rw------- 8192 rw\npgsql postgres 90608 12 /usr 366235 -rw------- 172032 rw\npgsql postgres 90608 13 /usr 366246 -rw------- 8192 rw\npgsql postgres 90608 14 /usr 366242 -rw------- 8192 rw\npgsql postgres 90608 15 /usr 366249 -rw------- 8192 rw\npgsql postgres 90608 16 /usr 366247 -rw------- 16384 rw\npgsql postgres 90608 17 /usr 366244 -rw------- 65536 rw\npgsql postgres 90608 18 /usr 366262 -rw------- 139264 rw\npgsql postgres 90608 19 /usr 366237 -rw------- 16384 rw\npgsql postgres 90608 20 /usr 366265 -rw------- 16384 rw\npgsql postgres 90608 21 /usr 366261 -rw------- 40960 rw\npgsql postgres 90608 22 /usr 366254 -rw------- 24576 rw\npgsql postgres 90608 23 /usr 366292 -rw------- 0 rw\npgsql postgres 90608 24 /usr 366258 -rw------- 65536 rw\npgsql postgres 90608 25 /usr 366267 -rw------- 16384 rw\n\ncr@photox% psql -d template1 -c vacuum\nVACUUM\n\ncr@photox% !fstat\nfstat -p 90608\nUSER CMD PID FD MOUNT INUM MODE SZ|DV R/W\npgsql postgres 90608 root / 2 drwxr-xr-x 512 r\npgsql postgres 90608 wd /usr 366233 drwx------ 1536 r\npgsql postgres 90608 
text /usr 334856 -r-xr-xr-x 1050936 r\npgsql postgres 90608 0 / 967 crw-rw-rw- null rw\npgsql postgres 90608 1 / 967 crw-rw-rw- null rw\npgsql postgres 90608 2 / 967 crw-rw-rw- null rw\npgsql postgres 90608 3 /usr 365266 -rw------- 1712 r\npgsql postgres 90608 4 /usr 366283 -rw------- 262144 rw\npgsql postgres 90608 5* local stream ca3f3b80 <-> ca3f3cc0\npgsql postgres 90608 6 /usr 366236 -rw------- 8192 rw\npgsql postgres 90608 7 /usr 366239 -rw------- 8192 rw\npgsql postgres 90608 8 /usr 366269 -rw------- 16384 rw\npgsql postgres 90608 9 /usr 366238 -rw------- 49152 rw\npgsql postgres 90608 10 /usr 366259 -rw------- 32768 rw\npgsql postgres 90608 11 /usr 366281 -rw------- 8192 rw\npgsql postgres 90608 12 /usr 366235 -rw------- 172032 rw\npgsql postgres 90608 13 /usr 366246 -rw------- 8192 rw\npgsql postgres 90608 14 /usr 366242 -rw------- 8192 rw\npgsql postgres 90608 15 /usr 366249 -rw------- 8192 rw\npgsql postgres 90608 16 /usr 366247 -rw------- 16384 rw\npgsql postgres 90608 17 /usr 366244 -rw------- 65536 rw\npgsql postgres 90608 18 /usr 366262 -rw------- 139264 rw\npgsql postgres 90608 19 /usr 366237 -rw------- 16384 rw\npgsql postgres 90608 20 /usr 366265 -rw------- 16384 rw\npgsql postgres 90608 21 /usr 366261 -rw------- 40960 rw\npgsql postgres 90608 22 /usr 366254 -rw------- 24576 rw\npgsql postgres 90608 23 /usr 366292 -rw------- 0 rw\npgsql postgres 90608 24 /usr 366258 -rw------- 65536 rw\npgsql postgres 90608 25 /usr 366267 -rw------- 16384 rw\n\ncr@photox% fg\npsql -d template1\nselect * from pg_user;\nusename|usesysid|usecreatedb|usetrace|usesuper|usecatupd|passwd |valuntil \n-------+--------+-----------+--------+--------+---------+--------+----------------------------\npgsql | 70|t |t |t |t |********|Sat Jan 31 01:00:00 2037 EST\ncr | 71|t |t |t |t |********| \npaxis | 72|f |t |t |t |********| \n(3 rows)\n\ntemplate1=> \nSuspended\n\ncr@photox% !fstat\nfstat -p 90608\nUSER CMD PID FD MOUNT INUM MODE SZ|DV R/W\npgsql postgres 90608 root / 2 
drwxr-xr-x 512 r\npgsql postgres 90608 wd /usr 366233 drwx------ 1536 r\npgsql postgres 90608 text /usr 334856 -r-xr-xr-x 1050936 r\npgsql postgres 90608 0 / 967 crw-rw-rw- null rw\npgsql postgres 90608 1 / 967 crw-rw-rw- null rw\npgsql postgres 90608 2 / 967 crw-rw-rw- null rw\npgsql postgres 90608 3 /usr 365266 -rw------- 1712 r\npgsql postgres 90608 4 /usr 366283 -rw------- 262144 rw\npgsql postgres 90608 5* local stream ca3f3b80 <-> ca3f3cc0\npgsql postgres 90608 6 /usr 366236 -rw------- 8192 rw\npgsql postgres 90608 7 /usr 366239 -rw------- 8192 rw\npgsql postgres 90608 8 /usr 366269 -rw------- 16384 rw\npgsql postgres 90608 9 /usr 366238 -rw------- 49152 rw\npgsql postgres 90608 10 /usr 366259 -rw------- 32768 rw\npgsql postgres 90608 11 /usr 366281 -rw------- 8192 rw\npgsql postgres 90608 12 /usr 366235 -rw------- 172032 rw\npgsql postgres 90608 13 /usr 366246 -rw------- 8192 rw\npgsql postgres 90608 14 /usr 366242 -rw------- 8192 rw\npgsql postgres 90608 15 /usr 366249 -rw------- 8192 rw\npgsql postgres 90608 16 /usr 366247 -rw------- 16384 rw\npgsql postgres 90608 17 /usr 366244 -rw------- 65536 rw\npgsql postgres 90608 18 /usr 366262 -rw------- 139264 rw\npgsql postgres 90608 19 /usr 366237 -rw------- 16384 rw\npgsql postgres 90608 20 /usr 366265 -rw------- 16384 rw\npgsql postgres 90608 21 /usr 366261 -rw------- 40960 rw\npgsql postgres 90608 22 /usr 366254 -rw------- 24576 rw\npgsql postgres 90608 23 /usr 366292 -rw------- 0 rw\npgsql postgres 90608 24 /usr 366258 -rw------- 65536 rw\npgsql postgres 90608 25 /usr 366267 -rw------- 16384 rw\npgsql postgres 90608 26 /usr 366254 -rw------- 24576 rw\npgsql postgres 90608 27 /usr 366246 -rw------- 8192 rw\npgsql postgres 90608 28 /usr 366242 -rw------- 8192 rw\npgsql postgres 90608 29 /usr 366249 -rw------- 8192 rw\npgsql postgres 90608 30 /usr 366247 -rw------- 16384 rw\npgsql postgres 90608 31 /usr 366244 -rw------- 65536 rw\npgsql postgres 90608 32 /usr 366265 -rw------- 16384 rw\npgsql postgres 90608 33 
/usr 366292 -rw------- 0 rw\npgsql postgres 90608 34 /usr 366258 -rw------- 65536 rw\npgsql postgres 90608 35 /usr 366281 -rw------- 8192 rw\n\ncr@photox% ls -iR /usr/local/pgsql/data/ | egrep '292|258|281'\n366281 pg_shadow\n366258 pg_attribute_relid_attnam_index\n366292 pg_user\n",
"msg_date": "Tue, 31 Aug 1999 11:12:54 -0400 (EDT)",
"msg_from": "Cyrus Rahman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] File descriptor leakage? "
},
{
"msg_contents": "Cyrus Rahman <[email protected]> writes:\n> As you can see, a connection open through a vacuum does end up duplicating\n> its open file descriptors.\n\nIndeed, phrased in that fashion it's easy to duplicate the problem.\n\nInterestingly, this isn't a big problem on platforms where there is\na relatively low limit on number of open files per process. A backend\nwill run its open file count up to the limit and then stay there\n(wasting a few more virtual-file-descriptor array slots per vacuum\ncycle, but this is such a small memory leak you'd likely never notice).\nBut on systems that let a process have thousands of kernel file\ndescriptors, there will be no recycling of kernel descriptors as the\nnumber of virtual descriptors increases.\n\nWhat's the consensus, hackers? Do we risk sticking Hiroshi's patch into\n6.5.2, or not? It should definitely go into current, but I'm worried\nabout putting it into the stable branch right before a release...\nVadim, does it look right to you?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 31 Aug 1999 11:49:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] File descriptor leakage? "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Interestingly, this isn't a big problem on platforms where there is\n ^^^^^^^^^^^^^^^^^^^^^^^^\n> a relatively low limit on number of open files per process. A backend\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> will run its open file count up to the limit and then stay there\n> (wasting a few more virtual-file-descriptor array slots per vacuum\n> cycle, but this is such a small memory leak you'd likely never notice).\n> But on systems that let a process have thousands of kernel file\n> descriptors, there will be no recycling of kernel descriptors as the\n> number of virtual descriptors increases.\n> \n> What's the consensus, hackers? Do we risk sticking Hiroshi's patch into\n> 6.5.2, or not? It should definitely go into current, but I'm worried\n> about putting it into the stable branch right before a release...\n> Vadim, does it look right to you?\n\nSorry, I have no time to look in it. But there is another solution:\n\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Vadim Mikheev\n> Sent: Monday, June 07, 1999 7:49 PM\n> To: Hiroshi Inoue\n> Cc: The Hermit Hacker; [email protected]\n> Subject: Re: [HACKERS] postgresql-v6.5beta2.tar.gz ...\n>\n\n[snip] \n\n> 2. fd.c:pg_nofile()->sysconf(_SC_OPEN_MAX) returns in FreeBSD \n> near total number of files that can be opened in system\n> (by _all_ users/procs). With total number of opened files\n> ~ 2000 I can run your test with 10-20 simultaneous\n> xactions for very short time, -:)\n> \n> Should we limit fd.c:no_files to ~ 256?\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> This is port-specific, of course...\n\nNo risk at all...\n\nVadim\n",
"msg_date": "Wed, 01 Sep 1999 00:18:27 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] File descriptor leakage?"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On Behalf Of Vadim\n> Mikheev\n> Sent: Wednesday, September 01, 1999 1:18 AM\n> To: Tom Lane\n> Cc: Cyrus Rahman; [email protected]; [email protected]\n> Subject: Re: [HACKERS] File descriptor leakage?\n>\n> \n> Tom Lane wrote:\n> > \n> > Interestingly, this isn't a big problem on platforms where there is\n> ^^^^^^^^^^^^^^^^^^^^^^^^\n> > a relatively low limit on number of open files per process. A backend\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> > will run its open file count up to the limit and then stay there\n\nIt's not a small problem on platforms such as cygwin, OS2 where we\ncouldn't unlink open files. We have to close useless file descriptors\nASAP there.\n\n6.5.2-release should be stable as possible.\nSo I don't object to the riskless way as Vadim mentioned. \n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Wed, 1 Sep 1999 10:36:42 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] File descriptor leakage?"
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n>> Tom Lane wrote:\n>>>> Interestingly, this isn't a big problem on platforms where there is\n>>>> a relatively low limit on number of open files per process.\n\n> It's not a small problem on platforms such as cygwin, OS2 where we\n> couldn't unlink open files.\n\nAh, right, good ol' microsoft strikes again...\n\n> We have to close useless file descriptors ASAP there.\n> 6.5.2-release should be stable as possible.\n> So I don't object to the riskless way as Vadim mentioned. \n\nWell, Vadim's \"riskless solution\" does NOT solve the problem you mention\nabove, AFAICT. Reducing the number of kernel file descriptors won't\nmagically cause forgotten descriptors for a table you want to delete\nto not be there --- it just shortens the interval where you'll have a\nproblem, by shortening the interval before the descriptors get recycled.\nIf you reduce the number of descriptors enough to make the problem\nunlikely to occur, you'll be taking a big performance hit.\n\nSo we need a proper fix to ensure the relation code doesn't forget about\nopen descriptors.\n\nI will try to take a look at Hiroshi's patch this evening, and will\ncommit it to both branches if I can't find anything wrong with it...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 01 Sep 1999 09:41:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] File descriptor leakage? "
}
] |
[
{
"msg_contents": "Anyone run into this error before:\n\nERROR: Unable to locate type oid 718 in catalog\n\nThis occurred when I tried to 'vacuum verbose analyze' my database. The\nlast time it occurred, I had to re-build the database to get rid of the\nerror message. Perhaps the pg_catalog is getting corrupted somehow? I'm\nnot quite sure what it means. The vacuum doesn't finish but rather craps\nout after the error. There is no core being generated.\n\nI'm using Postgres 6.5.1 on RH Linux 6.0 (i686).\n\nThanks.\n-Tony\n\n",
"msg_date": "Tue, 31 Aug 1999 10:01:38 -0700",
"msg_from": "\"G. Anthony Reina\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "ERROR: Unable to locate type oid 718 in catalog"
},
{
"msg_contents": "\"G. Anthony Reina\" <[email protected]> writes:\n> Anyone run into this error before:\n> ERROR: Unable to locate type oid 718 in catalog\n> This occurred when I tried to 'vacuum verbose analyze' my database.\n\nA quick glimpse says that the only occurrences of that error text are\nin parse_type.c, which is not code that I'd think would get called from\nvacuum. Odd.\n\nIf you do \"select * from pg_type where oid = 718;\" you should get\n\ntypname|typowner|typlen|typprtlen|typbyval|typtype|typisdefined|typdelim|typrelid|typelem|typinput |typoutput |typreceive|typsend |typalign|typdefault\n-------+--------+------+---------+--------+-------+------------+--------+--------+-------+---------+----------+----------+----------+--------+----------\ncircle | 256| 24| 47|f |b |t |, | 0| 0|circle_in|circle_out|circle_in |circle_out|d |\n(1 row)\n\nIf you don't then indeed pg_type is corrupted.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 31 Aug 1999 14:08:11 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ERROR: Unable to locate type oid 718 in catalog "
},
{
"msg_contents": "Tom Lane wrote:\n\n> A quick glimpse says that the only occurrences of that error text are\n> in parse_type.c, which is not code that I'd think would get called from\n> vacuum. Odd.\n>\n> If you do \"select * from pg_type where oid = 718;\" you should get\n>\n> typname|typowner|typlen|typprtlen|typbyval|typtype|typisdefined|typdelim|typrelid|typelem|typinput |typoutput |typreceive|typsend |typalign|typdefault\n> -------+--------+------+---------+--------+-------+------------+--------+--------+-------+---------+----------+----------+----------+--------+----------\n> circle | 256| 24| 47|f |b |t |, | 0| 0|circle_in|circle_out|circle_in |circle_out|d |\n> (1 row)\n>\n> If you don't then indeed pg_type is corrupted.\n>\n> regards, tom lane\n\nOkay, I found out why I am getting this error. My partner is building a table which he is calling 'circle'. Of course, circle is a pg_type in the\nPostgreSQL. So he DROP TYPE'd circle from the database (we don't need that type anyway). For some reason, the database seems to not mind this until I do\nthe vacuum analyze.\n\nAny suggestions on a workaround? We'd really prefer to use 'circle' as a tablename and don't need it as a pg_type.\n-Tony\n\n\n",
"msg_date": "Tue, 31 Aug 1999 12:10:52 -0700",
"msg_from": "\"G. Anthony Reina\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] ERROR: Unable to locate type oid 718 in catalog"
},
{
"msg_contents": "\"G. Anthony Reina\" <[email protected]> writes:\n> Okay, I found out why I am getting this error. My partner is building\n> a table which he is calling 'circle'. Of course, circle is a pg_type\n> in the PostgreSQL. So he DROP TYPE'd circle from the database (we\n> don't need that type anyway). For some reason, the database seems to\n> not mind this until I do the vacuum analyze.\n\nIs it possible that you've got tables lying around that have ordinary-\ncircle-type fields in them? Vacuum analyze would notice the lack of\ntype data, but I'm not sure a plain vacuum would.\n\nIn any case, it'd be wise to flush everything in pg_operator and pg_proc\nthat has circle as an argument or result type. (Does DROP TYPE do that\nfor you? I bet not...) There might be other system tables that have\nreferences to circle, too.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 31 Aug 1999 15:51:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ERROR: Unable to locate type oid 718 in catalog "
},
{
"msg_contents": "Tom Lane wrote:\n\n> Is it possible that you've got tables lying around that have ordinary-\n> circle-type fields in them?\n\nNo. No tables at all use the type circle.\n\n> In any case, it'd be wise to flush everything in pg_operator and pg_proc\n> that has circle as an argument or result type. (Does DROP TYPE do that\n> for you? I bet not...) There might be other system tables that have\n> references to circle, too.\n\n>\n>\n\nI'm not sure what you mean by flush pg_operator and pg_proc. What would the\ncommand be?\n\nThanks.\n-Tony\n\n\n",
"msg_date": "Tue, 31 Aug 1999 13:20:35 -0700",
"msg_from": "\"G. Anthony Reina\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] ERROR: Unable to locate type oid 718 in catalog"
},
{
"msg_contents": "\"G. Anthony Reina\" <[email protected]> writes:\n> Tom Lane wrote:\n>> In any case, it'd be wise to flush everything in pg_operator and pg_proc\n>> that has circle as an argument or result type. (Does DROP TYPE do that\n>> for you? I bet not...) There might be other system tables that have\n>> references to circle, too.\n\n> I'm not sure what you mean by flush pg_operator and pg_proc. What would the\n> command be?\n\nI meant drop all the operators and functions that use circle data.\n\nYou could run the oidjoins regression test script to find out which ones\nthose are ... it should complain about all system table entries that\nrefer to OID 718. (If you are mistaken that you have no tables using\ncircles, you'd find that out, too.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 31 Aug 1999 17:46:29 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ERROR: Unable to locate type oid 718 in catalog "
},
{
"msg_contents": "Tom Lane wrote:\n\n> I meant drop all the operators and functions that use circle data.\n>\n> You could run the oidjoins regression test script to find out which ones\n> those are ... it should complain about all system table entries that\n> refer to OID 718. (If you are mistaken that you have no tables using\n> circles, you'd find that out, too.)\n>\n> regards, tom lane\n\nSo I think you are saying that although none of my tables have the circle type,\nthere are inherent Postgres functions and\noperators which use circle. By running the regression test, I could find out\nwhich functions and operators these are and just drop them. Is the vacuum\ncrapping out then because it is trying to vacuum one of these functions and\nfinding that OID 718 doesn't exist?\n\nThanks.\n-Tony\n\n\n",
"msg_date": "Tue, 31 Aug 1999 15:13:34 -0700",
"msg_from": "\"G. Anthony Reina\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] ERROR: Unable to locate type oid 718 in catalog"
},
{
"msg_contents": "> > I meant drop all the operators and functions that use circle data.\n> So I think you are saying that although none of my tables have the circle type,\n> there are inherent Postgres functions and\n> operators which use circle. By running the regression test, I could find out\n> which functions and operators these are and just drop them. Is the vacuum\n> crapping out then because it is trying to vacuum one of these functions and\n> finding that OID 718 doesn't exist?\n\nAren't the built-in types cached at compile time? Even if not, I'd\n*really* suggest using a different name for your table. Even \"Circle\"\n(including the double-quotes and mixed case) would work, and would\nkeep you from having to drop built-in types.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 01 Sep 1999 01:51:02 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ERROR: Unable to locate type oid 718 in catalog"
},
{
"msg_contents": "\"G. Anthony Reina\" <[email protected]> writes:\n> So I think you are saying that although none of my tables have the\n> circle type, there are inherent Postgres functions and operators which\n> use circle. By running the regression test, I could find out which\n> functions and operators these are and just drop them.\n\nRight.\n\n> Is the vacuum\n> crapping out then because it is trying to vacuum one of these\n> functions and finding that OID 718 doesn't exist?\n\nVacuum doesn't vacuum functions (AFAIK). It does, however, use the type\ninformation about columns of tables that it's vacuuming --- at least it\ndoes in vacuum analyze mode, not sure about plain vacuum. That's why\nI'm suspicious that you have somewhere a forgotten table that has a\ncolumn of circle type...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 01 Sep 1999 09:23:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ERROR: Unable to locate type oid 718 in catalog "
},
{
"msg_contents": "Tom Lane wrote:\n\n> Vacuum doesn't vacuum functions (AFAIK). It does, however, use the type\n> information about columns of tables that it's vacuuming --- at least it\n> does in vacuum analyze mode, not sure about plain vacuum. That's why\n> I'm suspicious that you have somewhere a forgotten table that has a\n> column of circle type...\n\nNope. I'm absolutely, positively, 100% sure that no table uses the type\n'circle'. However, we're going to name the table 'circles' instead and\nre-build the database by dumping, destroying, re-creating, and dumping back\nin. It just makes the most sense.\n\nThanks for the help.\n-Tony\n\n\n\n",
"msg_date": "Wed, 01 Sep 1999 09:55:56 -0700",
"msg_from": "\"G. Anthony Reina\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] ERROR: Unable to locate type oid 718 in catalog"
}
] |
[
{
"msg_contents": "subscribe\n\n\n",
"msg_date": "Tue, 31 Aug 1999 20:30:00 +0200",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "None"
}
] |
[
{
"msg_contents": "Hi All,\n\nCall CVS update I'm getting:-\n\nmtcc:[/export/home/emkxp01/pgsql](39)% cvs update\nFatal error, aborting.\n: no such user\n\n\"truss\"ing the progess I can see:-\n\nsend(3, \" B E G I N A U T H R\".., 19, 0) = 19\nsend(3, \" / u s r / l o c a l / c\".., 18, 0) = 18\nsend(3, \"\\n\", 1, 0) = 1\nsend(3, \" a n o n c v s\", 7, 0) = 7\nsend(3, \"\\n\", 1, 0) = 1\nsend(3, \" A : 0 Z , I d Z\", 9, 0) = 9\nsend(3, \"\\n\", 1, 0) = 1\nsend(3, \" E N D A U T H R E Q\".., 17, 0) = 17\nrecv(3, \" I\", 1, 0) = 1\nrecv(3, \" \", 1, 0) = 1\nrecv(3, \" L\", 1, 0) = 1\nrecv(3, \" O\", 1, 0) = 1\nrecv(3, \" V\", 1, 0) = 1\nrecv(3, \" E\", 1, 0) = 1\nrecv(3, \" \", 1, 0) = 1\nrecv(3, \" Y\", 1, 0) = 1\nrecv(3, \" O\", 1, 0) = 1\nrecv(3, \" U\", 1, 0) = 1\nrecv(3, \"\\n\", 1, 0) = 1\nfcntl(3, F_SETFD, 0x00000001) = 0\nfcntl(3, F_SETFD, 0x00000001) = 0\nfcntl(3, F_SETFD, 0x00000001) = 0\ndup(3) = 5\n\nKeith.\n\n",
"msg_date": "Tue, 31 Aug 1999 22:51:35 +0100 (BST)",
"msg_from": "Keith Parks <[email protected]>",
"msg_from_op": true,
"msg_subject": "CVS Broken?"
},
{
"msg_contents": "\nDamn, I wish FreeBSD would up their CVS ... fixed now...\n\nOn Tue, 31 Aug 1999, Keith Parks wrote:\n\n> Hi All,\n> \n> Call CVS update I'm getting:-\n> \n> mtcc:[/export/home/emkxp01/pgsql](39)% cvs update\n> Fatal error, aborting.\n> : no such user\n> \n> \"truss\"ing the progess I can see:-\n> \n> send(3, \" B E G I N A U T H R\".., 19, 0) = 19\n> send(3, \" / u s r / l o c a l / c\".., 18, 0) = 18\n> send(3, \"\\n\", 1, 0) = 1\n> send(3, \" a n o n c v s\", 7, 0) = 7\n> send(3, \"\\n\", 1, 0) = 1\n> send(3, \" A : 0 Z , I d Z\", 9, 0) = 9\n> send(3, \"\\n\", 1, 0) = 1\n> send(3, \" E N D A U T H R E Q\".., 17, 0) = 17\n> recv(3, \" I\", 1, 0) = 1\n> recv(3, \" \", 1, 0) = 1\n> recv(3, \" L\", 1, 0) = 1\n> recv(3, \" O\", 1, 0) = 1\n> recv(3, \" V\", 1, 0) = 1\n> recv(3, \" E\", 1, 0) = 1\n> recv(3, \" \", 1, 0) = 1\n> recv(3, \" Y\", 1, 0) = 1\n> recv(3, \" O\", 1, 0) = 1\n> recv(3, \" U\", 1, 0) = 1\n> recv(3, \"\\n\", 1, 0) = 1\n> fcntl(3, F_SETFD, 0x00000001) = 0\n> fcntl(3, F_SETFD, 0x00000001) = 0\n> fcntl(3, F_SETFD, 0x00000001) = 0\n> dup(3) = 5\n> \n> Keith.\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 31 Aug 1999 19:57:11 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CVS Broken?"
}
] |
[
{
"msg_contents": "Okay, I finally convinced my partner that making a table named 'circle'\nand dropping the type 'circle' to compensate is just a bad idea. We're\ngoing to rename the table 'circles' and restore the 'circle' type. Could\nyou give me the psql command line to restore the circle type?\n\nThanks.\n-Tony\n\n\n",
"msg_date": "Tue, 31 Aug 1999 15:26:05 -0700",
"msg_from": "\"G. Anthony Reina\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "OID 718 and Circle"
},
{
"msg_contents": "\"G. Anthony Reina\" <[email protected]> writes:\n> Okay, I finally convinced my partner that making a table named 'circle'\n> and dropping the type 'circle' to compensate is just a bad idea. We're\n> going to rename the table 'circles' and restore the 'circle' type. Could\n> you give me the psql command line to restore the circle type?\n\nI think you gotta rebuild the database --- if you just do a new CREATE\nTYPE for circle, it won't have the right OID...\n\nYou might be able to do a COPY WITH OIDS out of template1's pg_type,\nedit it down to just the line for OID 718, and then COPY WITH OIDS\nto your own database's pg_type. Not sure if that will work though.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 01 Sep 1999 09:26:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] OID 718 and Circle "
}
] |
[
{
"msg_contents": "Is there Changes list for 6.5.2 ? I checked REL6_5 tree and didn't\nfind it. It seems that Vadim's patch for nbtinsert.c (row-reuse)\nstill didn't applied.\nThis patch prevents index file to grow indefinitely and I consider it\nas a bug fix. It's not complete, index file still grow but with much \nless factor.\n\n\n\tRegards,\n\n\t\tOleg\n\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Wed, 1 Sep 1999 10:09:19 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Changes for 6.5.2 ?"
},
{
"msg_contents": "Oleg Bartunov <[email protected]> writes:\n> Is there Changes list for 6.5.2 ? I checked REL6_5 tree and didn't\n> find it. It seems that Vadim's patch for nbtinsert.c (row-reuse)\n> still didn't applied.\n\nI think it fell through the cracks --- IIRC, Vadim didn't have REL6_5\ninstalled locally so he asked for someone else to apply that patch\nto the stable branch. Bruce usually handles that kind of thing but\nhe's been gone for the last few days. I'll see about sticking it in\nthis evening, unless someone objects or beats me to it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 01 Sep 1999 09:58:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Changes for 6.5.2 ? "
},
{
"msg_contents": "On Wed, 1 Sep 1999, Tom Lane wrote:\n\n> Date: Wed, 01 Sep 1999 09:58:33 -0400\n> From: Tom Lane <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: [email protected]\n> Subject: Re: [HACKERS] Changes for 6.5.2 ? \n> \n> Oleg Bartunov <[email protected]> writes:\n> > Is there Changes list for 6.5.2 ? I checked REL6_5 tree and didn't\n> > find it. It seems that Vadim's patch for nbtinsert.c (row-reuse)\n> > still didn't applied.\n> \n> I think it fell through the cracks --- IIRC, Vadim didn't have REL6_5\n> installed locally so he asked for someone else to apply that patch\n> to the stable branch. Bruce usually handles that kind of thing but\n> he's been gone for the last few days. I'll see about sticking it in\n> this evening, unless someone objects or beats me to it.\n\nOK, Understand, I just worried about not to forget this patch :-)\nI've tested this patch under quite intensive updates and it seems\nworks fine - I have cron task to vacuum updated table. index file\ntruncated but still grow.\n\n\n\tOleg\n\n\n> \n> \t\t\tregards, tom lane\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Wed, 1 Sep 1999 18:13:15 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Changes for 6.5.2 ? "
},
{
"msg_contents": "\nErk...point me at it or send it to me and I can get at it now even...\n\nOn Wed, 1 Sep 1999, Tom Lane wrote:\n\n> Oleg Bartunov <[email protected]> writes:\n> > Is there Changes list for 6.5.2 ? I checked REL6_5 tree and didn't\n> > find it. It seems that Vadim's patch for nbtinsert.c (row-reuse)\n> > still didn't applied.\n> \n> I think it fell through the cracks --- IIRC, Vadim didn't have REL6_5\n> installed locally so he asked for someone else to apply that patch\n> to the stable branch. Bruce usually handles that kind of thing but\n> he's been gone for the last few days. I'll see about sticking it in\n> this evening, unless someone objects or beats me to it.\n> \n> \t\t\tregards, tom lane\n> \n> ************\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Wed, 1 Sep 1999 11:41:56 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Changes for 6.5.2 ? "
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> Erk...point me at it or send it to me and I can get at it now even...\n\nI don't have a copy of the patch handy either --- look at Vadim's\npostings in pghackers for the last few weeks.\n\nActually it looks like you could just commit the current nbtinsert.c\ninto REL6_5, but I haven't tried it... not sure if the patch included\nchanges in any other file...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 01 Sep 1999 10:46:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Changes for 6.5.2 ? "
},
{
"msg_contents": "> Is there Changes list for 6.5.2 ? I checked REL6_5 tree and didn't\n> find it. It seems that Vadim's patch for nbtinsert.c (row-reuse)\n> still didn't applied.\n> This patch prevents index file to grow indefinitely and I consider it\n> as a bug fix. It's not complete, index file still grow but with much \n> less factor.\n> \n> \n\nI just got back from vacation, and will start on this.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 1 Sep 1999 20:04:10 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Changes for 6.5.2 ?"
},
{
"msg_contents": "\nAlready done :)\n\n\nOn Wed, 1 Sep 1999, Bruce Momjian wrote:\n\n> > Is there Changes list for 6.5.2 ? I checked REL6_5 tree and didn't\n> > find it. It seems that Vadim's patch for nbtinsert.c (row-reuse)\n> > still didn't applied.\n> > This patch prevents index file to grow indefinitely and I consider it\n> > as a bug fix. It's not complete, index file still grow but with much \n> > less factor.\n> > \n> > \n> \n> I just got back from vacation, and will start on this.\n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ************\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Wed, 1 Sep 1999 21:23:30 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Changes for 6.5.2 ?"
},
{
"msg_contents": "> \n> Already done :)\n> \n> \n> On Wed, 1 Sep 1999, Bruce Momjian wrote:\n> \n> > > Is there Changes list for 6.5.2 ? I checked REL6_5 tree and didn't\n> > > find it. It seems that Vadim's patch for nbtinsert.c (row-reuse)\n> > > still didn't applied.\n\nI still need to do the HISTORY/Changes list, and mark all the files as 6.5.2.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 1 Sep 1999 20:23:35 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Changes for 6.5.2 ?"
},
{
"msg_contents": "On Wed, 1 Sep 1999, Bruce Momjian wrote:\n\n> > \n> > Already done :)\n> > \n> > \n> > On Wed, 1 Sep 1999, Bruce Momjian wrote:\n> > \n> > > > Is there Changes list for 6.5.2 ? I checked REL6_5 tree and didn't\n> > > > find it. It seems that Vadim's patch for nbtinsert.c (row-reuse)\n> > > > still didn't applied.\n> \n> I still need to do the HISTORY/Changes list, and mark all the files as 6.5.2.\n\nOops...*hangs head* *waddles away*\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Wed, 1 Sep 1999 22:39:32 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Changes for 6.5.2 ?"
}
] |
[
{
"msg_contents": "Hi,\n\nwhen trying to compile the \"current\" sources under Solaris 2.5.1, \nthe yacc-generated C source in interfaces/ecpg/preproc/preproc.c \nseems to be faulty:\n\ngcc -I../../../include -I../../../backend -I/usr/local/tcltk-8.0p2/include -Wall -Wmissing-prototypes -I../include -DMAJOR_VERSION=2 -DMINOR_VERSION=6 -DPATCHLEVEL=2 -DINCLUDE_PATH=\\\"/usr/local/pgsql-develop/include\\\" -c preproc.c \n/usr/ccs/bin/yaccpar: In function `yyparse':\n/usr/ccs/bin/yaccpar:275: warning: implicit declaration of function `yylex'\npreproc.y:3822: parse error before `}'\n/usr/ccs/bin/yaccpar:375: warning: label `yyerrlab' defined but not used\n/usr/ccs/bin/yaccpar:165: warning: label `yynewstate' defined but not used\ngmake[3]: *** [preproc.o] Error 1\ngmake[3]: Leaving directory `/share/syswork2/sw/PostgreSQL/CURRENT/pgsql/src/interfaces/ecpg/preproc'\n\nRunning preproc.y through Bison (under Linux) helped, but this is\nof course no adequate solution.\n\nBest regards\nRainer Klute\n\n Dipl.-Inform. Tel.: +49 211 9330260\n Rainer Klute Fax: +49 211 9330293\n NADS GmbH\n Hildebrandtstr. 4e <http://www.nads.de/>\nD-40215 D�sseldorf <http://www.pixelboxx.de/>\n",
"msg_date": "Wed, 01 Sep 1999 09:35:32 +0200",
"msg_from": "Rainer Klute <[email protected]>",
"msg_from_op": true,
"msg_subject": "Yacc output faulty (\"current\")"
},
{
"msg_contents": "[Charset ISO-8859-1 unsupported, filtering to ASCII...]\n> Hi,\n> \n> when trying to compile the \"current\" sources under Solaris 2.5.1, \n> the yacc-generated C source in interfaces/ecpg/preproc/preproc.c \n> seems to be faulty:\n> \n> gcc -I../../../include -I../../../backend -I/usr/local/tcltk-8.0p2/include -Wall -Wmissing-prototypes -I../include -DMAJOR_VERSION=2 -DMINOR_VERSION=6 -DPATCHLEVEL=2 -DINCLUDE_PATH=\\\"/usr/local/pgsql-develop/include\\\" -c preproc.c \n> /usr/ccs/bin/yaccpar: In function `yyparse':\n> /usr/ccs/bin/yaccpar:275: warning: implicit declaration of function `yylex'\n> preproc.y:3822: parse error before `}'\n> /usr/ccs/bin/yaccpar:375: warning: label `yyerrlab' defined but not used\n> /usr/ccs/bin/yaccpar:165: warning: label `yynewstate' defined but not used\n> gmake[3]: *** [preproc.o] Error 1\n> gmake[3]: Leaving directory `/share/syswork2/sw/PostgreSQL/CURRENT/pgsql/src/interfaces/ecpg/preproc'\n> \n> Running preproc.y through Bison (under Linux) helped, but this is\n> of course no adequate solution.\n> \n\nYes, we know. Will work in next release.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 1 Sep 1999 20:04:54 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Yacc output faulty (\"current\")"
}
] |
[
{
"msg_contents": ">> There is the possibility for ambiguity. But it is our responsibility\n>> to minimize that ambiguity and to make a predictable system, in the\n>> presence of Postgres' unique and valuable features such as type\n>> extension. imho this is more important than, for example, allowing\n>> infinite-length queries.\nI agree that predictability is more important than the limit on the query\nlength, but I think that they can coexist. I'm not aware of what the unary\nminus recognition in the scanner is about, and why it's important, but if it\nis important, then perhaps we can look at implementing it in such a way that\nno vltc is created. This should be possible. After conversation with Vern\n(Paxson, author of flex), it appears that we can, under normal conditions,\nuse a start condition to allow the same token to be identified. This\nremoves the vltc, which in turn, means that we don't limit the length of the\ntoken. Also, vltcs are major performance degraders.\n\nBTW, Thomas, it's not the query length that is limited by this unary minus\nissue, but the token length. The reason I see this as important is because\nit means that, once row size is independent of block size, people will try\nto insert large text fields. 
A large text field is a single token.\nSo, at the moment, it's not really an issue, but I was hoping to get it out\nthe way before the row size issue was tackled, so that when that was\ncomplete, everything just worked ;-)\n\n>> Sorry, what is the performance penalty for that feature, and \n>> how do we measure that against breakage of expected, predictable\nbehavior?\n>> Please quantify.\n>> \n>> So far, I'm not a fan of the proposed change; we're giving up behavior\n>> that impacts Postgres' unique type extension features for an\n>> arbitrarily large query buffer (as opposed to a generously large query\n>> buffer, which can be accomplished just by changing the fixed size).\nLike I say, I think we can do both (and remove the performance penalty of\nthe vltc), if we do it right. Thomas, can you send me enough info about the\nunary minus token (basic explanation, gotchas, anything else I should know),\nand I'll have a look at using a start condition to implement it.\n\nMikeA\n",
"msg_date": "Wed, 1 Sep 1999 10:18:14 +0200 ",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Postgres' lexer"
}
] |
[
{
"msg_contents": "I think I found some bugs in SELECT...\nI have two tables MASTER1 and DETAIL1 both of them with only one field\nCODE\nof data type VARCHAR but MASTER1.CODE is 11 char long and DETAIL1.CODE\n16 char l\n\nhygea=> \\d master1\nTable = master1\n+----------------------------------+----------------------------------+-------+\n\n| Field | Type |\nLength|\n+----------------------------------+----------------------------------+-------+\n\n| code | varchar()\n| 11 |\n+----------------------------------+----------------------------------+-------+\n\nhygea=> \\d detail1\nTable = detail1\n+----------------------------------+----------------------------------+-------+\n\n| Field | Type |\nLength|\n+----------------------------------+----------------------------------+-------+\n\n| code | varchar()\n| 16 |\n+----------------------------------+----------------------------------+-------+\n\n--I have the following test data into these tables:\n\nhygea=> select * from master1;\ncode\n-----------\na\na1\na13\n(3 rows)\n\nhygea=> select * from detail1;\ncode\n----------------\na13\na13\na1\n(3 rows)\n\n--if I try to join these two tables I have the following (nothing):\n\nhygea=> select m.*, d.* from master1 m, detail1 d where m.code=d.code;\ncode|code\n----+----\n(0 rows)\n--and now trying with TRIM function... 
it works!\n\nhygea=> select m.*, d.* from master1 m, detail1 d where\ntrim(m.code)=trim(d.code\ncode |code\n-----------+----------------\na13 |a13\na13 |a13\na1 |a1\n(3 rows)\n\n--and last another variation using aliases: (note that I forgot to\nchange\n-- MASTER1 with m and DETAIL1 with d:\nhygea=> select master1.*, detail1.* from master1 m, detail1 d where\ntrim(m.code)\ncode |code\n-----------+----------------\na |a13\na1 |a13\na13 |a13\na |a13\na1 |a13\na13 |a13\na |a1\na1 |a1\na13 |a1\na |a13\na1 |a13\na13 |a13\na |a13\na1 |a13\na13 |a13\na |a1\na1 |a1\na13 |a1\na |a13\na1 |a13\na13 |a13\na |a13\na1 |a13\na13 |a13\na |a1\na1 |a1\na13 |a1\n(27 rows)\n\nAny ideas?\n\nJos�\n\n\n\nI think I found some bugs in SELECT...\nI have two tables MASTER1 and DETAIL1 both of them with only one\nfield CODE\nof data type VARCHAR but MASTER1.CODE is 11 char long and DETAIL1.CODE\n16 char l\nhygea=> \\d master1\nTable = master1\n+----------------------------------+----------------------------------+-------+\n| \nField \n| \nType \n| Length|\n+----------------------------------+----------------------------------+-------+\n| code \n| varchar() \n| 11 |\n+----------------------------------+----------------------------------+-------+\nhygea=> \\d detail1\nTable = detail1\n+----------------------------------+----------------------------------+-------+\n| \nField \n| \nType \n| Length|\n+----------------------------------+----------------------------------+-------+\n| code \n| varchar() \n| 16 |\n+----------------------------------+----------------------------------+-------+\n--I have the following test data into these tables:\nhygea=> select * from master1;\ncode\n-----------\na\na1\na13\n(3 rows)\nhygea=> select * from detail1;\ncode\n----------------\na13\na13\na1\n(3 rows)\n--if I try to join these two tables I have the following (nothing):\nhygea=> select m.*, d.* from master1 m, detail1 d where m.code=d.code;\ncode|code\n----+----\n(0 rows)\n--and now trying with TRIM 
function... it works!\nhygea=> select m.*, d.* from master1 m, detail1 d where trim(m.code)=trim(d.code\ncode |code\n-----------+----------------\na13 |a13\na13 |a13\na1 |a1\n(3 rows)\n--and last another variation using aliases: (note that I forgot\nto change\n-- MASTER1 with m and DETAIL1 with d:\nhygea=> select master1.*, detail1.* from master1 m, detail1 d where\ntrim(m.code)\ncode |code\n-----------+----------------\na |a13\na1 |a13\na13 |a13\na |a13\na1 |a13\na13 |a13\na |a1\na1 |a1\na13 |a1\na |a13\na1 |a13\na13 |a13\na |a13\na1 |a13\na13 |a13\na |a1\na1 |a1\na13 |a1\na |a13\na1 |a13\na13 |a13\na |a13\na1 |a13\na13 |a13\na |a1\na1 |a1\na13 |a1\n(27 rows)\nAny ideas?\nJosé",
"msg_date": "Wed, 01 Sep 1999 12:44:58 +0200",
"msg_from": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]>",
"msg_from_op": true,
"msg_subject": "SELECT BUG"
},
{
"msg_contents": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]> writes:\n> --I have the following test data into these tables:\n\n> hygea=> select * from master1;\n> code\n> -----------\n> a\n> a1\n> a13\n> (3 rows)\n\n> hygea=> select * from detail1;\n> code\n> ----------------\n> a13\n> a13\n> a1\n> (3 rows)\n\n> --if I try to join these two tables I have the following (nothing):\n\n> hygea=> select m.*, d.* from master1 m, detail1 d where m.code=d.code;\n> code|code\n> ----+----\n> (0 rows)\n> --and now trying with TRIM function... it works!\n\n> hygea=> select m.*, d.* from master1 m, detail1 d where\n> trim(m.code)=trim(d.code\n> code |code\n> -----------+----------------\n> a13 |a13\n> a13 |a13\n> a1 |a1\n> (3 rows)\n\nLooks to me like you have differing numbers of trailing spaces in the\nentries in each table. If so, this is not a bug.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 01 Sep 1999 09:30:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SELECT BUG "
},
{
"msg_contents": "You mean that \"a1 \" is not equal to \"a1 \" ?\nbut PostgreSQL has a different behavior in the following example:\n\nhygea=> select code,len(code) as len_of_code,code1, len(code1) as\nlen_of_code1\nfrom master1 where code = code1;\n\ncode |len_of_code|code1 |len_of_code1\n----------+-----------+------------+------------\na1 | 10|a1 | 15\n(1 row)\n\n\nin this case the test code = code1 is true even if these fields have\ndifferent number of trailling spaces.\n\nTherefore if the above test is OK there's a bug on:\n\n select m.*, d.* from master1 m, detail1 d where m.code=d.code;\n\n\nJos�\n\n\nTom Lane ha scritto:\n\n> =?iso-8859-1?Q?Jos=E9?= Soares <[email protected]> writes:\n> > --I have the following test data into these tables:\n>\n> > hygea=> select * from master1;\n> > code\n> > -----------\n> > a\n> > a1\n> > a13\n> > (3 rows)\n>\n> > hygea=> select * from detail1;\n> > code\n> > ----------------\n> > a13\n> > a13\n> > a1\n> > (3 rows)\n>\n> > --if I try to join these two tables I have the following (nothing):\n>\n> > hygea=> select m.*, d.* from master1 m, detail1 d where m.code=d.code;\n> > code|code\n> > ----+----\n> > (0 rows)\n> > --and now trying with TRIM function... it works!\n>\n> > hygea=> select m.*, d.* from master1 m, detail1 d where\n> > trim(m.code)=trim(d.code\n> > code |code\n> > -----------+----------------\n> > a13 |a13\n> > a13 |a13\n> > a1 |a1\n> > (3 rows)\n>\n> Looks to me like you have differing numbers of trailing spaces in the\n> entries in each table. 
If so, this is not a bug.\n>\n> regards, tom lane\n\n\nYou mean that \"a1 \" is not equal to \"a1 \n\" ?\nbut PostgreSQL has a different behavior in the following example:\nhygea=> select code,len(code) as len_of_code,code1, len(code1) as\nlen_of_code1\nfrom master1 where code = code1;\ncode |len_of_code|code1 \n|len_of_code1\n----------+-----------+------------+------------\na1 | \n10|a1 | \n15\n(1 row)\n \nin this case the test code = code1 is true even if these fields\nhave\ndifferent number of trailling spaces.\nTherefore if the above test is OK there's a bug on:\n select m.*, d.* from\nmaster1 m, detail1 d where m.code=d.code;\n \nJosé\n \nTom Lane ha scritto:\n=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]>\nwrites:\n> --I have the following test data into these tables:\n> hygea=> select * from master1;\n> code\n> -----------\n> a\n> a1\n> a13\n> (3 rows)\n> hygea=> select * from detail1;\n> code\n> ----------------\n> a13\n> a13\n> a1\n> (3 rows)\n> --if I try to join these two tables I have the following (nothing):\n> hygea=> select m.*, d.* from master1 m, detail1 d where m.code=d.code;\n> code|code\n> ----+----\n> (0 rows)\n> --and now trying with TRIM function... it works!\n> hygea=> select m.*, d.* from master1 m, detail1 d where\n> trim(m.code)=trim(d.code\n> code |code\n> -----------+----------------\n> a13 |a13\n> a13 |a13\n> a1 |a1\n> (3 rows)\nLooks to me like you have differing numbers of trailing spaces in\nthe\nentries in each table. If so, this is not a bug.\n \nregards, tom lane",
"msg_date": "Wed, 01 Sep 1999 18:22:27 +0200",
"msg_from": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] SELECT BUG"
},
{
"msg_contents": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]> writes:\n> You mean that \"a1 \" is not equal to \"a1 \" ?\n\nI don't think they're equal ... do you? That is what trim()\nis for, after all.\n\n> but PostgreSQL has a different behavior in the following example:\n> hygea=> select code,len(code) as len_of_code,code1, len(code1) as\n> len_of_code1\n> from master1 where code = code1;\n\nWhat is this \"len\" function? I don't find one in the standard\ndistribution. I suspect you have some locally developed function\nthat returns the attrmod of the column --- which is the maximum\nlength of a varchar, but is not the same as the *actual* length\nof the value.\n\n> in this case the test code = code1 is true even if these fields have\n> different number of trailling spaces.\n\nI see no such behavior:\n\nregression=> create table z2 (code varchar(10), code1 varchar(15));\nCREATE\nregression=> select code,len(code) from z2;\nERROR: No such function 'len' with the specified attributes\nregression=> insert into z2 values ('a1', 'a1');\nINSERT 282452 1\nregression=> insert into z2 values ('a1 ', 'a1 ');\nINSERT 282453 1\nregression=> select *,length(code),length(code1) from z2 ;\ncode|code1 |length|length\n----+---------+------+------\na1 |a1 | 2| 2\na1 |a1 | 4| 9\n(2 rows)\nregression=> select *,length(code),length(code1) from z2 where code = code1;\ncode|code1|length|length\n----+-----+------+------\na1 |a1 | 2| 2\n(1 row)\n\nCan you provide a reproducible example?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Sep 1999 09:19:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SELECT BUG "
},
{
"msg_contents": "> You mean that \"a1 \" is not equal to \"a1 \" ?\n> but PostgreSQL has a different behavior in the following example:\n\nYou will have to give more details on your schema and data entry for\nus to see the problem; things look good to me too. Examples below...\n\n - Thomas\n\npostgres=> create table t1 (v3 varchar(3), v5 varchar(5), c3 char(3),\nc5 char(5));\nCREATE\npostgres=> insert into t1 values ('a1 ', 'a1 ', 'a1', 'a1');\nINSERT 150220 1\npostgres=> select * from t1 where v3 = v5;\nv3|v5|c3|c5\n--+--+--+--\n(0 rows)\n\npostgres=> select * from t1 where c3 = c5;\nv3 |v5 |c3 |c5 \n---+-----+---+-----\na1 |a1 |a1 |a1 \n(1 row)\n\npostgres=> select * from t1 where trim(v3) = trim(v5);\nv3 |v5 |c3 |c5 \n---+-----+---+-----\na1 |a1 |a1 |a1 \n(1 row)\n\npostgres=> insert into t1 values ('a2', 'a2', 'a2', 'a2');\nINSERT 150221 1\npostgres=> select * from t1 where v3 = v5;\nv3|v5|c3 |c5 \n--+--+---+-----\na2|a2|a2 |a2 \n(1 row)\n\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 02 Sep 1999 13:59:00 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SELECT BUG"
},
{
"msg_contents": "Sorry for the confusion:\n\nHere an example...\n\ncreate table master(mcode char(11), mcode1 char(16));\ncreate table detail(dcode char(16));\ninsert into master values ('a','a');\ninsert into master values ('a1','a1');\ninsert into master values ('a13','a13');\ninsert into detail values ('a13');\ninsert into detail values ('a1');\ninsert into detail values ('a13');\n\n--in the following example mcode is long 11 and mcode1 is long 16\n--but mcode=mcode1 is true:\n\nselect * from master where mcode=mcode1;\nmcode |mcode1\n-----------+----------------\na |a\na1 |a1\na13 |a13\n(3 rows)\n\n--in the following example mcode is long 11 and dcode1 is long 16\n--but mcode=dcode1 is false:\n\nselect mcode, dcode from master m, detail d where mcode=dcode;\nmcode|dcode\n-----+-----\n(0 rows)\n\n\nthe same example in informix-SE gives me this:\n----------------------------------------------\ncode code\n\na1 a1\na13 a13\n\n\nJos�\n\n\nSorry for the confusion:\nHere an example...\ncreate table master(mcode char(11), mcode1 char(16));\ncreate table detail(dcode char(16));\ninsert into master values ('a','a');\ninsert into master values ('a1','a1');\ninsert into master values ('a13','a13');\ninsert into detail values ('a13');\ninsert into detail values ('a1');\ninsert into detail values ('a13');\n--in the following example mcode is long 11 and mcode1 is long 16\n--but mcode=mcode1 is true:\nselect * from master where mcode=mcode1;\nmcode |mcode1\n-----------+----------------\na |a\na1 |a1\na13 |a13\n(3 rows)\n--in the following example mcode is long 11 and dcode1 is long 16\n--but mcode=dcode1 is false:\nselect mcode, dcode from master m, detail d where mcode=dcode;\nmcode|dcode\n-----+-----\n(0 rows)\n \nthe same example in informix-SE gives me this:\n----------------------------------------------\ncode code\na1 a1\na13 a13\n \nJosé",
"msg_date": "Thu, 02 Sep 1999 18:47:41 +0200",
"msg_from": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] SELECT BUG"
},
{
"msg_contents": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]> writes:\n> Here an example...\n> create table master(mcode char(11), mcode1 char(16));\n> create table detail(dcode char(16));\n> insert into master values ('a','a');\n> insert into master values ('a1','a1');\n> insert into master values ('a13','a13');\n> insert into detail values ('a13');\n> insert into detail values ('a1');\n> insert into detail values ('a13');\n\n> --in the following example mcode is long 11 and mcode1 is long 16\n> --but mcode=mcode1 is true:\n\n> select * from master where mcode=mcode1;\n> mcode |mcode1\n> -----------+----------------\n> a |a\n> a1 |a1\n> a13 |a13\n> (3 rows)\n\nOn looking at the bpchar (ie, fixed-length char) comparison functions,\nI see that they *do* strip trailing blanks before comparing. varchar\nand text do not do this --- they assume trailing blanks are real data.\n\nThis inconsistency bothers me: I've always thought that char(),\nvarchar(), and text() are functionally interchangeable, but it seems\nthat's not so. Is this behavior mandated by SQL92?\n\n> --in the following example mcode is long 11 and dcode1 is long 16\n> --but mcode=dcode1 is false:\n\n> select mcode, dcode from master m, detail d where mcode=dcode;\n> mcode|dcode\n> -----+-----\n> (0 rows)\n\nOh my, that's interesting. Executing your query with current sources\ngives me:\n\nregression=> select mcode, dcode from master m, detail d where mcode=dcode;\nmcode |dcode\n-----------+----------------\na1 |a1\na13 |a13\na13 |a13\n(3 rows)\n\nWhen I \"explain\" this, I see that I am getting a mergejoin plan.\nAre you getting a hash join, perhaps?\n\nbpchareq is marked hashjoinable in pg_operator, but if its behavior\nincludes blank-stripping then that is WRONG. Hashjoin is only safe\nfor operators that represent bitwise equality...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Sep 1999 13:33:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SELECT BUG "
},
{
"msg_contents": "> On looking at the bpchar (ie, fixed-length char) comparison functions,\n> I see that they *do* strip trailing blanks before comparing. varchar\n> and text do not do this --- they assume trailing blanks are real data.\n> This inconsistency bothers me: I've always thought that char(),\n> varchar(), and text() are functionally interchangeable, but it seems\n> that's not so. Is this behavior mandated by SQL92?\n\nI was pretty sure it is (though of course \"text\" isn't an SQL92 type).\nWhat I'm finding in Date and Darwen and my draft SQL92 document is\nthat whether the default character set uses SPACE PAD or NO PAD\ncollation attribute for a character set is implementation defined.\n\nI haven't found any explicit reference to a distinction between CHAR\nand VARCHAR in the docs nor a discussion of the SQL_TEXT character set\nwrt this topic. So apparently SQL_TEXT properties are implementation\ndefined too. But we should look into it more before deciding to change\nanything because afaik the current behavior has been the same in\nPostgres forever...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 03 Sep 1999 02:18:41 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SELECT BUG"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> This inconsistency bothers me: I've always thought that char(),\n>> varchar(), and text() are functionally interchangeable, but it seems\n>> that's not so. Is this behavior mandated by SQL92?\n\n> I haven't found any explicit reference to a distinction between CHAR\n> and VARCHAR in the docs nor a discussion of the SQL_TEXT character set\n> wrt this topic. So apparently SQL_TEXT properties are implementation\n> defined too. But we should look into it more before deciding to change\n> anything because afaik the current behavior has been the same in\n> Postgres forever...\n\nI'm not necessarily arguing for a change; if you're satisfied that\nthe existing comparison logic obeys the spec, it's OK with me.\n(Ignoring trailing blanks in bpchar does seem reasonable when you\nthink about it.)\n\nBut if it is correct, then we need to turn off oprcanhash for bpchareq.\nOdd that no one has noticed this before.\n\nSome doc updates might be in order too...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 03 Sep 1999 01:59:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SELECT BUG "
},
{
"msg_contents": "\n\nTom Lane ha scritto:\n\n> =?iso-8859-1?Q?Jos=E9?= Soares <[email protected]> writes:\n> > Here an example...\n> > create table master(mcode char(11), mcode1 char(16));\n> > create table detail(dcode char(16));\n> > insert into master values ('a','a');\n> > insert into master values ('a1','a1');\n> > insert into master values ('a13','a13');\n> > insert into detail values ('a13');\n> > insert into detail values ('a1');\n> > insert into detail values ('a13');\n>\n> > --in the following example mcode is long 11 and mcode1 is long 16\n> > --but mcode=mcode1 is true:\n>\n> > select * from master where mcode=mcode1;\n> > mcode |mcode1\n> > -----------+----------------\n> > a |a\n> > a1 |a1\n> > a13 |a13\n> > (3 rows)\n>\n> On looking at the bpchar (ie, fixed-length char) comparison functions,\n> I see that they *do* strip trailing blanks before comparing. varchar\n> and text do not do this --- they assume trailing blanks are real data.\n>\n> This inconsistency bothers me: I've always thought that char(),\n> varchar(), and text() are functionally interchangeable, but it seems\n> that's not so. Is this behavior mandated by SQL92?\n>\n> > --in the following example mcode is long 11 and dcode1 is long 16\n> > --but mcode=dcode1 is false:\n>\n> > select mcode, dcode from master m, detail d where mcode=dcode;\n> > mcode|dcode\n> > -----+-----\n> > (0 rows)\n>\n> Oh my, that's interesting. 
Executing your query with current sources\n> gives me:\n>\n> regression=> select mcode, dcode from master m, detail d where mcode=dcode;\n> mcode      |dcode\n> -----------+----------------\n> a1         |a1\n> a13        |a13\n> a13        |a13\n> (3 rows)\n>\n> When I \"explain\" this, I see that I am getting a mergejoin plan.\n> Are you getting a hash join, perhaps?\n\nYes.\n\n> prova=> explain select mcode, dcode from master m, detail d where\n> mcode=dcode;\n> NOTICE: QUERY PLAN:\n>\n> Hash Join (cost=156.00 rows=1001 width=24)\n> -> Seq Scan on detail d (cost=43.00 rows=1000 width=12)\n> -> Hash (cost=43.00 rows=1000 width=12)\n> -> Seq Scan on master m (cost=43.00 rows=1000 width=12)\n>\n> EXPLAIN\n>\n\nJosé\n\n>\n\n>\n> bpchareq is marked hashjoinable in pg_operator, but if its behavior\n> includes blank-stripping then that is WRONG. Hashjoin is only safe\n> for operators that represent bitwise equality...\n>\n> regards, tom lane\n\n",
"msg_date": "Fri, 03 Sep 1999 12:57:00 +0200",
"msg_from": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] SELECT BUG"
},
{
    "msg_contents": "And now the other SELECT bug in the same data:\n\nselect master1.*, detail1.*\nfrom master1 m, detail1 d\nwhere trim(m.code)=trim(d.code);\n\n(I know there's an error in this syntax, but I don't know why PostgreSQL\nfinds it good and executes a strange query)\n\ncode       |code1           |code\n-----------+----------------+----------------\na          |a               |a13\na1         |a1              |a13\na13        |a13             |a13\na          |a               |a1\na1         |a1              |a1\na13        |a13             |a1\na          |a               |a13\na1         |a1              |a13\na13        |a13             |a13\na          |a               |a13\na1         |a1              |a13\na13        |a13             |a13\na          |a               |a1\na1         |a1              |a1\na13        |a13             |a1\na          |a               |a13\na1         |a1              |a13\na13        |a13             |a13\na          |a               |a13\na1         |a1              |a13\na13        |a13             |a13\na          |a               |a1\na1         |a1              |a1\na13        |a13             |a1\na          |a               |a13\na1         |a1              |a13\na13        |a13             |a13\n(27 rows)\n\nAny idea ?\n\nJosé",
"msg_date": "Fri, 03 Sep 1999 14:28:54 +0200",
"msg_from": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] SELECT BUG"
},
{
"msg_contents": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]> writes:\n>> When I \"explain\" this, I see that I am getting a mergejoin plan.\n>> Are you getting a hash join, perhaps?\n\n> Yes.\n\n> prova=> explain select mcode, dcode from master m, detail d where\n> mcode=dcode;\n> NOTICE: QUERY PLAN:\n> \n> Hash Join (cost=156.00 rows=1001 width=24)\n> -> Seq Scan on detail d (cost=43.00 rows=1000 width=12)\n> -> Hash (cost=43.00 rows=1000 width=12)\n> -> Seq Scan on master m (cost=43.00 rows=1000 width=12)\n> \n> EXPLAIN\n\nOK, do this:\n\nupdate pg_operator set oprcanhash = 'f' where oid = 1054;\n\nand I think you'll be OK. I will put that change into the sources.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 03 Sep 1999 11:04:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SELECT BUG "
},
{
"msg_contents": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]> writes:\n> And now the other SELECT bug in the same data:\n> select master1.*, detail1.*\n> from master1 m, detail1 d\n> where trim(m.code)=trim(d.code);\n\nThis one is definitely pilot error. Since you've renamed master1 and\ndetail1 in the FROM clause, your use of the original names in the SELECT\nlist is treated as adding more FROM items. Effectively your query is\n\nselect m2.*, d2.*\nfrom master1 m, detail1 d, master1 m2, detail1 d2\nwhere trim(m.code)=trim(d.code);\n\nYou're getting a four-way join with only one restriction clause...\n\nThere was a thread just the other day about whether we ought to allow\nqueries like this, because of someone else making exactly the same\nerror. I believe allowing tables to be referenced without FROM entries\nis a holdover from the old Postquel language that's not found in SQL92.\nMaybe we should get rid of it on the grounds that it creates confusion.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 03 Sep 1999 11:13:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SELECT BUG "
},
{
    "msg_contents": "\n\nTom Lane ha scritto:\n\n> =?iso-8859-1?Q?Jos=E9?= Soares <[email protected]> writes:\n> > And now the other SELECT bug in the same data:\n> > select master1.*, detail1.*\n> > from master1 m, detail1 d\n> > where trim(m.code)=trim(d.code);\n>\n> This one is definitely pilot error. Since you've renamed master1 and\n> detail1 in the FROM clause, your use of the original names in the SELECT\n> list is treated as adding more FROM items. Effectively your query is\n>\n> select m2.*, d2.*\n> from master1 m, detail1 d, master1 m2, detail1 d2\n> where trim(m.code)=trim(d.code);\n>\n> You're getting a four-way join with only one restriction clause...\n>\n> There was a thread just the other day about whether we ought to allow\n> queries like this, because of someone else making exactly the same\n> error. I believe allowing tables to be referenced without FROM entries\n> is a holdover from the old Postquel language that's not found in SQL92.\n> Maybe we should get rid of it on the grounds that it creates confusion.\n>\n> regards, tom lane\n>\n>\n\nPostgreSQL should raise a syntax error like Informix and Oracle do.\n\n> ************\n> INFORMIX:\n>\n> select master1.*, detail1.* from master1 m, detail1 d where mcode=dcode;\n> # ^\n> # 522: Table (master1) not selected in query.\n> #\n> ------------------------------------------------------------------------\n> ORACLE:\n>\n> select master1.*, detail1.* from master1 m, detail1 d where mcode=dcode\n> *\n> ERROR at line1:\n> ORA-00942: table or view does not exist\n>\n>\n\nJosé\n\n",
"msg_date": "Tue, 07 Sep 1999 12:37:17 +0200",
"msg_from": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] SELECT BUG"
},
{
    "msg_contents": "Tom Lane ha scritto:\n\n> =?iso-8859-1?Q?Jos=E9?= Soares <[email protected]> writes:\n> >> When I \"explain\" this, I see that I am getting a mergejoin plan.\n> >> Are you getting a hash join, perhaps?\n>\n> > Yes.\n>\n> > prova=> explain select mcode, dcode from master m, detail d where\n> > mcode=dcode;\n> > NOTICE: QUERY PLAN:\n> >\n> > Hash Join (cost=156.00 rows=1001 width=24)\n> > -> Seq Scan on detail d (cost=43.00 rows=1000 width=12)\n> > -> Hash (cost=43.00 rows=1000 width=12)\n> > -> Seq Scan on master m (cost=43.00 rows=1000 width=12)\n> >\n> > EXPLAIN\n>\n> OK, do this:\n>\n> update pg_operator set oprcanhash = 'f' where oid = 1054;\n>\n> and I think you'll be OK. I will put that change into the sources.\n>\n> regards, tom lane\n>\n> ************\n\nYes, Tom, now it works, but...\nInformix gives me a different result. Who is right ?\n\n\nprova=> select mcode, dcode from master m, detail d where mcode=dcode;\nmcode|dcode\n-----+-----\n(0 rows)\n\nprova=> update pg_operator set oprcanhash = 'f' where oid = 1054;\nUPDATE 1\nprova=> select mcode, dcode from master m, detail d where mcode=dcode;\nmcode      |dcode\n-----------+----------------\na1         |a1\na13        |a13\na13        |a13\n(3 rows)\n\n\nINFORMIX:\nSQL: New Run Modify Use-editor Output Choose Save Info Drop\nExit\nRun the current SQL statements.\n----------------------- hygea@hygea ------------ Press CTRL-W for Help\n--------\nmcode dcode\na1 a1\na13 a13\n\n\nJosé",
"msg_date": "Tue, 07 Sep 1999 12:59:24 +0200",
"msg_from": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] SELECT BUG"
},
{
"msg_contents": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]> writes:\n> Yes, Tom, now it works, but...\n> Informix gives me a different result. Who is right ?\n\nHard to tell, since I don't know what your data is.\n\n> prova=> update pg_operator set oprcanhash = 'f' where oid = 1054;\n> UPDATE 1\n> prova=> select mcode, dcode from master m, detail d where mcode=dcode;\n> mcode |dcode\n> -----------+----------------\n> a1 |a1\n> a13 |a13\n> a13 |a13\n> (3 rows)\n\n... but all three of those sure look equal to me ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 07 Sep 1999 09:35:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SELECT BUG "
},
{
    "msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> \n> \n> Tom Lane ha scritto:\n> \n> > =?iso-8859-1?Q?Jos=E9?= Soares <[email protected]> writes:\n> > > And now the other SELECT bug in the same data:\n> > > select master1.*, detail1.*\n> > > from master1 m, detail1 d\n> > > where trim(m.code)=trim(d.code);\n> >\n> > This one is definitely pilot error. Since you've renamed master1 and\n> > detail1 in the FROM clause, your use of the original names in the SELECT\n> > list is treated as adding more FROM items. Effectively your query is\n> >\n> > select m2.*, d2.*\n> > from master1 m, detail1 d, master1 m2, detail1 d2\n> > where trim(m.code)=trim(d.code);\n> >\n> > You're getting a four-way join with only one restriction clause...\n> >\n> > There was a thread just the other day about whether we ought to allow\n> > queries like this, because of someone else making exactly the same\n> > error. I believe allowing tables to be referenced without FROM entries\n> > is a holdover from the old Postquel language that's not found in SQL92.\n> > Maybe we should get rid of it on the grounds that it creates confusion.\n> >\n> > regards, tom lane\n> >\n> >\n> \n> PostgreSQL should raise a syntax error like Informix and Oracle do.\n\n\nWe should at least give them an elog(NOTICE) to say we are doing\nsomething special, no?\n\n\n> \n> > ************\n> > INFORMIX:\n> >\n> > select master1.*, detail1.* from master1 m, detail1 d where mcode=dcode;\n> > # ^\n> > # 522: Table (master1) not selected in query.\n> > #\n> > ------------------------------------------------------------------------\n> > ORACLE:\n> >\n> > select master1.*, detail1.* from master1 m, detail1 d where mcode=dcode\n> > *\n> > ERROR at line1:\n> > ORA-00942: table or view does not exist\n> >\n> >\n> \n> Jos_\n> \n> \n> ************\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ 
can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 7 Sep 1999 11:39:56 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SELECT BUG"
}
] |
[
{
"msg_contents": "Hi all,\n\nUnless it exists, I'd like to implement code for the array manipulations\nabove for Postgresql. and I need some help for it. I have already run\nthrough the sources concerned but I need to be pointed to the right\ndirection on where to start and what kind of functions I should use. I'd\nalso like to have a bit more detailed info on array structure (or at\nleast where I can find a good doc. I've been looking for it in\n'array.h', 'arrayutils.c' and 'arrayfuncs.c').\n\nThanks so much, in advance,\nPeter Blazso\n",
"msg_date": "Wed, 01 Sep 1999 13:48:53 +0200",
"msg_from": "Peter Blazso <[email protected]>",
"msg_from_op": true,
"msg_subject": "need help for array appending & deleting"
},
{
"msg_contents": "Peter Blazso <[email protected]> writes:\n> Unless it exists, I'd like to implement code for the array manipulations\n> above for Postgresql. and I need some help for it. I have already run\n> through the sources concerned but I need to be pointed to the right\n> direction on where to start and what kind of functions I should use. I'd\n> also like to have a bit more detailed info on array structure (or at\n> least where I can find a good doc. I've been looking for it in\n> 'array.h', 'arrayutils.c' and 'arrayfuncs.c').\n\nWhat's in the code is all there is :-(. Please consider improving the\ndocumentation once you have studied the code enough to understand what's\ngoing on.\n\nI recall having looked at that stuff recently, and IIRC the general\nstructure of an array inside the backend is\n\n\tOverall length word\t\t(required for any VARLENA type)\n\ta couple words of fixed overhead\n\tdimension info array (1 entry per dimension)\n\tarray elements, in sequence\n\nI don't recall the sequence that's used (row or column major). Also,\nI think the array elements are aligned on INTALIGN boundaries, which\nis pretty bogus --- arrays of doubles would fail on a lot of hardware.\nThe code should either use MAXALIGN always, or better use the specific\nalignment needed for the array element type (as indicated by the pg_type\ndata).\n\nBTW, please be sure you are working with current sources and not REL6_5\nbranch. I've already fixed a bunch of parser/optimizer problems with\narrays; you shouldn't have to reinvent those changes.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 01 Sep 1999 10:05:22 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] need help for array appending & deleting "
}
] |
[
{
"msg_contents": "Also (coming late into this conversation), Array support in the JDBC2\ndriver is on the cards for 6.6.\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]]\nSent: 01 September 1999 15:05\nTo: Peter Blazso\nCc: [email protected]\nSubject: Re: [HACKERS] need help for array appending & deleting \n\n\nPeter Blazso <[email protected]> writes:\n> Unless it exists, I'd like to implement code for the array\nmanipulations\n> above for Postgresql. and I need some help for it. I have already run\n> through the sources concerned but I need to be pointed to the right\n> direction on where to start and what kind of functions I should use.\nI'd\n> also like to have a bit more detailed info on array structure (or at\n> least where I can find a good doc. I've been looking for it in\n> 'array.h', 'arrayutils.c' and 'arrayfuncs.c').\n\nWhat's in the code is all there is :-(. Please consider improving the\ndocumentation once you have studied the code enough to understand what's\ngoing on.\n\nI recall having looked at that stuff recently, and IIRC the general\nstructure of an array inside the backend is\n\n\tOverall length word\t\t(required for any VARLENA type)\n\ta couple words of fixed overhead\n\tdimension info array (1 entry per dimension)\n\tarray elements, in sequence\n\nI don't recall the sequence that's used (row or column major). Also,\nI think the array elements are aligned on INTALIGN boundaries, which\nis pretty bogus --- arrays of doubles would fail on a lot of hardware.\nThe code should either use MAXALIGN always, or better use the specific\nalignment needed for the array element type (as indicated by the pg_type\ndata).\n\nBTW, please be sure you are working with current sources and not REL6_5\nbranch. 
I've already fixed a bunch of parser/optimizer problems with\narrays; you shouldn't have to reinvent those changes.\n\n\t\t\tregards, tom lane\n\n************\n",
"msg_date": "Wed, 1 Sep 1999 15:33:55 +0100 ",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] need help for array appending & deleting "
}
] |
[
{
"msg_contents": "This is a lost Vadim's patch for 6.5X tree\n\n\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n---------- Forwarded message ----------\nDate: Mon, 09 Aug 1999 09:43:29 +0800\nFrom: Vadim Mikheev <[email protected]>\nTo: [email protected]\nSubject: [PATCHES] patch for 6.5.X tree\n\nSorry, I haven't 6.5.X tree on my host - could someone\napply patch below? TIA.\nThis is to re-use space on index pages freed by vacuum.\n\nVadim\n\n*** nbtinsert.c.orig\tFri Aug 6 22:04:40 1999\n--- nbtinsert.c\tFri Aug 6 22:10:40 1999\n***************\n*** 392,408 ****\n \t\tbool\t\tis_root = lpageop->btpo_flags & BTP_ROOT;\n \n \t\t/*\n! \t\t * If we have to split leaf page in the chain of duplicates by new\n! \t\t * duplicate then we try to look at our right sibling first.\n \t\t */\n \t\tif ((lpageop->btpo_flags & BTP_CHAIN) &&\n \t\t\t(lpageop->btpo_flags & BTP_LEAF) && keys_equal)\n \t\t{\n- \t\t\tbool\t\tuse_left = true;\n- \n \t\t\trbuf = _bt_getbuf(rel, lpageop->btpo_next, BT_WRITE);\n \t\t\trpage = BufferGetPage(rbuf);\n \t\t\trpageop = (BTPageOpaque) PageGetSpecialPointer(rpage);\n \t\t\tif (!P_RIGHTMOST(rpageop))\t/* non-rightmost page */\n \t\t\t{\t\t\t\t\t/* If we have the same hikey here then\n \t\t\t\t\t\t\t\t * it's yet another page in chain. */\n--- 392,409 ----\n \t\tbool\t\tis_root = lpageop->btpo_flags & BTP_ROOT;\n \n \t\t/*\n! \t\t * Instead of splitting leaf page in the chain of duplicates \n! 
\t\t * by new duplicate, insert it into some right page.\n \t\t */\n \t\tif ((lpageop->btpo_flags & BTP_CHAIN) &&\n \t\t\t(lpageop->btpo_flags & BTP_LEAF) && keys_equal)\n \t\t{\n \t\t\trbuf = _bt_getbuf(rel, lpageop->btpo_next, BT_WRITE);\n \t\t\trpage = BufferGetPage(rbuf);\n \t\t\trpageop = (BTPageOpaque) PageGetSpecialPointer(rpage);\n+ \t\t\t/* \n+ \t\t\t * some checks \n+ \t\t\t */\n \t\t\tif (!P_RIGHTMOST(rpageop))\t/* non-rightmost page */\n \t\t\t{\t\t\t\t\t/* If we have the same hikey here then\n \t\t\t\t\t\t\t\t * it's yet another page in chain. */\n***************\n*** 418,449 ****\n \t\t\t\t\t\t\t\t\t BTGreaterStrategyNumber))\n \t\t\t\t\telog(FATAL, \"btree: hikey is out of order\");\n \t\t\t\telse if (rpageop->btpo_flags & BTP_CHAIN)\n- \n \t\t\t\t\t/*\n \t\t\t\t\t * If hikey > scankey then it's last page in chain and\n \t\t\t\t\t * BTP_CHAIN must be OFF\n \t\t\t\t\t */\n \t\t\t\t\telog(FATAL, \"btree: lost last page in the chain of duplicates\");\n- \n- \t\t\t\t/* if there is room here then we use this page. */\n- \t\t\t\tif (PageGetFreeSpace(rpage) > itemsz)\n- \t\t\t\t\tuse_left = false;\n \t\t\t}\n \t\t\telse\n /* rightmost page */\n \t\t\t{\n \t\t\t\tAssert(!(rpageop->btpo_flags & BTP_CHAIN));\n- \t\t\t\t/* if there is room here then we use this page. */\n- \t\t\t\tif (PageGetFreeSpace(rpage) > itemsz)\n- \t\t\t\t\tuse_left = false;\n \t\t\t}\n! \t\t\tif (!use_left)\t\t/* insert on the right page */\n! \t\t\t{\n! \t\t\t\t_bt_relbuf(rel, buf, BT_WRITE);\n! \t\t\t\treturn (_bt_insertonpg(rel, rbuf, stack, keysz,\n! \t\t\t\t\t\t\t\t\t scankey, btitem, afteritem));\n! \t\t\t}\n! 
\t\t\t_bt_relbuf(rel, rbuf, BT_WRITE);\n \t\t}\n \n \t\t/*\n--- 419,438 ----\n \t\t\t\t\t\t\t\t\t BTGreaterStrategyNumber))\n \t\t\t\t\telog(FATAL, \"btree: hikey is out of order\");\n \t\t\t\telse if (rpageop->btpo_flags & BTP_CHAIN)\n \t\t\t\t\t/*\n \t\t\t\t\t * If hikey > scankey then it's last page in chain and\n \t\t\t\t\t * BTP_CHAIN must be OFF\n \t\t\t\t\t */\n \t\t\t\t\telog(FATAL, \"btree: lost last page in the chain of duplicates\");\n \t\t\t}\n \t\t\telse\n /* rightmost page */\n \t\t\t{\n \t\t\t\tAssert(!(rpageop->btpo_flags & BTP_CHAIN));\n \t\t\t}\n! \t\t\t_bt_relbuf(rel, buf, BT_WRITE);\n! \t\t\treturn (_bt_insertonpg(rel, rbuf, stack, keysz,\n! \t\t\t\t\t\t\t\t scankey, btitem, afteritem));\n \t\t}\n \n \t\t/*",
"msg_date": "Wed, 1 Sep 1999 20:27:45 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "[PATCHES] patch for 6.5.X tree (fwd)"
},
{
"msg_contents": "\nOkay, I'm going by the fact that Vadim approved this, and am applying it\nright now...\n\nOn Wed, 1 Sep 1999, Oleg Bartunov wrote:\n\n> This is a lost Vadim's patch for 6.5X tree\n> \n> \tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n> \n> ---------- Forwarded message ----------\n> Date: Mon, 09 Aug 1999 09:43:29 +0800\n> From: Vadim Mikheev <[email protected]>\n> To: [email protected]\n> Subject: [PATCHES] patch for 6.5.X tree\n> \n> Sorry, I haven't 6.5.X tree on my host - could someone\n> apply patch below? TIA.\n> This is to re-use space on index pages freed by vacuum.\n> \n> Vadim\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Wed, 1 Sep 1999 14:52:55 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] patch for 6.5.X tree (fwd)"
}
] |
[
{
"msg_contents": "Anyone looking into where these are coming from? They seem to be coming\nfrom the news server on hub for some reason.\n\n----- Forwarded message from Hub.Org News Admin -----\n",
"msg_date": "Wed, 1 Sep 1999 15:23:05 -0400 (EDT)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": true,
"msg_subject": "Funny mail"
},
{
"msg_contents": "\nI thought I had fixed it the other day too *sigh* Am diving back into\nit...\n\nOn Wed, 1 Sep 1999, D'Arcy J.M. Cain wrote:\n\n> Anyone looking into where these are coming from? They seem to be coming\n> from the news server on hub for some reason.\n> \n> ----- Forwarded message from Hub.Org News Admin -----\n> \n> >From hub.org!owner-pgsql-hackers Wed Sep 1 13:27:26 1999\n> Date: Wed, 1 Sep 1999 07:29:55 -0400 (EDT)\n> From: \"Hub.Org News Admin\" <[email protected]>\n> Message-Id: <[email protected]>\n> X-Authentication-Warning: hub.org: news set sender to <news> using -f\n> To: undisclosed-recipients:;\n> Sender: [email protected]\n> Precedence: bulk\n> \n> \n> ************\n> \n> ----- End of forwarded message from Hub.Org News Admin -----\n> \n> -- \n> D'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\n> http://www.druid.net/darcy/ | and a sheep voting on\n> +1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n> \n> ************\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Wed, 1 Sep 1999 17:06:51 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Funny mail"
},
{
    "msg_contents": "\nFound it...\"bug\" in the install for INN where it doesn't update anything in\nthe site directory, even scripts that aren't \"site specific\"...news2mail\n(the old one) wasn't calling 'sm' to get the article, so was just sending\nout blanks...\n\nThis should be fixed...\n\nOn Wed, 1 Sep 1999, D'Arcy J.M. Cain wrote:\n\n> Anyone looking into where these are coming from? They seem to be coming\n> from the news server on hub for some reason.\n> \n> ----- Forwarded message from Hub.Org News Admin -----\n> \n> >From hub.org!owner-pgsql-hackers Wed Sep 1 13:27:26 1999\n> Date: Wed, 1 Sep 1999 07:29:55 -0400 (EDT)\n> From: \"Hub.Org News Admin\" <[email protected]>\n> Message-Id: <[email protected]>\n> X-Authentication-Warning: hub.org: news set sender to <news> using -f\n> To: undisclosed-recipients:;\n> Sender: [email protected]\n> Precedence: bulk\n> \n> \n> ************\n> \n> ----- End of forwarded message from Hub.Org News Admin -----\n> \n> -- \n> D'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\n> http://www.druid.net/darcy/ | and a sheep voting on\n> +1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n> \n> ************\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Wed, 1 Sep 1999 17:13:47 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Funny mail"
}
] |
[
{
"msg_contents": "Hiroshi spotted the fundamental problem we were having:\nRelationFlushRelation would happily flush a relation-cache\nentry that still had an open file entry at the md.c and fd.c\nlevels. This resulted in a permanent leak of md and vfd\nfile descriptors, which was most easily observable as a leakage\nof kernel file descriptors (though fd.c would eventually\nrecycle same). smgrclose() in RelationFlushRelation fixes it.\n\nWhile I was poking at this I found a number of other problems\nin md.c having to do with multiple-segment relations. I believe\nthey're all fixed now. I have been able to run initdb and the\nregression tests with a 64Kb segment size, which never worked\nbefore.\n\nI stuck my neck out to the extent of committing these changes\ninto 6.5.* as well as current. I'd recommend a few more days\nof beta-testing before we release 6.5.2 ;-). Marc, can you\nmake a new 6.5.2 candidate tarball?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Sep 1999 00:36:10 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "md.c is feeling much better now, thank you"
},
{
    "msg_contents": "\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Tom Lane\n> Sent: Thursday, September 02, 1999 1:36 PM\n> To: [email protected]\n> Subject: [HACKERS] md.c is feeling much better now, thank you\n> \n> \n> Hiroshi spotted the fundamental problem we were having:\n> RelationFlushRelation would happily flush a relation-cache\n> entry that still had an open file entry at the md.c and fd.c\n> levels. This resulted in a permanent leak of md and vfd\n> file descriptors, which was most easily observable as a leakage\n> of kernel file descriptors (though fd.c would eventually\n> recycle same). smgrclose() in RelationFlushRelation fixes it.\n>\n\nThanks.\n\nBut I'm unhappy with your change for mdtruncate().\nIt's still dangerous to unlink unwanted segments in mdtruncate().\n\nStartTransaction() and CommandCounterIncrement() trigger\nrelation cache invalidation. Unfortunately those are insufficient\nto prevent backends from inserting into invalid relations.\n\nFor example:\n\nIf a backend is blocked by vacuum, it would insert into the target\nrelation without relation cache invalidation after vacuum.\n\nIt seems that other triggers are necessary around LockRelation().\n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Thu, 2 Sep 1999 19:32:02 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] md.c is feeling much better now, thank you"
},
{
"msg_contents": "\nI think that, based on this, the changes should be back'd out of v6.5.2\nuntil further testing and analysis can be done. If we have to, we can do\na v6.5.3 at a later date, if you want to get this in then...\n\nOn Thu, 2 Sep 1999, Hiroshi Inoue wrote:\n\n> \n> > -----Original Message-----\n> > From: [email protected]\n> > [mailto:[email protected]]On Behalf Of Tom Lane\n> > Sent: Thursday, September 02, 1999 1:36 PM\n> > To: [email protected]\n> > Subject: [HACKERS] md.c is feeling much better now, thank you\n> > \n> > \n> > Hiroshi spotted the fundamental problem we were having:\n> > RelationFlushRelation would happily flush a relation-cache\n> > entry that still had an open file entry at the md.c and fd.c\n> > levels. This resulted in a permanent leak of md and vfd\n> > file descriptors, which was most easily observable as a leakage\n> > of kernel file descriptors (though fd.c would eventually\n> > recycle same). smgrclose() in RelationFlushRelation fixes it.\n> >\n> \n> Thanks.\n> \n> But I'm unhappy with your change for mdtruncate().\n> It's still dangerous to unlink unwanted segments in mdtruncte().\n> \n> StartTransaction() and CommandCounterIncrement() trigger\n> relation cache invalidation. Unfortunately those are insufficient \n> to prevent backends from inserting into invalid relations.\n> \n> For exmaple\n> \n> If a backend is blocked by vacuum,it would insert into the target \n> relation without relation cache invalidation after vacuum.\n> \n> It seems that other triggers are necessary around LockRelation(). \n> \n> Regards.\n> \n> Hiroshi Inoue\n> [email protected]\n> \n> ************\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 2 Sep 1999 09:43:12 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] md.c is feeling much better now, thank you"
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> StartTransaction() and CommandCounterIncrement() trigger\n> relation cache invalidation. Unfortunately those are insufficient \n> to prevent backends from inserting into invalid relations.\n>\n> If a backend is blocked by vacuum,it would insert into the target \n> relation without relation cache invalidation after vacuum.\n\nIf that's true, then we have problems far worse than whether mdtruncate\nhas tried to unlink the segment. What you are saying is that another\nbackend can attempt to do an insert/update on a relation being vacuumed,\nand have gotten as far as deciding which block it's going to insert the\ntuple into before it gets blocked by vacuum's AccessExclusiveLock.\nIf so, that is *broken* and it's not mdtruncate's fault. What happens\nif vacuum fills up the chosen block with moved tuples?\n\nI did indeed wonder whether relation cache inval will do the right\nthing when another backend is already waiting to access the relation\nbeing invalidated --- but if it does not, we have to fix the inval\nmechanism. mdtruncate is the least of our worries.\n\nVadim, any comments here?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Sep 1999 09:27:17 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] md.c is feeling much better now, thank you "
},
{
"msg_contents": "I wrote:\n> \"Hiroshi Inoue\" <[email protected]> writes:\n>> StartTransaction() and CommandCounterIncrement() trigger\n>> relation cache invalidation. Unfortunately those are insufficient \n>> to prevent backends from inserting into invalid relations.\n\n> If that's true, then we have problems far worse than whether mdtruncate\n> has tried to unlink the segment.\n\nI poked at this a little bit and found that for the VACUUM case,\nRelationFlushRelation in the other backend (the one waiting to\ninsert/update) occurs here:\n\n#0 RelationFlushRelation (relationPtr=0x7b034824,\n onlyFlushReferenceCountZero=1 '\\001') at relcache.c:1259\n#1 0x158a60 in RelationIdInvalidateRelationCacheByRelationId (\n relationId=272146) at relcache.c:1368\n#2 0x156ba0 in CacheIdInvalidate (cacheId=1259, hashIndex=272146, pointer=0x0)\n at inval.c:323\n#3 0x11673c in SIReadEntryData (segP=0x80da1000, backendId=-2133183664,\n invalFunction=0x4000c692 <SSNAN+9066>,\n resetFunction=0x4000c69a <SSNAN+9074>) at sinvaladt.c:649\n#4 0x115e6c in InvalidateSharedInvalid (invalFunction=0x80da1000,\n resetFunction=0x4000c69a <SSNAN+9074>) at sinval.c:164\n#5 0x156e54 in DiscardInvalid () at inval.c:518\n#6 0x94354 in AtStart_Cache () at xact.c:548\n#7 0x94314 in CommandCounterIncrement () at xact.c:514\n#8 0x121218 in pg_exec_query_dest (\n query_string=0x40079580 \"insert into tenk1 values(19999,1234);\",\n dest=Remote, aclOverride=0 '\\000') at postgres.c:726\n\nwhich looks good ... except that the CommandCounterIncrement()\noccurs *after* the insert has executed. So we've got a problem\nhere.\n\nIn the DROP TABLE scenario, things seem to be broken independently\nof md.c. 
I tried this:\n\nBACKEND #1:\n\tbegin;\n\tlock table tenk1;\nBACKEND #2:\n\tinsert into tenk1 values(29999,1234);\n\t-- backend #2 hangs waiting for lock\nBACKEND #1:\n\tdrop table tenk1;\n\tend;\n\nBackend #2 now suffers an assert failure:\n\n#6 0x15b8c4 in ExceptionalCondition (\n conditionName=0x28898 \"!((((PageHeader) ((PageHeader) pageHeader))->pd_upper == 0))\", exceptionP=0x40009a58, detail=0x0, fileName=0x7ae4 \"\\003\",\n lineNumber=136) at assert.c:72\n#7 0x7c470 in RelationPutHeapTupleAtEnd (relation=0x400e8a40,\n tuple=0x401127a0) at hio.c:136\n#8 0x7aa48 in heap_insert (relation=0x400e8a40, tup=0x401127a0)\n at heapam.c:1086\n#9 0xb87e4 in ExecAppend (slot=0x4010a078, tupleid=0x200, estate=0x40109e98)\n at execMain.c:1190\n#10 0xb8630 in ExecutePlan (estate=0x40109e98, plan=0x40109860,\n operation=CMD_INSERT, offsetTuples=0, numberTuples=0,\n direction=ForwardScanDirection, destfunc=0x40112730) at execMain.c:1064\n#11 0xb7b6c in ExecutorRun (queryDesc=0x40109e80, estate=0x40109e98,\n feature=3, limoffset=0x0, limcount=0x0) at execMain.c:329\n#12 0x12294c in ProcessQueryDesc (queryDesc=0x40109e80, limoffset=0x0,\n limcount=0x0) at pquery.c:315\n#13 0x1229f4 in ProcessQuery (parsetree=0x400e42d0, plan=0x40109860,\n dest=Local) at pquery.c:358\n#14 0x1211dc in pg_exec_query_dest (\n query_string=0x40079580 \"insert into tenk1 values(29999,1234);\",\n dest=Remote, aclOverride=2 '\\002') at postgres.c:710\n\nwhich hardly looks like it can be blamed on md.c either.\n\nMy guess is that we ought to be checking for relcache invalidation\nimmediately after gaining any lock on the relation. I don't know where\nthat should be done, however.\n\nPerhaps we also ought to make RelationFlushRelation do smgrclose()\nunconditionally, regardless of the reference-count test. 
If the\nrelation is still in use, that should be OK --- md.c will reopen\nthe files automatically on the next access.\n\n\nBTW, it appears that DROP TABLE physically deletes the relation\n*immediately*, which means that aborting a transaction that contains\na DROP TABLE does not work. But we knew that, didn't we?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Sep 1999 10:05:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] md.c is feeling much better now, thank you "
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> I think that, based on this, the changes should be back'd out of v6.5.2\n> until further testing and analysis can be done.\n\nIf we can't find a solution to the inval-too-late problem pronto,\nwhat we can do is comment out the FileUnlink call in mdtruncate.\nI don't see a need to back out the other fixes in md.c.\n\nBut I think we ought to fix the underlying problem, not this symptom.\nWhat we now see is that after one backend has done something that\nrequires invalidating a relcache entry, another backend is able\nto complete an entire query using the *old* relcache info before it\nnotices the shared-inval signal. That's got to have bad consequences\nfor more than just md.c.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Sep 1999 10:43:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] md.c is feeling much better now, thank you "
},
{
"msg_contents": "> BTW, it appears that DROP TABLE physically deletes the relation\n> *immediately*, which means that aborting a transaction that contains\n> a DROP TABLE does not work. But we knew that, didn't we?\n\nYes, on TODO.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 2 Sep 1999 11:22:02 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] md.c is feeling much better now, thank you"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> My guess is that we ought to be checking for relcache invalidation\n> immediately after gaining any lock on the relation. I don't know where\n> that should be done, however.\n\nSeems as GOOD solution!\nWe could do inval check in LockRelation() just after LockAcquire().\n\nVadim\n",
"msg_date": "Thu, 02 Sep 1999 23:32:59 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] md.c is feeling much better now, thank you"
},
{
"msg_contents": "Vadim Mikheev <[email protected]> writes:\n> Tom Lane wrote:\n>> My guess is that we ought to be checking for relcache invalidation\n>> immediately after gaining any lock on the relation. I don't know where\n>> that should be done, however.\n\n> Seems as GOOD solution!\n> We could do inval check in LockRelation() just after LockAcquire().\n\nI tried inserting code like this in LockRelation:\n\n--- 163,176 ----\n tag.objId.blkno = InvalidBlockNumber;\n\n LockAcquire(LockTableId, &tag, lockmode);\n!\n! /* Check to make sure the relcache entry hasn't been invalidated\n! * while we were waiting to lock it.\n! */\n! DiscardInvalid();\n! if (relation != RelationIdGetRelation(tag.relId))\n! elog(ERROR, \"LockRelation: relation %u deleted while waiting to\nlock it\",\n! tag.relId);\n }\n\n /*\n\nand moving the smgrclose() call in RelationFlushRelation so that it is\ncalled unconditionally.\n\nDoesn't work though: the ALTER TABLE tests in regress/misc fail.\nApparently, this change causes the sinval report from update of the\nrelation's pg_class heap entry to be read while the relation has refcnt>0,\nso RelationFlushRelation doesn't flush it, so we have an obsolete\nrelation cache entry. Ooops.\n\nDid you have a different approach in mind? Or do we need to redesign\nRelationFlushRelation so that it rebuilds the relcache entry, rather\nthan dropping it, if refcnt > 0?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Sep 1999 19:02:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] md.c is feeling much better now, thank you "
},
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> Sent: Thursday, September 02, 1999 11:44 PM\n> To: The Hermit Hacker\n> Cc: Hiroshi Inoue; [email protected]\n> Subject: Re: [HACKERS] md.c is feeling much better now, thank you \n> \n> \n> The Hermit Hacker <[email protected]> writes:\n> > I think that, based on this, the changes should be back'd out of v6.5.2\n> > until further testing and analysis can be done.\n> \n> If we can't find a solution to the inval-too-late problem pronto,\n> what we can do is comment out the FileUnlink call in mdtruncate.\n> I don't see a need to back out the other fixes in md.c.\n>\n\nI think so too.\n\nRegards.\n\nHiroshi Inoue\[email protected] \n",
"msg_date": "Fri, 3 Sep 1999 12:05:15 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] md.c is feeling much better now, thank you "
},
{
"msg_contents": "\n> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n> Sent: Friday, September 03, 1999 12:22 AM\n> To: Tom Lane\n> Cc: Hiroshi Inoue; [email protected]\n> Subject: Re: [HACKERS] md.c is feeling much better now, thank you\n> \n> \n> > BTW, it appears that DROP TABLE physically deletes the relation\n> > *immediately*, which means that aborting a transaction that contains\n> > a DROP TABLE does not work. But we knew that, didn't we?\n> \n> Yes, on TODO.\n>\n\nHmm, Data Definition commands are unrecoverable in many DBMSs.\nIs it necessary to allow PostgreSQL to execute Data Definition\ncommands inside transactions?\nIf so, is it possible?\n\nFor example, should the following be possible?\n\n[A table t exists.]\n\nbegin;\ndrop table t;\ncreate table t (dt int4);\ninsert into t values (1);\ndrop table t;\nabort;\n\nRegards.\n\nHiroshi Inoue\[email protected] \n\n",
"msg_date": "Fri, 3 Sep 1999 12:07:03 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] md.c is feeling much better now, thank you"
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> Sent: Friday, September 03, 1999 8:03 AM\n> To: Vadim Mikheev\n> Cc: Hiroshi Inoue; [email protected]\n> Subject: Re: [HACKERS] md.c is feeling much better now, thank you\n>\n>\n> Vadim Mikheev <[email protected]> writes:\n> > Tom Lane wrote:\n> >> My guess is that we ought to be checking for relcache invalidation\n> >> immediately after gaining any lock on the relation. I don't know where\n> >> that should be done, however.\n>\n> > Seems as GOOD solution!\n> > We could do inval check in LockRelation() just after LockAcquire().\n>\n> I tried inserting code like this in LockRelation:\n>\n> --- 163,176 ----\n> tag.objId.blkno = InvalidBlockNumber;\n>\n> LockAcquire(LockTableId, &tag, lockmode);\n> !\n> ! /* Check to make sure the relcache entry hasn't been invalidated\n> ! * while we were waiting to lock it.\n> ! */\n> ! DiscardInvalid();\n> ! if (relation != RelationIdGetRelation(tag.relId))\n> ! elog(ERROR, \"LockRelation: relation %u deleted\n> while waiting to\n> lock it\",\n> ! tag.relId);\n> }\n>\n> /*\n>\n> and moving the smgrclose() call in RelationFlushRelation so that it is\n> called unconditionally.\n>\n> Doesn't work though: the ALTER TABLE tests in regress/misc fail.\n> Apparently, this change causes the sinval report from update of the\n> relation's pg_class heap entry to be read while the relation has refcnt>0,\n> so RelationFlushRelation doesn't flush it, so we have an obsolete\n> relation cache entry. Ooops.\n>\n\nHow about inserting \"RelationDecrementReferenceCount(relation);\"\nbetween LockAcquire() and DiscardInvalid() ?\nAnd isn't it preferable that LockRelation() returns the relation\nwhich RelationIdGetRelation(tag.relId) returns ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Fri, 3 Sep 1999 13:41:19 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] md.c is feeling much better now, thank you "
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n>> Doesn't work though: the ALTER TABLE tests in regress/misc fail.\n>> Apparently, this change causes the sinval report from update of the\n>> relation's pg_class heap entry to be read while the relation has refcnt>0,\n>> so RelationFlushRelation doesn't flush it, so we have an obsolete\n>> relation cache entry. Ooops.\n\n> How about inserting \"RelationDecrementReferenceCount(relation);\"\n> between LockAcquire() and DiscardInvalid() ?\n\nWould only help if the relation had been opened exactly once before\nthe lock; not if its refcnt is greater than 1. Worse, it would only\nhelp for the particular relation being locked, but we might receive\nan sinval report for a different already-locked relation.\n\n> And isn't it preferable that LockRelation() returns the relation\n> which RelationIdGetRelation(tag.relId) returns ?\n\nNo, because that would only inform the immediate caller of LockRelation\nof a change. This is insufficient for both the reasons mentioned above.\nFor that matter, my first-cut patch is insufficient, because it\nwon't detect the case that a relcache entry other than the one\ncurrently being locked has been flushed.\n\nI think what we need to do is revise RelationFlushRelation so that\nit (a) deletes the relcache entry if its refcnt is zero; otherwise\n(b) leaves the relcache entry in existence, but recomputes all\nits contents and subsidiary data structures, and (c) if, while\ntrying to recompute the contents, it finds that the relation has\nactually been deleted, then it's elog(ERROR) time. In this way,\nexisting pointers to the relcache entry --- which might belong to\nroutines very far down the call stack --- remain valid, or else\nwe elog() if they aren't.\n\nWe might still have a few bugs with routines that copy data out of the\nrelcache entry before locking it, but I don't recall having seen any\ncode like that. 
Most of the code seems to do heap_open immediately\nfollowed by LockRelation, and that should be safe.\n\nI'd like to hear Vadim's opinion before wading in, but this seems\nlike it ought to be workable.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 03 Sep 1999 01:50:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] md.c is feeling much better now, thank you "
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> Hmm, Data Definition commands are unrecoverable in many DBMSs.\n\nTrue.\n\n> For example, should the following be possible?\n\n> [A table t exists.]\n\n> begin;\n> drop table t;\n> create table t (dt int4);\n> insert into t values (1);\n> drop table t;\n> abort;\n\nI don't mind if that is rejected --- but it ought to be rejected\ncleanly, rather than leaving a broken table behind.\n\nIIRC, it is fairly easy to tell from the xact.c state whether we are\ninside a BEGIN block. Maybe DROP TABLE and anything else that has\nnonreversible side effects ought to simply elog(ERROR) if called inside\na BEGIN block. We'd need to be a little careful though, since I think\nDROP TABLE on a temp table created in the same transaction ought to\nwork.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 03 Sep 1999 10:46:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] md.c is feeling much better now, thank you "
},
{
"msg_contents": "I have just committed changes that address the problem of relcache\nentries not being updated promptly after another backend issues\na shared-invalidation report. LockRelation() now checks for sinval\nreports just after acquiring a lock, and the relcache entry will be\nupdated if necessary --- or, if the relation has actually disappeared\nentirely, an elog(ERROR) will occur.\n\nAs a side effect of the relcache update, the underlying md.c/fd.c files\nwill be closed, and then reopened if necessary. This should solve our\nconcerns about vacuum.c not being able to truncate relations safely.\n\nThere is still some potential for misbehavior as a result of the fact\nthat the parser looks at relcache entries without bothering to obtain\nany kind of lock on the relation. For example:\n\n-- In backend #1:\nregression=> create table z1 (f1 int4);\nCREATE\nregression=> select * from z1;\nf1\n--\n(0 rows)\n\nregression=> begin;\nBEGIN\n\n-- In backend #2:\nregression=> alter table z1 add column f2 int4;\nADD\nregression=>\n\n-- In backend #1:\nregression=> select * from z1;\nf1\n--\n(0 rows)\n\n-- parser uses un-updated relcache entry and sees only one column in z1.\n-- However, the relcache *will* get updated as soon as we either lock a\n-- table or do the CommandCounterIncrement() at end of query, so a second\n-- try sees the new info:\nregression=> select * from z1;\nf1|f2\n--+--\n(0 rows)\n\nregression=> end;\nEND\n\nThe window for problems is pretty small: you have to be within a\ntransaction (otherwise the StartTransaction will notice the sinval\nreport), and your very first query after the other backend does\nALTER TABLE has to reference the altered table. So I'm not sure\nthis is worth worrying about. But perhaps the parser ought to obtain\nthe weakest possible lock on each table referenced in a query before\nit does any looking at the attributes of the table. 
Comments?\n\n\nI believe these changes ought to be committed into REL6_5 as well,\nbut it might be wise to test them a little more in current first.\nOr would people find it easier to test them against 6.5 databases?\nIn that case maybe I should just commit them now...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 04 Sep 1999 15:05:03 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] md.c is feeling much better now, thank you "
},
{
"msg_contents": "> The window for problems is pretty small: you have to be within a\n> transaction (otherwise the StartTransaction will notice the sinval\n> report), and your very first query after the other backend does\n> ALTER TABLE has to reference the altered table. So I'm not sure\n> this is worth worrying about. But perhaps the parser ought to obtain\n> the weakest possible lock on each table referenced in a query before\n> it does any looking at the attributes of the table. Comments?\n\nGood question. How do other db's handle such a case? I hesitate to do\nlocking for parser lookups. Seems like more lock overhead.\n\n\n> I believe these changes ought to be committed into REL6_5 as well,\n> but it might be wise to test them a little more in current first.\n> Or would people find it easier to test them against 6.5 databases?\n> In that case maybe I should just commit them now...\n\nSeems it should be 6.6 only. Too obscure a bug. Could introduce a bug.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 4 Sep 1999 15:11:30 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] md.c is feeling much better now, thank you"
},
{
"msg_contents": "> I have just committed changes that address the problem of relcache\n> entries not being updated promptly after another backend issues\n> a shared-invalidation report. LockRelation() now checks for sinval\n> reports just after acquiring a lock, and the relcache entry will be\n> updated if necessary --- or, if the relation has actually disappeared\n> entirely, an elog(ERROR) will occur.\n> \n> As a side effect of the relcache update, the underlying md.c/fd.c files\n> will be closed, and then reopened if necessary. This should solve our\n> concerns about vacuum.c not being able to truncate relations safely.\n> \n> There is still some potential for misbehavior as a result of the fact\n> that the parser looks at relcache entries without bothering to obtain\n> any kind of lock on the relation. For example:\n> \n> -- In backend #1:\n> regression=> create table z1 (f1 int4);\n> CREATE\n> regression=> select * from z1;\n> f1\n> --\n> (0 rows)\n> \n> regression=> begin;\n> BEGIN\n> \n> -- In backend #2:\n> regression=> alter table z1 add column f2 int4;\n> ADD\n> regression=>\n> \n> -- In backend #1:\n> regression=> select * from z1;\n> f1\n> --\n> (0 rows)\n> \n> -- parser uses un-updated relcache entry and sees only one column in z1.\n> -- However, the relcache *will* get updated as soon as we either lock a\n> -- table or do the CommandCounterIncrement() at end of query, so a second\n> -- try sees the new info:\n> regression=> select * from z1;\n> f1|f2\n> --+--\n> (0 rows)\n> \n> regression=> end;\n> END\n> \n> The window for problems is pretty small: you have to be within a\n> transaction (otherwise the StartTransaction will notice the sinval\n> report), and your very first query after the other backend does\n> ALTER TABLE has to reference the altered table. So I'm not sure\n> this is worth worrying about. 
But perhaps the parser ought to obtain\n> the weakest possible lock on each table referenced in a query before\n> it does any looking at the attributes of the table. Comments?\n\nOk. I will give another example that seems more serious.\n\ntest=> begin;\nBEGIN\ntest=> create table t1(i int);\nCREATE\n-- a table file named \"t1\" is created.\ntest=> aaa;\nERROR: parser: parse error at or near \"aaa\"\n-- transaction is aborted and the table file t1 is unlinked.\ntest=> select * from t1;\n-- but parser doesn't know t1 does not exist any more.\n-- it tries to open t1 using mdopen(). (see including backtrace)\n-- mdopen() tries to open t1 and fails. In this case mdopen()\n-- creates t1!\nNOTICE: (transaction aborted): queries ignored until END\n*ABORT STATE*\ntest=> end;\nEND\ntest=> create table t1(i int); \nERROR: cannot create t1\n-- since relation file t1 already exists.\ntest=> \nEOF\n[t-ishii@ext16 src]$ !!\npsql test\nWelcome to the POSTGRESQL interactive sql monitor:\n Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n[PostgreSQL 6.6.0 on powerpc-unknown-linux-gnu, compiled by gcc egcs-2.90.25 980302 (egcs-1.0.2 prerelease)]\n\n type \\? for help on slash commands\n type \\q to quit\n type \\g or terminate with semicolon to execute query\n You are currently connected to the database: test\n\ntest=> select * from t1;\n\nERROR: t1: Table does not exist.\ntest=> create table t1(i int);\n\nERROR: cannot create t1\n-- again, since relation file t1 already exists.\n-- user would never be able to create t1!\n\nI think the long range solution would be let parser obtain certain\nlocks as Tom said. Until that I propose following solution. It looks\nsimple, safe and would be neccessary anyway (I don't know why that\ncheck had not been implemented). 
See included patches.\n\n---------------------------- backtrace -----------------------------\n#0 mdopen (reln=0x1a9af18) at md.c:279\n#1 0x18cb784 in smgropen (which=425, reln=0xbfffdef0) at smgr.c:185\n#2 0x18cb784 in smgropen (which=0, reln=0x1a9af18) at smgr.c:185\n#3 0x1904c1c in RelationBuildDesc ()\n#4 0x1905360 in RelationNameGetRelation ()\n#5 0x18259a4 in heap_openr ()\n#6 0x187f59c in addRangeTableEntry ()\n#7 0x1879cb0 in transformTableEntry ()\n#8 0x1879d40 in parseFromClause ()\n#9 0x1879a90 in makeRangeTable ()\n#10 0x1871fd8 in transformSelectStmt ()\n#11 0x1870d14 in transformStmt ()\n#12 0x18709e0 in parse_analyze ()\n#13 0x18792d4 in parser ()\n#14 0x18cd158 in pg_parse_and_plan ()\n#15 0x18cd5c0 in pg_exec_query_dest ()\n#16 0x18cd524 in pg_exec_query ()\n#17 0x18ce9ac in PostgresMain ()\n#18 0x18a5994 in DoBackend ()\n#19 0x18a53c8 in BackendStartup ()\n#20 0x18a46d0 in ServerLoop ()\n#21 0x18a4108 in PostmasterMain ()\n#22 0x1870928 in main ()\n---------------------------- backtrace -----------------------------\n\n---------------------------- patches -----------------------------\n*** md.c~\tSun Sep 5 08:41:28 1999\n--- md.c\tSun Sep 5 11:01:57 1999\n***************\n*** 286,296 ****\n--- 286,303 ----\n \n \t/* this should only happen during bootstrap processing */\n \tif (fd < 0)\n+ \t{\n+ \t\tif (!IsBootstrapProcessingMode())\n+ \t\t{\n+ \t\t\telog(ERROR,\"Couldn't open %s\\n\", path);\n+ \t\t\treturn -1;\n+ \t\t}\n #ifndef __CYGWIN32__\n \t\tfd = FileNameOpenFile(path, O_RDWR | O_CREAT | O_EXCL, 0600);\n #else\n \t\tfd = FileNameOpenFile(path, O_RDWR | O_CREAT | O_EXCL | O_BINARY, 0600);\n #endif\n+ \t}\n \n \tvfd = _fdvec_alloc();\n \tif (vfd < 0)\n",
"msg_date": "Sun, 05 Sep 1999 11:25:03 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] md.c is feeling much better now, thank you "
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> Ok. I will give another example that seems more serious.\n> test=> aaa;\n> ERROR: parser: parse error at or near \"aaa\"\n> -- transaction is aborted and the table file t1 is unlinked.\n> test=> select * from t1;\n> -- but parser doesn't know t1 does not exist any more.\n> -- it tries to open t1 using mdopen(). (see including backtrace)\n> -- mdopen() tries to open t1 and fails. In this case mdopen()\n> -- creates t1!\n> NOTICE: (transaction aborted): queries ignored until END\n> *ABORT STATE*\n\nHmm. It seems a more straightforward solution would be to alter\npg_parse_and_plan so that the parser isn't even called if we have\nalready failed the current transaction; that is, the \"queries ignored\"\ntest should occur sooner. I'm rather surprised to realize that\nwe do run the parser in this situation...\n\n> I think the long range solution would be let parser obtain certain\n> locks as Tom said.\n\nThat would not solve this particular problem, since the lock manager\nwouldn't know any better than the parser that the table doesn't really\nexist any more.\n\n> Until that I propose following solution. It looks\n> simple, safe and would be neccessary anyway (I don't know why that\n> check had not been implemented). See included patches.\n\nThis looks like it might be a good change, but I'm not quite as sure\nas you are that it won't have any bad effects. Have you tested it?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 05 Sep 1999 11:33:27 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] md.c is feeling much better now, thank you "
},
{
"msg_contents": "> Tatsuo Ishii <[email protected]> writes:\n> > Ok. I will give another example that seems more serious.\n> > test=> aaa;\n> > ERROR: parser: parse error at or near \"aaa\"\n> > -- transaction is aborted and the table file t1 is unlinked.\n> > test=> select * from t1;\n> > -- but parser doesn't know t1 does not exist any more.\n> > -- it tries to open t1 using mdopen(). (see including backtrace)\n> > -- mdopen() tries to open t1 and fails. In this case mdopen()\n> > -- creates t1!\n> > NOTICE: (transaction aborted): queries ignored until END\n> > *ABORT STATE*\n> \n> Hmm. It seems a more straightforward solution would be to alter\n> pg_parse_and_plan so that the parser isn't even called if we have\n> already failed the current transaction; that is, the \"queries ignored\"\n> test should occur sooner. I'm rather surprised to realize that\n> we do run the parser in this situation...\n\nNo. we have to run the parser so that we could accept \"end\".\n\n> > I think the long range solution would be let parser obtain certain\n> > locks as Tom said.\n> \n> That would not solve this particular problem, since the lock manager\n> wouldn't know any better than the parser that the table doesn't really\n> exist any more.\n\nI see.\n\n> > Until that I propose following solution. It looks\n> > simple, safe and would be neccessary anyway (I don't know why that\n> > check had not been implemented). See included patches.\n> \n> This looks like it might be a good change, but I'm not quite as sure\n> as you are that it won't have any bad effects. Have you tested it?\n\nAt least initdb and the regression test runs fine for me...\n---\nTatsuo Ishii\n",
"msg_date": "Mon, 06 Sep 1999 09:51:43 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] md.c is feeling much better now, thank you "
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n>> Hmm. It seems a more straightforward solution would be to alter\n>> pg_parse_and_plan so that the parser isn't even called if we have\n>> already failed the current transaction; that is, the \"queries ignored\"\n>> test should occur sooner. I'm rather surprised to realize that\n>> we do run the parser in this situation...\n\n> No. we have to run the parser so that we could accept \"end\".\n\nAh, very good point. I stand corrected.\n\n>>>> Until that I propose following solution. It looks\n>>>> simple, safe and would be neccessary anyway (I don't know why that\n>>>> check had not been implemented). See included patches.\n>> \n>> This looks like it might be a good change, but I'm not quite as sure\n>> as you are that it won't have any bad effects. Have you tested it?\n>\n> At least initdb and the regression test runs fine for me...\n\nSame here. I have committed it into current, but not REL6_5.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 06 Sep 1999 10:12:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] md.c is feeling much better now, thank you "
},
{
"msg_contents": "\nAny resolution on this?\n\n> Tatsuo Ishii <[email protected]> writes:\n> > Ok. I will give another example that seems more serious.\n> > test=> aaa;\n> > ERROR: parser: parse error at or near \"aaa\"\n> > -- transaction is aborted and the table file t1 is unlinked.\n> > test=> select * from t1;\n> > -- but parser doesn't know t1 does not exist any more.\n> > -- it tries to open t1 using mdopen(). (see including backtrace)\n> > -- mdopen() tries to open t1 and fails. In this case mdopen()\n> > -- creates t1!\n> > NOTICE: (transaction aborted): queries ignored until END\n> > *ABORT STATE*\n> \n> Hmm. It seems a more straightforward solution would be to alter\n> pg_parse_and_plan so that the parser isn't even called if we have\n> already failed the current transaction; that is, the \"queries ignored\"\n> test should occur sooner. I'm rather surprised to realize that\n> we do run the parser in this situation...\n> \n> > I think the long range solution would be let parser obtain certain\n> > locks as Tom said.\n> \n> That would not solve this particular problem, since the lock manager\n> wouldn't know any better than the parser that the table doesn't really\n> exist any more.\n> \n> > Until that I propose following solution. It looks\n> > simple, safe and would be neccessary anyway (I don't know why that\n> > check had not been implemented). See included patches.\n> \n> This looks like it might be a good change, but I'm not quite as sure\n> as you are that it won't have any bad effects. Have you tested it?\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 28 Sep 1999 00:06:47 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] md.c is feeling much better now, thank you"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Any resolution on this?\n\nI believe I committed Tatsuo's change.\n\nThere is still the issue that the parser doesn't obtain any lock on\na relation during parsing, so it's possible to use a slightly stale\nrelcache entry for parsing purposes. (It can't be *really* stale,\nsince presumably we just read the SI queue during StartTransaction,\nbut still it could be wrong if someone commits an ALTER TABLE while\nwe are parsing our query.)\n\nAfter thinking about it for a while, I am not sure if we should try to\nfix this or not. The obvious fix would be to have the parser grab\nAccessShareLock on any relation as soon as it is seen in the query,\nand then keep this lock till end of transaction; that would guarantee\nthat no one else could alter the table structure and thereby invalidate\nthe parser's information about the relation. But that does not work\nbecause it guarantees deadlock if two processes both try to get\nAccessExclusiveLock, as in plain old \"BEGIN; LOCK TABLE table; ...\".\nThey'll both be holding AccessShareLock so neither can get exclusive.\n\nThere might be another way, but we need to be careful not to choose\na cure that's worse than the disease.\n\n\t\t\tregards, tom lane\n\n\n>> Tatsuo Ishii <[email protected]> writes:\n>>>> Ok. I will give another example that seems more serious.\n>>>> test=> aaa;\n>>>> ERROR: parser: parse error at or near \"aaa\"\n>>>> -- transaction is aborted and the table file t1 is unlinked.\n>>>> test=> select * from t1;\n>>>> -- but parser doesn't know t1 does not exist any more.\n>>>> -- it tries to open t1 using mdopen(). (see including backtrace)\n>>>> -- mdopen() tries to open t1 and fails. In this case mdopen()\n>>>> -- creates t1!\n>>>> NOTICE: (transaction aborted): queries ignored until END\n>>>> *ABORT STATE*\n>> \n>> Hmm. 
It seems a more straightforward solution would be to alter\n>> pg_parse_and_plan so that the parser isn't even called if we have\n>> already failed the current transaction; that is, the \"queries ignored\"\n>> test should occur sooner. I'm rather surprised to realize that\n>> we do run the parser in this situation...\n>> \n>>>> I think the long range solution would be let parser obtain certain\n>>>> locks as Tom said.\n>> \n>> That would not solve this particular problem, since the lock manager\n>> wouldn't know any better than the parser that the table doesn't really\n>> exist any more.\n>> \n>>>> Until that I propose following solution. It looks\n>>>> simple, safe and would be neccessary anyway (I don't know why that\n>>>> check had not been implemented). See included patches.\n>> \n>> This looks like it might be a good change, but I'm not quite as sure\n>> as you are that it won't have any bad effects. Have you tested it?\n",
"msg_date": "Tue, 28 Sep 1999 09:28:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] md.c is feeling much better now, thank you "
}
] |
[
{
"msg_contents": "Hi,\n\nI got a email from Russia with commercial questions.\nUnfortunately it's written in russian. Whom I should forward\nthis message ?\n\n\tRegards,\n\n\t\tOleg\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Thu, 2 Sep 1999 14:30:49 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Commercial question"
},
{
"msg_contents": "\ngood question...Vadim? Don't you speak Russian? :)\n\n\nOn Thu, 2 Sep 1999, Oleg Bartunov wrote:\n\n> Hi,\n> \n> I got a email from Russia with commercial questions.\n> Unfortunately it's written in russian. Whom I should forward\n> this message ?\n> \n> \tRegards,\n> \n> \t\tOleg\n> \n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n> \n> \n> ************\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 2 Sep 1999 10:05:41 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Commercial question"
},
{
"msg_contents": "On Thu, 2 Sep 1999, The Hermit Hacker wrote:\n\n> Date: Thu, 2 Sep 1999 10:05:41 -0300 (ADT)\n> From: The Hermit Hacker <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: [email protected]\n> Subject: Re: [HACKERS] Commercial question\n> \n> \n> good question...Vadim? Don't you speak Russian? :)\n\nOk. I'll forward it to Vadim. He certainly does speak Russian :-)\nBut the question which is unclear to me is:\nDoes the Postgres team have plans to certify PostgreSQL (particularly in Russia)?\nHow to certify a product based on PostgreSQL?\n\n\n\tRegards,\n\n\t\tOleg\n\n\n> \n> \n> On Thu, 2 Sep 1999, Oleg Bartunov wrote:\n> \n> > Hi,\n> > \n> > I got a email from Russia with commercial questions.\n> > Unfortunately it's written in russian. Whom I should forward\n> > this message ?\n> > \n> > \tRegards,\n> > \n> > \t\tOleg\n> > \n> > _____________________________________________________________\n> > Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> > Sternberg Astronomical Institute, Moscow University (Russia)\n> > Internet: [email protected], http://www.sai.msu.su/~megera/\n> > phone: +007(095)939-16-83, +007(095)939-23-83\n> > \n> > \n> > ************\n> > \n> \n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n",
"msg_date": "Thu, 2 Sep 1999 17:28:23 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Commercial question"
}
] |
[
{
"msg_contents": ">> > To my mind, without spaces this construction *is* ambiguous, and\nfrankly\n>> > I'd have expected the second interpretation ('+-' is a single operator\n>> > name). Almost every computer language in the world uses \"greedy\"\n>> > tokenization where the next token is the longest series of characters\n>> > that can validly be a token. I don't regard the above behavior as\n>> > predictable, natural, nor obvious. In fact, I'd say it's a bug that\n>> > \"3+-2\" and \"3+-x\" are not lexed in the same way.\n>> > \n>> \n>> Completely agree with that. This differentiating behavior looks like a\nbug.\n>> \n>> > However, aside from arguing about whether the current behavior is good\n>> > or bad, these examples seem to indicate that it doesn't take an\ninfinite\n>> > amount of lookahead to reproduce the behavior. It looks to me like we\n>> > could preserve the current behavior by parsing a '-' as a separate\ntoken\n>> > if it *immediately* precedes a digit, and otherwise allowing it to be\n>> > folded into the preceding operator. That could presumably be done\n>> > without VLTC.\n>> \n>> Ok. If we *have* to preserve old weird behavior, here is the patch.\n>> It is to be applied over all my other patches. Though if I were to\n>> decide whether to restore old behavior, I wouldn't do it. Because it\n>> is inconsistency in grammar, i.e. a bug.\n>> \nIf a construct is ambiguous, then the behaviour should be undefined (i.e.:\nwe can do what we like, within reason). If the user wants something\npredictable, then she should use brackets ;-)\n\nIf 3+-2 presents an ambiguity (which it does) then make sure that you do\nthis: 3+(-2). If you have an operator +- then you should do this (3)+-(2).\nHowever, if you have 3+-2 without brackets, then, because this is ambiguous\n(assuming no +- operator), this is undefined, and we can do pretty much\nwhatever we feel like with it. Unless there is an operator +- defined,\nbecause then the behaviour is no longer ambiguous. 
The longest possible\nidentifier is always matched, and this means that the +- will be identified.\n\nEspecially with the unary minus, my feeling is that it should be placed in\nbrackets if correct behaviour is desired.\n\nMikeA\n\n",
"msg_date": "Thu, 2 Sep 1999 14:58:31 +0200 ",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Postgres' lexer"
},
{
"msg_contents": "Ansley, Michael wrote:\n\n> If a construct is ambiguous, then the behaviour should be undefined (i.e.:\n> we can do what we like, within reason). If the user wants something\n> predictable, then she should use brackets ;-)\n> \n> If 3+-2 presents an ambiguity (which it does) then make sure that you do\n> this: 3+(-2). If you have an operator +- then you should do this (3)+-(2).\n> However, if you have 3+-2 without brackets, then, because this is ambiguous\n> (assuming no +- operator), this is undefined, and we can do pretty much\n> whatever we feel like with it. Unless there is an operator +- defined,\n> because then the behaviour is no longer ambiguous. The longest possible\n> identifier is always matched, and this means that the +- will be identified.\n> \n> Especially with the unary minus, my feeling is that it should be placed in\n> brackets if correct behaviour is desired.\n\nWhen I first read that, I thought \"can sign every word of that\".\nBut suddenly realized that there are more buggy situations here:\nconsider a>-2. It is parsed as (a) >- (2). Even in original \nThomas Lockhart's version there is a bug: it parses a>-b as (a) >- (b).\nSo I decided to simply forbid long operators to end with minus. If you\nthink that it is right, here is the patch (today is my patch bomb\nday :). It is to be applied *instead* my earlier today's patch.\n\nSeems that it is the only more or less clean way to deal with \nbig operator/unary minus conflict.\n-- \nLeon.",
"msg_date": "Thu, 02 Sep 1999 19:38:43 +0500",
"msg_from": "Leon <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Postgres' lexer"
},
{
"msg_contents": "Leon <[email protected]> writes:\n> So I decided to simply forbid long operators to end with minus.\n\nNo good: we already have some. There are three standard geometric\noperators named \"?-\" ... not to mention lord-knows-what user-defined\noperators out in the field. This might have been a good solution if\nwe'd put it in on day one, but it's too late.\n\nI still like just telling people to write \"a > -2\". They don't expect\n\"ab\" to mean the same thing as \"a b\", nor \"24\" to be the same as \"2 4\",\nso why should \">-\" necessarily mean the same as \"> -\" ?\n\nIt would also be worth remembering that \"-\" is far from the only unary\noperator name we have, and so a solution that creates special behavior\njust for \"-\" is really no solution at all. Making a special case for\n\"-\" just increases the potential for confusion, not decreases it, IMHO.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Sep 1999 11:22:49 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Postgres' lexer "
},
{
"msg_contents": "Tom Lane wrote:\n\n> It would also be worth remembering that \"-\" is far from the only unary\n> operator name we have, and so a solution that creates special behavior\n> just for \"-\" is really no solution at all. Making a special case for\n> \"-\" just increases the potential for confusion, not decreases it, IMHO.\n\nOk. Especially if there are more unary operators (I always wondered\nwhat unary % in gram.y stands for :) it is reasonable not to make\na special case of uminus and slightly change the old behavior. That\nis even more convincing that constructs like 3+-2 and 3+-b were \nparsed in different way, and, what is worse, a>-2 and a>-b also\nparsed differently. So let us ask the (hopefully) last question:\nThomas (Lockhart), do you agree on always parsing constructs like\n'+-' or '>-' as is, and not as '+' '-' or '>' '-' ?\n\n-- \nLeon.\n\n",
"msg_date": "Thu, 02 Sep 1999 21:05:47 +0500",
"msg_from": "Leon <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Postgres' lexer"
},
{
"msg_contents": "> When I first read that, I thought \"can sign every word of that\".\n> But suddenly realized that there are more buggy situations here:\n> consider a>-2. It is parsed as (a) >- (2). Even in original\n> Thomas Lockhart's version there is a bug: it parses a>-b as (a) >- (b).\n\nBugs can be fixed. We don't always need to perform radical surgery.\n\n> So I decided to simply forbid long operators to end with minus. If you\n> think that it is right, here is the patch (today is my patch bomb\n> day :). It is to be applied *instead* my earlier today's patch.\n> Seems that it is the only more or less clean way to deal with\n> big operator/unary minus conflict.\n\nThat would disallow an existing built-in operator (\"?-\", the \"is\nhorizontal?\" test; of course, that one's my fault too ;).\n\nThis is a great conversation, because at the end of it we are going to\nhave a more solid parser. But I would suggest that we do at least two\nthings:\n\n1) generate a *complete* list of test cases. I'll include them in the\nregression tests to make sure that we preserve capabilities when\nchanges are made in the future. This should include cases which we\nthink *should* change behavior later.\n\n2) move slowly on patching the parser for this, since we clearly have\nincomplete coverage in our regression tests and since we aren't\nperfectly predicting the ramifications yet.\n\nMy recollection is that my last patches for the lexer stemmed from\ntrying to fix unary minus behavior for constants used as arguments to\nDDL statements like CREATE SEQUENCE/START, but as I did that I started\nseeing other cases which weren't handled correctly. I fixed, to my\nunderstanding of what desirable behavior should be, the cases which\ninvolved numeric constants. 
imho this same consideration should be\ngiven to other expressions just as you are doing now.\n\nThe overall parser behavior should meet some criteria, such as (in\ndecreasing priority):\n\no don't produce non-intuitive or unexpected results\no fully expose underlying capabilities of the backend\no try to do the right thing in common cases\no try to do the right thing in unusual cases\n\nI'll make the (perhaps incorrect) claim that the current behavior is\nabout right for numeric constants (common cases involving various\nwhitespace possibilities work about right once everything is through\nthe parser). (The \"+-\" operator is a good unusual case to focus on,\nand we may conclude that it isn't done right at the moment.) Where\nthings happen in the parser can change. If the current behavior can't\nbe reconciled with improved behaviors with other non-constant\nexpressions, then maybe it should be sacrificed, but not until we try\nhard to improve it, rather than disallow it...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 02 Sep 1999 16:13:48 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Postgres' lexer"
},
{
"msg_contents": "Thomas Lockhart wrote:\n\n> I'll make the (perhaps incorrect) claim that the current behavior is\n> about right for numeric constants (common cases involving various\n> whitespace possibilities work about right once everything is through\n> the parser). (The \"+-\" operator is a good unusual case to focus on,\n> and we may conclude that it isn't done right at the moment.) Where\n> things happen in the parser can change. If the current behavior can't\n> be reconciled with improved behaviors with other non-constant\n> expressions, then maybe it should be sacrificed, but not until we try\n> hard to improve it, rather than disallow it...\n\nSuppose you parse a***-b (where *** are any operator-like symbols)\nas (a) *** - (b). Hence you parse a?-b as (a) ? - (b). No good.\nSolution? No clean solution within horizon - must then have hardwired\nlist of operators somewhere in parser. If we could dream of changing\n?- operator ... ;) But we can't. Even your model of system which\nsticks uminus to number isn't fit for type-extension system. Imagine\nthere is crazy user some place out there who wants to define operator\nlike +- or #- . It doesn't seem to be senseless - if Postgres itself\nhas ?- operator, it then could live with my homegrown %- operator!\nAnd then suppose that the second argument to that operator is number.\nSee the pitfall? \n\nThe only possible thing seems to be to state in documentation that we \nhave a peculiar type-extension system which is biased towards long\noperators - when it sees long operator string, it swallows it as a whole.\nThus - users, use spaces where needed! This is the way to introduce\ntype-extension ideology throughout the system from parser onwards.\nThis ideology could be the guiding light in parser matters \n(there is now lack thereof).\n\n-- \nLeon.\n\n",
"msg_date": "Thu, 02 Sep 1999 23:09:12 +0500",
"msg_from": "Leon <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Postgres' lexer"
}
] |
[
{
"msg_contents": "Your database url is wrong. A postgres database needs the following URL:\njdbc:postgresql://theComputerName/thePostgresDatabaseName\nif you omit something, you get your NullPointerException\n\nChristian\n\nEA wrote:\n> \n> Here are the errors I get while running the example classes provided by\n> Postgres:\n> \n> 1. Without Debug\n> <---------------------------------------------------\n> PostgreSQL basic test v6.3 rev 1\n> \n> Connecting to Database URL = jdbc:postgresql:db1\n> Exception caught.\n> Something unusual has occured to cause the driver to fail. Please report\n> this exception: {1}\n> Something unusual has occured to cause the driver to fail. Please report\n> this exception: {1}\n> at postgresql.Driver.connect(Compiled Code)\n> at java.sql.DriverManager.getConnection(Compiled Code)\n> at java.sql.DriverManager.getConnection(Compiled Code)\n> at example.basic.<init>(Compiled Code)\n> at example.basic.main(Compiled Code)\n> <-----------------------------------------------------\n> 2. 
With Debug\n> <---------------------------------------------------\n> PostgreSQL basic test v6.3 rev 1\n> \n> DriverManager.initialize: jdbc.drivers = null\n> JDBC DriverManager initialized\n> registerDriver:\n> driver[className=postgresql.Driver,postgresql.Driver@a574f633]\n> Connecting to Database URL = jdbc:postgresql:web\n> DriverManager.getConnection(\"jdbc:postgresql:db1\")\n> trying driver[className=postgresql.Driver,postgresql.Driver@a574f633]\n> -- listing properties --\n> password=start123\n> user=db1\n> PGDBNAME=db1\n> Protocol=postgresql\n> Using postgresql.jdbc2.Connection\n> Exception caught.\n> java.lang.NullPointerException\n> java.lang.NullPointerException\n> at java.io.Writer.write(Compiled Code)\n> at java.io.PrintStream.write(Compiled Code)\n> at java.io.PrintStream.print(Compiled Code)\n> at java.io.PrintStream.println(Compiled Code)\n> at java.lang.Throwable.printStackTrace(Compiled Code)\n> at java.sql.SQLException.<init>(Compiled Code)\n> at postgresql.util.PSQLException.<init>(Compiled Code)\n> at postgresql.Driver.connect(Compiled Code)\n> at java.sql.DriverManager.getConnection(Compiled Code)\n> at java.sql.DriverManager.getConnection(Compiled Code)\n> at example.basic.<init>(Compiled Code)\n> at example.basic.main(Compiled Code)\n> \n> <-----------------------------------------------------\n> \n> Anyone recognize these errors.",
"msg_date": "Thu, 02 Sep 1999 14:08:28 +0100",
"msg_from": "Christian Denning <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Linux/Postgres 6.5 problems using jdbc w/jdk1.2"
},
{
"msg_contents": "I am having the same problem. I tried the new URL. The problem still\ncontinues. Is there a chance I still need to do a make to create the\npostgresql.jar file. I have installed the postgresql rpm's for\npostgres6.5/. Any ideas on how to do the make in postgresql 6.5?\n\nEdnut\n\n\n* Sent from RemarQ http://www.remarq.com The Internet's Discussion Network *\nThe fastest and easiest way to search and participate in Usenet - Free!\n\n",
"msg_date": "Thu, 28 Oct 1999 02:20:42 -0700",
"msg_from": "ednut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux/Postgres 6.5 problems using jdbc w/jdk1.2"
},
{
"msg_contents": "I am having the same problem. I tried the new URL. The problem still\ncontinues. Is there a chance I still need to do a make to create the\npostgresql.jar file. I have installed the postgresql rpm's for\npostgres6.5/. Any ideas on how to do the make in postgresql 6.5?\n\nEdnut\nIn article <[email protected]>, Christian Denning\n<[email protected]> wrote:\n> Your database url is wrong. A postgres database needs the\n> following URL:\n> jdbc:postgresql://theComputerName/thePostgresDatabaseName\n> if you omit something, you get your NullPointerException\n> Christian\n> EA wrote:\n> >\n> > Here are the errors I get while running the example classes\n> provided by\n> > Postgres:\n> >\n> > 1. Without Debug\n> > <---------------------------------------------------\n> > PostgreSQL basic test v6.3 rev 1\n> >\n> > Connecting to Database URL = jdbc:postgresql:db1\n> > Exception caught.\n> > Something unusual has occured to cause the driver to fail.\n> Please report\n> > this exception: {1}\n> > Something unusual has occured to cause the driver to fail.\n> Please report\n> > this exception: {1}\n> > at postgresql.Driver.connect(Compiled Code)\n> > at java.sql.DriverManager.getConnection(Compiled Code)\n> > at java.sql.DriverManager.getConnection(Compiled Code)\n> > at example.basic.<init>(Compiled Code)\n> > at example.basic.main(Compiled Code)\n> > <-----------------------------------------------------\n> > 2. With Debug\n> > <---------------------------------------------------\n> > PostgreSQL basic test v6.3 rev 1\n> >\n> > DriverManager.initialize: jdbc.drivers = null\n> > JDBC DriverManager initialized\n> > registerDriver:\n> > driver[className=postgresql.Driver,postgresql.Driver@a574f633]\n> > Connecting to Database URL = jdbc:postgresql:web\n> > DriverManager.getConnection(\"jdbc:postgresql:db1\")\n> > trying\n> driver[className=postgresql.Driver,postgresql.Driver@a574f633]\n> > -- listing properties --\n> > password=start123\n> > user=db1\n> > PGDBNAME=db1\n> > Protocol=postgresql\n> > Using postgresql.jdbc2.Connection\n> > Exception caught.\n> > java.lang.NullPointerException\n> > java.lang.NullPointerException\n> > at java.io.Writer.write(Compiled Code)\n> > at java.io.PrintStream.write(Compiled Code)\n> > at java.io.PrintStream.print(Compiled Code)\n> > at java.io.PrintStream.println(Compiled Code)\n> > at java.lang.Throwable.printStackTrace(Compiled Code)\n> > at java.sql.SQLException.<init>(Compiled Code)\n> > at postgresql.util.PSQLException.<init>(Compiled Code)\n> > at postgresql.Driver.connect(Compiled Code)\n> > at java.sql.DriverManager.getConnection(Compiled Code)\n> > at java.sql.DriverManager.getConnection(Compiled Code)\n> > at example.basic.<init>(Compiled Code)\n> > at example.basic.main(Compiled Code)\n> >\n> > <-----------------------------------------------------\n> >\n> > Anyone recognize these errors.\n\n\n\n* Sent from RemarQ http://www.remarq.com The Internet's Discussion Network *\nThe fastest and easiest way to search and participate in Usenet - Free!\n\n",
"msg_date": "Thu, 28 Oct 1999 02:20:57 -0700",
"msg_from": "ednut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux/Postgres 6.5 problems using jdbc w/jdk1.2"
}
] |
[
{
"msg_contents": "\nMorning all...\n\n\tAm looking at a v6.4 system, and if I do:\n\nselect relname from pg_class;\n\n\tIt returns all the relations...but if I do:\n\nselect relname,relacl from pg_class;\n\n\tIt gives me:\n\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally before or\nwhile processing the request. We have lost the connection to the backend,\nso further processing is impossible. Terminating.\n\n\tStill investigating, but if anyone has any suggestions, I'm all\nears...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 2 Sep 1999 10:49:47 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Odd problem with pg_class ..."
},
{
"msg_contents": "\nOkay, figured it out...\n\nThe problem exists in v6.5.x as well.\n\nBasically, the user had, it seems, accidentally deleted various groups\nfrom pg_group, which he had used to GRANT group permissions on various\ntables, causing an error message of:\n\nNOTICE: get_groname: group 185 not found\n\nto be printed to his errlog.\n\nIn v6.5.x, you at least get something out through psql when you do this,\nbut should we get:\n\n==========================\n | status | {\"=\",\"group keystone=arwR\"} |\n +------------------+-----------------------------+\npgsql_keystone=> delete from pg_group where groname='keystone';\nDELETE 1\npgsql_keystone=> \\z\nNOTICE: get_groname: group 0 not found\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\n===========================\n\nDoesn't sound very \"friendly\"...\n\n\nOn Thu, 2 Sep 1999, The Hermit Hacker wrote:\n\n> \n> Morning all...\n> \n> \tAm looking at a v6.4 system, and if I do:\n> \n> select relname from pg_class;\n> \n> \tIt returns all the relations...but if I do:\n> \n> select relname,relacl from pg_class;\n> \n> \tIt gives me:\n> \n> pqReadData() -- backend closed the channel unexpectedly.\n> This probably means the backend terminated abnormally before or\n> while processing the request. We have lost the connection to the backend,\n> so further processing is impossible. Terminating.\n> \n> \tStill investigating, but if anyone has any suggestions, I'm all\n> ears...\n> \n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n> \n> ************\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 2 Sep 1999 11:13:40 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Odd problem with pg_class ..."
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> \tAm looking at a v6.4 system, and if I do:\n> select relname from pg_class;\n> \tIt returns all the relations...but if I do:\n> select relname,relacl from pg_class;\n> \tIt gives me:\n> pqReadData() -- backend closed the channel unexpectedly.\n\nI do not see this on my 6.4 setup. Possibly you have inconsistent\nACL data in your database --- like the example someone saw recently\nwhere deleting a group name that was still referenced by an ACL\nwould make ACL display crash. (I think this got fixed post-6.4...\nor maybe it's still an outstanding bug?)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Sep 1999 10:55:29 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Odd problem with pg_class ... "
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> NOTICE: get_groname: group 0 not found\n> pqReadData() -- backend closed the channel unexpectedly.\n\nget_groname returns NULL on failure, and it looks like aclitemout\nin backend/utils/adt/acl.c isn't checking for that. Probably\naclitemout ought to produce the decimal equivalent of the group ID\nif no name is available. Compare what it does in the UID case just\nabove.\n\nBTW, the ifdef'd out elog(NOTICE) in the UID case could be re-enabled\nnow, because I fixed the FE/BE protocol problem with NOTICEs generated\nby type conversion routines...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Sep 1999 11:03:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Odd problem with pg_class ... "
},
{
"msg_contents": "This problem was reported in 08/17/1999 by me and 08/30/1999 by D Herssein.\nNo answer received.\nBoth are attached.\n---------------- E-mail\n08/17 -----------------------------------------------------------\nHi All,\n\nTwo weeks ago somebody had reported that drop user don't remove rights from\nrelacl field of pg_class. This problem is more serious if you delete a group\nfrom pg_group without revoking rights before. It causes the backend to terminate\nabnormally.\n\nMaybe interesting for others!! Could anybody include DENY sql command in\nTODO list.\n\nMy problem is: A group have rights to access some table. I include a new\nuser in this group, but for three months he will not have rights to access\nthis table. So, if the new user have no rights, he will get rights from his\ngroup. I think it would be enough DENY command (deny all on sometable from\nnewuser) includes something like \"NEWUSER=\" in relacl field.\n\nJust one more question: Aclitem type have the following rights: =arwR\n(insert, select, update/delete, create rule, I suppose).\nHow could I grant update and revoke delete permissions on a table ?\n\nBest Regards,\n\nRicardo Coelho.\n----------------------------------------------------------------------------\n---------------------\n-------------- E-mail\n08/30 -------------------------------------------------------------------\nHi Denny,\n\nI solved this problem (backend crashes when we delete a group without\nrevoking privileges) adding the group again with the same grosysid, revoking\nall privileges on all tables and deleting this group.\n\nBest Regards,\n\nRicardo Coelho.\n\n----- Original Message -----\nFrom: D Herssein <[email protected]>\nTo: Ricardo Coelho <[email protected]>\nSent: Monday, August 30, 1999 1:03 PM\nSubject: HELP Re: pg_group, etc..\n\n\n> I just read your post AFTER I sent the HELP request to the group.\n> I must have deleted the group/user in the wrong order while playing with\n> the db trying to learn how to grant group access to users.\n> How do I get myself back to normal?\n>\n>\n> --\n> Life is complicated. But the simpler alternatives are not very\n> desirable. (R' A. Kahn)\n>\n\n----------------------------------------------------------------------------\n-------------------\n\n----- Original Message -----\nFrom: Tom Lane <[email protected]>\nTo: The Hermit Hacker <[email protected]>\nCc: <[email protected]>\nSent: Thursday, September 02, 1999 12:03 PM\nSubject: Re: [HACKERS] Odd problem with pg_class ...\n\n\n> The Hermit Hacker <[email protected]> writes:\n> > NOTICE: get_groname: group 0 not found\n> > pqReadData() -- backend closed the channel unexpectedly.\n>\n> get_groname returns NULL on failure, and it looks like aclitemout\n> in backend/utils/adt/acl.c isn't checking for that. Probably\n> aclitemout ought to produce the decimal equivalent of the group ID\n> if no name is available. Compare what it does in the UID case just\n> above.\n>\n> BTW, the ifdef'd out elog(NOTICE) in the UID case could be re-enabled\n> now, because I fixed the FE/BE protocol problem with NOTICEs generated\n> by type conversion routines...\n>\n> regards, tom lane\n>\n> ************\n>\n>\n\n",
"msg_date": "Thu, 2 Sep 1999 13:35:03 -0300",
"msg_from": "\"Ricardo Coelho\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Odd problem with pg_class ... "
}
] |
[
{
"msg_contents": "Hi, I've spent the last 4 days working my butt off trying to find the cause of\nthe seemingly random vacuum analyze crash. Actually I've been just\ntrying to reproduce it, 'cos as soon as I added in -ggdb into the\ncompile rules it stopped happening *grrr* (not that I'm surprised. It\nwas random at best before, and things like this always hide when you\ntry and look for them).\n\nBut after 4 days of frustration, I just want to be sure - nobody else\nhas found the problem and solved it, have they? I just don't want to\nwaste my time on this if someone else has found the cause...\n\nThanx\n\n\t\t\t\t\tM Simms\n",
"msg_date": "Thu, 2 Sep 1999 15:19:17 +0100 (BST)",
"msg_from": "Michael Simms <[email protected]>",
"msg_from_op": true,
"msg_subject": "vacuum analyze"
},
{
"msg_contents": "Michael Simms <[email protected]> writes:\n> But after 4 days of frustration, I just want to be sure - nobody else\n> has found the problem and solved it have they? I just dont want to\n> waste my time on this if someone else has found the cause...\n\nLet's see ... I know that removing pg_vlock while vacuum is running\nwill lead to a coredump after vacuum finishes (it doesn't recover\ncleanly after its attempt to unlink pg_vlock fails). I think I know\nhow to fix that but it's not done yet. The same problem could affect\nany error that is detected between vacuum's internal transactions.\nDo you get any error reports in the postmaster log when there is a\ncrash?\n\nBeyond that, I don't recall having heard of any recent fixes that affect\nvacuum.\n\nIf you can create a reproducible example then more people could poke\nat it, so that seems like the avenue to focus on.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Sep 1999 11:33:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] vacuum analyze "
},
{
"msg_contents": "> \n> Michael Simms <[email protected]> writes:\n> > But after 4 days of frustration, I just want to be sure - nobody else\n> > has found the problem and solved it have they? I just dont want to\n> > waste my time on this if someone else has found the cause...\n> \n> Let's see ... I know that removing pg_vlock while vacuum is running\n> will lead to a coredump after vacuum finishes (it doesn't recover\n> cleanly after its attempt to unlink pg_vlock fails). I think I know\n> how to fix that but it's not done yet. The same problem could affect\n> any error that is detected between vacuum's internal transactions.\n> Do you get any error reports in the postmaster log when there is a\n> crash?\n\nahem, well, to be honest, I've never found any documentation on how to\nread the logs *embarrassed smile*.\n\ntemplate1=> select * from pg_log;\nERROR: pg_log cannot be accessed by users\n\nThat happens with any account.\n\nIt COULD be a problem with that, as I have a crontab process that vacuums\neverything every 24 hours, but also I perform some minor vacuums in the\nmeantime, some of which may occur when the main vacuum is happening. I didn't\nnotice that as a pattern, but it certainly COULD be that. I'll check into it.\n\n> Beyond that, I don't recall having heard of any recent fixes that affect\n> vacuum.\n> \n> If you can create a reproducible example then more people could poke\n> at it, so that seems like the avenue to focus on.\n\nYup, well, if I could get it to happen *at all* any more, I could poke around,\nas I am running the backend that is handling the vacuum under gdb. If I\nfind a reproducible way I will certainly report it here.\n\nThanx\n\n\t\t\t\t\t\tM Simms\n",
"msg_date": "Thu, 2 Sep 1999 20:40:54 +0100 (BST)",
"msg_from": "Michael Simms <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] vacuum analyze"
},
{
"msg_contents": "Michael Simms <[email protected]> writes:\n> ahem, well, to be honest, Ive never found any documentation on how to\n> read the logs *embarrassed smile*.\n\n> template1=> select * from pg_log;\n> ERROR: pg_log cannot be accessed by users\n\nNo, no, not pg_log. I'm talking about the text file that you've\ndirected the postmaster's stdout and stderr into. (You are doing that\nand not dropping it on the floor, I trust.)\n\n> It COULD be a problem with that, as I have a crontab process that vacuums\n> everything every 24 hours, but also I perform some minor vacuums in the\n> meantime, some of which may occur when the main vacuum is happening.\n\npg_vlock exists specifically to prevent two concurrent vacuums. The\nscenario I was talking about involved removing it by hand, which you\nwouldn't do unless you were trying to provoke a vacuum error (or,\nperhaps, cleaning up after a previous vacuum run coredumped).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Sep 1999 17:05:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] vacuum analyze "
},
{
"msg_contents": "Tom Lane wrote:\n\n> Let's see ... I know that removing pg_vlock while vacuum is running\n> will lead to a coredump after vacuum finishes (it doesn't recover\n> cleanly after its attempt to unlink pg_vlock fails). I think I know\n> how to fix that but it's not done yet. The same problem could affect\n> any error that is detected between vacuum's internal transactions.\n> Do you get any error reports in the postmaster log when there is a\n> crash?\n>\n> Beyond that, I don't recall having heard of any recent fixes that affect\n> vacuum.\n>\n> If you can create a reproducible example then more people could poke\n> at it, so that seems like the avenue to focus on.\n>\n> regards, tom lane\n\nPerhaps the bug I reported on pgsql-bugs about a week ago has some relation\nto this problem:\nI had been able to reproducibly (?) crash postmaster with my example\nprogram (a loop of\nupdate table) combined with several vacuum commands in a separate task.\nAs the size of the table's index grows, a failure becomes almost certain.\n\nIf you think the program might help you, contact me or look into bugs'\narchives.\n\nRegards\n Christof\n\n\n\n",
"msg_date": "Wed, 08 Sep 1999 13:16:59 +0200",
"msg_from": "Christof Petig <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] vacuum analyze"
}
] |
[
{
"msg_contents": "Sorry, guys. Here is the ultimate patch which keeps the entire\nbehavior as it was, apart from forbidding minus-terminated \noperators. Seems that I have to break the habit of doing before\nthinking properly :-/ The point is that my second patch breaks\nconstructs like a & b or a ! b. This patch is to be applied \ninstead of any of two other today's patches.\n\n-- \nLeon.",
"msg_date": "Thu, 02 Sep 1999 20:38:20 +0500",
"msg_from": "Leon <[email protected]>",
"msg_from_op": true,
"msg_subject": "Lexer again."
},
{
"msg_contents": "> Sorry, guys. Here is the ultimate patch which keeps the entire\n> behavior as it was, apart from forbidding minus-terminated \n> operators. Seems that I have to break the habit of doing before\n> thinking properly :-/ The point is that my second patch breaks\n> constructs like a & b or a ! b. This patch is to be applied \n> instead of any of two other today's patches.\n> \n\nApplied.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 27 Sep 1999 17:02:05 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Lexer again."
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > Sorry, guys. Here is the ultimate patch which keeps the entire\n> > behavior as it was, apart from forbidding minus-terminated\n> > operators. Seems that I have to break the habit of doing before\n> > thinking properly :-/ The point is that my second patch breaks\n> > constructs like a & b or a ! b. This patch is to be applied\n> > instead of any of two other today's patches.\n> >\n> \n> Applied.\n\nHey! Later discussion on that matter made us think that minus-terminated\noperators have to be preserved, especially considering\nthat there is already one such operator here: geometric ?-.\nI'm terribly sorry, but that patch shouldn't be applied. \n\nThe other patch applied by you which exterminates uminus exclusive\nstate in parser is considered to be ok.\n\n-- \nLeon.\n-------\nHe knows he'll never have to answer for any of his theories actually \nbeing put to test. If they were, they would be contaminated by reality.\n\n",
"msg_date": "Tue, 28 Sep 1999 08:17:14 +0500",
"msg_from": "Leon <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Lexer again."
},
{
"msg_contents": "> Bruce Momjian wrote:\n> > \n> > > Sorry, guys. Here is the ultimate patch which keeps the entire\n> > > behavior as it was, apart from forbidding minus-terminated\n> > > operators. Seems that I have to break the habit of doing before\n> > > thinking properly :-/ The point is that my second patch breaks\n> > > constructs like a & b or a ! b. This patch is to be applied\n> > > instead of any of two other today's patches.\n> > >\n> > \n> > Applied.\n> \n> Hey! Later discussion on that matter made us think that minus-terminated\n> operators have to be preserved, especially considering\n> that there is already one such operator here: geometric ?-.\n> I'm terribly sorry, but that patch shouldn't be applied. \n> \n> The other patch applied by you which exterminates uminus exclusive\n> state in parser is considered to be ok.\n\nReversed out.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 27 Sep 1999 23:39:26 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Lexer again."
}
] |
[
{
"msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> Bruce,\n> \n> The replacement of the existing client/server communication project with\n> CORBA looks very interesting, I would love to get involved with something\n> like that. Is there anyone working on it at the moment? What area of it\n> would you like me to look into, any ideas of how I could turn a project like\n> that into a good Thesis? If you can give me some pointers I'll go and speak\n> to my tutor about it all.\n\n\n[CC'ing to hackers for comments.]\n\nWell, one idea is to create a server that listens on a certain port for\nCORBA requests, sends them to a backend for processing, and returns the\nresult.\n\nThe other idea is to replace our current communication system that uses\nsingle-character flags and data with a CORBA model. See the developers\ndocumentation for details on that.\n\nI think the first one is clearly good; the second may suffer from\nperformance problems, or it may not be worth changing all our interfaces\nto handle a new protocol.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 2 Sep 1999 22:31:10 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: University Masters Project"
},
{
"msg_contents": "On Thu, 2 Sep 1999, Bruce Momjian wrote:\n\n> [Charset iso-8859-1 unsupported, filtering to ASCII...]\n> > Bruce,\n> > \n> > The replacement of the existing client/server communication project with\n> > CORBA looks very interesting, I would love to get involved with something\n> > like that. Is there anyone working on it at the moment? What area of it\n> > would you like me to look into, any ideas of how I could turn a project like\n> > that into a good Thesis? If you can give me some pointers I'll go and speak\n> > to my tutor about it all.\n> \n> \n> [CC'ing to hackers for comments.]\n> \n> Well, one idea is to create a server that listens on a certain port for\n> CORBA requests, sends them to a backend for processing, and returns the\n> result.\n> \n> The other idea is to replace our current communication system that uses\n> single-character flags and data with a corba model. See developers\n> documentation for deals on that.\n> \n> I think the first on is clearly good, the second may suffer from\n> performance problems, or it may not be worth changing all our interfaces\n> to handle a new protocol.\n\nI'm curious as to whether there is a way of testing that without too much\ntrouble? Even the investigation of *that* might make for the thesis in\nitself? \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Fri, 3 Sep 1999 08:19:07 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: University Masters Project"
},
{
"msg_contents": "Hello,\n\nI am using Postgres extensively for a number of projects. I am\nextremely happy with its performance and flexibility. I am trying to\noptimize the system, currently I run the postmaster with the following\nsetting: \n\tpostmaster -i -B 2048 -o '-S 2048'\n\nI have a couple of large(?) tables which I would like to keep them in\nmemory (cached) so that searches are performed as fast as possible.\n\nIs it possible to 'pin' the tables and it's indexes in memory? \nAre there any other options/values which would yield better performance?\n\nThanks,\n-Edwin S. Ramirez-\n",
"msg_date": "Fri, 03 Sep 1999 11:42:22 -0400",
"msg_from": "Edwin Ramirez <[email protected]>",
"msg_from_op": false,
"msg_subject": "Postgres Performance"
},
{
"msg_contents": "> I have a couple of large(?) tables which I would like to keep them in\n> memory (cached) so that searches are performed as fast as possible.\n> Is it possible to 'pin' the tables and it's indexes in memory?\n\nNot explicitly. We rely on the OS to do that.\n\n> Are there any other options/values which would yield better performance?\n\nBy default, the backend \"fsyncs\" for every query. You can disable\nthis, which would then allow the tables to hang around in memory until\nthe OS decides to flush to disk. Not everyone should do this, since\nthere is a (small) risk that if your computer crashes after some\nupdates but before things are flushed then the db might become\ninconsistant. afaik we have never had an unambiguous report that this\nhas actually happened (but others might remember differently). There\nis already that risk to some extent, but instead of the window being\nO(1sec) it becomes O(30sec).\n\nRun the backend by adding '-o -F' (or just '-F' to your existing list\nof \"-o\" options). \n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 03 Sep 1999 16:08:49 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Postgres Performance"
},
{
"msg_contents": "Edwin Ramirez <[email protected]> writes:\n> I have a couple of large(?) tables which I would like to keep them in\n> memory (cached) so that searches are performed as fast as possible.\n> Is it possible to 'pin' the tables and it's indexes in memory? \n\nIf the tables are being touched often, then they will stay in buffer\ncache of their own accord. I doubt that pinning them would improve\nperformance --- if they do get swapped out it'd be because some other\ntable(s) need to be accessed now, and if you did have these tables\npinned you'd be taking a large hit in access performance for those other\ntables because of inadequate buffer space. LRU buffering policy really\nworks pretty well, so I don't think you need to worry about it.\n\n> currently I run the postmaster with the following setting: \n> \tpostmaster -i -B 2048 -o '-S 2048'\n> Are there any other options/values which would yield better performance?\n\nIf you have a reliable OS and power source, consider -o -F (no fsync).\nThis usually makes for a very substantial performance improvement, and\nit can only hurt if your machine goes down without having performed\nall the writes the kernel was told to do.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 03 Sep 1999 12:09:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Postgres Performance "
},
{
"msg_contents": "Dear All,\n\nYes I agree with you that something like that might make a thesis in itself,\nand definitely sounds interesting.\n\nI really need to sit down and go through PostgreSQL so that I understand how\nit all works, so that I can ask questions without wasting everyone's time,\nas I'm sure a lot of the questions I currently have will be in the\ndocumentation. I start Uni in 4 weeks time, by which point I hope to have the\nbasics of PostgreSQL and its architecture, which along with guidance from my\ntutor should then give me a good base to start the project on.\n\nI'll keep you all informed of my progress with this over the next few weeks,\nand my University's response to my request to work on a project of this\nnature.\n\nWho should I direct my correspondence to, as I don't want to start filling\nup people's email boxes with unnecessary email.\n\nRegards\n\nMark Proctor\nBrunel University\nEmail : [email protected]\nICQ : 8106598\n\n-----Original Message-----\nFrom:\tThe Hermit Hacker [mailto:[email protected]]\nSent:\tFriday, September 03, 1999 12:19 PM\nTo:\tBruce Momjian\nCc:\[email protected]; PostgreSQL-development\nSubject:\tRe: [HACKERS] Re: University Masters Project\n\nOn Thu, 2 Sep 1999, Bruce Momjian wrote:\n\n> [Charset iso-8859-1 unsupported, filtering to ASCII...]\n> > Bruce,\n> >\n> > The replacement of the existing client/server communication project with\n> > CORBA looks very interesting, I would love to get involved with\nsomething\n> > like that. Is there anyone working on it at the moment? What area of it\n> > would you like me to look into, any ideas of how I could turn a project\nlike\n> > that into a good Thesis? 
If you can give me some pointers I'll go and\nspeak\n> > to my tutor about it all.\n>\n>\n> [CC'ing to hackers for comments.]\n>\n> Well, one idea is to create a server that listens on a certain port for\n> CORBA requests, sends them to a backend for processing, and returns the\n> result.\n>\n> The other idea is to replace our current communication system that uses\n> single-character flags and data with a corba model. See developers\n> documentation for deals on that.\n>\n> I think the first on is clearly good, the second may suffer from\n> performance problems, or it may not be worth changing all our interfaces\n> to handle a new protocol.\n\nI'm curious as to whether there is a way of testing that without too much\ntrouble? Even the investigation of *that* might make for the thesis in\nitself?\n\nMarc G. Fournier ICQ#7615664 IRC Nick:\nScrappy\nSystems Administrator @ hub.org\nprimary: [email protected] secondary:\nscrappy@{freebsd|postgresql}.org\n\n\n",
"msg_date": "Fri, 3 Sep 1999 18:26:21 +0100",
"msg_from": "\"Mark Proctor\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Re: University Masters Project"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> there is a (small) risk that if your computer crashes after some\n> updates but before things are flushed then the db might become\n> inconsistant. afaik we have never had an unambiguous report that this\n> has actually happened (but others might remember differently). There\n> is already that risk to some extent, but instead of the window being\n> O(1sec) it becomes O(30sec).\n\nI believe we use fsync not so much to reduce the time window where you\ncould lose a supposedly-committed update as to ensure that writes are\nperformed in a known order. With fsync enabled, the data-file pages\ntouched by an update query will hit the disk before the pg_log entry\nsaying the transaction is committed hits the disk. If you crash\nsomewhere during that sequence, the transaction appears uncommitted\nand there is no loss of consistency. (We assume here that writing\na single page to disk is an atomic operation, which is only sort-of\ntrue, but it's the best we can do atop a Unix kernel. Other than that,\nthere is no \"window\" for possible inconsistency.)\n\nWithout fsync, the kernel writes the pages to disk in whatever order\nit finds convenient, so following a crash there might be a pg_log entry\nsaying transaction N was committed, when in fact only some of\ntransaction N's tuples made it to disk. Then you see an inconsistent\ndatabase: some of the transaction's updates are there, some are not.\nThis might be relatively harmless, or deadly, depending on your\napplication logic and just what the missing updates are.\n\nAnother risk without fsync is that a client application might have been\ntold that the transaction was committed, when in fact it gets lost due to\na crash moments later before pg_log gets physically updated. 
Again, the\npossible consequences would depend on your application.\n\nThe total number of writes performed without fsync is usually way less\nthan with, since we tend to write certain pages (esp. pg_log) over and\nover --- the kernel will reduce that to one physical disk write every\nsync interval (~ 30sec) unless we force its hand with fsync. That's\nwhere most of the performance improvement comes from.\n\nIf you have a reliable kernel and reliable hardware/power supply, then\nyou might as well turn off fsync. A crash in Postgres itself would\nnot cause a problem --- the writes are out there in the kernel's disk\nbuffers, and the only issue is do you trust the platform to get the\ndata onto stable storage.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 03 Sep 1999 13:43:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Postgres Performance "
},
{
"msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> Dear All,\n> \n> Yes I agree with you that something like that might make a thesis in itself,\n> and definitely sounds interesting.\n> \n> I really need to sit down and go through PostgreSQL so that I understand how\n> it all works, so that I can ask questions without wasting everyone's time,\n> as I'm sure a lot of the questions I currently have will be in the\n> documentation. I start Uni in 4 weeks time, which by then I hope to have the\n> basics to PostgreSQL and its architecture, that along with guidance from my\n> tutor should then give me a good base to start the project on.\n> \n> I'll keep you all informed of my progress with this over the next few weeks,\n> and my University's response to my request to work on a project of this\n> nature.\n> \n> Who should I direct my correspondance to, as I don't want to start filling\n> up people's email box's with unessecary email.\n\nHackers list is fine.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 3 Sep 1999 13:44:05 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: University Masters Project"
},
{
"msg_contents": "If I do a large search the first time is about three times slower than\nany subsequent overlapping (same data) searches. I would like to always\nget the higher performance. \n\nHow are the buffers that I specify to the postmaster used?\nWill increasing this number improve things?\n\nThe issue that I am encountering is that no matter how much memory I\nhave on a computer, the performance is not improving. I am willing to\nfund a project to implement a postgres specific, user configurable\ncache.\n\nAny ideas?\n-Edwin S. Ramirez-\n\nTom Lane wrote:\n> \n> Edwin Ramirez <[email protected]> writes:\n> > I have a couple of large(?) tables which I would like to keep them in\n> > memory (cached) so that searches are performed as fast as possible.\n> > Is it possible to 'pin' the tables and it's indexes in memory?\n> \n> If the tables are being touched often, then they will stay in buffer\n> cache of their own accord. I doubt that pinning them would improve\n> performance --- if they do get swapped out it'd be because some other\n> table(s) need to be accessed now, and if you did have these tables\n> pinned you'd be taking a large hit in access performance for those other\n> tables because of inadequate buffer space. LRU buffering policy really\n> works pretty well, so I don't think you need to worry about it.\n> \n> > currently I run the postmaster with the following setting:\n> > postmaster -i -B 2048 -o '-S 2048'\n> > Are there any other options/values which would yield better performance?\n> \n> If you have a reliable OS and power source, consider -o -F (no fsync).\n> This usually makes for a very substantial performance improvement, and\n> it can only hurt if your machine goes down without having performed\n> all the writes the kernel was told to do.\n> \n> regards, tom lane\n> \n> ************\n",
"msg_date": "Wed, 08 Sep 1999 17:05:38 -0400",
"msg_from": "Edwin Ramirez <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Postgres Performance"
},
{
"msg_contents": "> \n> If I do a large search the first time is about three times slower than\n> any subsequent overlapping (same data) searches. I would like to always\n> get the higher performance. \n> \n> How are the buffers that I specify to the postmaster used?\n> Will increasing this number improve things?\n> \n> The issue that I am encountering is that no matter how much memory I\n> have on a computer, the performance is not improving. I am willing to\n> fund a project to implement a postgres specific, user configurable\n> cache.\n> \n> Any ideas?\n> -Edwin S. Ramirez-\n\nI think that the fact you are seeing an improvement already shows a good level\nof caching.\n\nWhat happens the first time is that it must read the data off the disc. After\nthat the data comes from memory IF it is cached. Disc reads will always be\nslower with current disc technology.\n\nI would imagine (I'm not an expert, but through observation) that if you\ndrastically increase the number of shared memory buffers, then when you\nstart up your front-end and simply do a select * from the tables, it may even keep\nthem all in memory from the start.\n\n\t\t\t\t\t\tM Simms\n",
"msg_date": "Wed, 8 Sep 1999 22:41:04 +0100 (BST)",
"msg_from": "Michael Simms <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Postgres Performance"
},
{
"msg_contents": "Michael Simms <[email protected]> writes:\n>> If I do a large search the first time is about three times slower than\n>> any subsequent overlapping (same data) searches. I would like to always\n>> get the higher performance. \n\n> What happens the first time is that it must read the data off the disc. After\n> that the data comes from memory IF it is cached. Disc read will always be\n> slower with current disc technology.\n\nThere is that effect, but I suspect Edwin may also be seeing another\neffect. When a tuple is first inserted or modified, it is written into\nthe table with a marker saying (in effect) \"Inserted by transaction NNN,\nnot committed yet\". To find out whether the tuple is really any good,\nyou have to go and consult pg_log to see if that transaction got\ncommitted. Obviously, that's slow, so the first subsequent transaction\nthat does so and finds that NNN really did get committed will rewrite\nthe disk page with the tuple's state changed to \"Known committed\".\n\nSo, the first select after an update transaction will spend additional\ncycles checking pg_log and marking committed tuples. In effect, it's\ndoing the last phase of the update. We could instead force the update\nto do all its own housekeeping, but the overall result wouldn't be any\nfaster; probably it'd be slower.\n\n> I would imagine (Im not an expert, but through observation) that if\n> you drasticly increase the number of shared memory buffers, then when\n> you startup your front-end simply do a select * from the tables, it\n> may even keep them all in memory from the start.\n\nThe default buffer space (64 disk pages) is not very large --- use\na larger -B setting if you have the memory to spare.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Sep 1999 18:40:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Postgres Performance "
},
{
"msg_contents": "\n\tI believe that disk pages are 1k in linux systems, that would mean that\nI am allocating 3M when using \"postmaster -i -B 3096 -o -S 2048\" and 2M\nfor sorting. That is very low. \n\n\tHowever, some of the postgres processes have memory segments larger\nthan 3M (see bottom).\n\n> I would imagine (Im not an expert, but through observation) that if\n> you drasticly increase the number of shared memory buffers, then when\n> you startup your front-end simply do a select * from the tables, it\n> may even keep them all in memory from the start.\n\nThat's basically what I tried to do, but I am unable to specify a very\nlarge number (it complained when I tried -B > ~3900). Do these buffer\ncontain the actual table data?\nI understand that the OS is buffering the data read from disk, but\npostgres is competing with all the other processes on the system. I\nthink that if postgres had a dedicated (user configurable) cache, like\nOracle, then users could configure the system/postgres better.\n\n\n4:29pm up 83 days, 23:42, 5 users, load average: 0.00, 0.01, 0.00\n75 processes: 74 sleeping, 1 running, 0 zombie, 0 stopped\nCPU states: 0.1% user, 1.1% system, 0.0% nice, 98.7% idle\nMem: 128216K av, 98812K used, 29404K free, 67064K shrd, 18536K buff\nSwap: 80288K av, 22208K used, 58080K free 14924K\ncached\n\n PID USER PRI NI SIZE RSS SHARE STAT LIB %CPU %MEM TIME\nCOMMAND\n16633 postgres 0 0 26536 1384 1284 S 0 0.0 1.0 0:02\npostmaster\n18190 postgres 0 0 27708 3432 2720 S 0 0.0 2.6 0:00\npostmaster\n18303 postgres 0 0 27444 2728 2196 S 0 0.0 2.1 0:00\npostmaster\n18991 postgres 0 0 27472 2908 2392 S 0 0.0 2.2 0:00\npostmaster\n19154 postgres 0 0 27408 2644 2140 S 0 0.0 2.0 0:06\npostmaster\n19155 postgres 0 0 27428 2712 2188 S 0 0.0 2.1 0:00\npostmaster\n19157 postgres 0 0 27840 10M 10144 S 0 0.0 8.6 0:08\npostmaster\n19282 postgres 0 0 27560 3332 2732 S 0 0.0 2.5 0:11\npostmaster\n19335 postgres 0 0 27524 3112 2528 S 0 0.0 2.4 0:03\npostmaster\n19434 
postgres 0 0 27416 2700 2192 S 0 0.0 2.1 0:00\npostmaster\n",
"msg_date": "Thu, 09 Sep 1999 16:52:25 -0400",
"msg_from": "Edwin Ramirez <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Postgres Performance"
},
{
"msg_contents": "Edwin Ramirez <[email protected]> writes:\n> \tI believe that disk pages are 1k in linux systems, that would mean that\n> I am allocating 3M when using \"postmaster -i -B 3096 -o -S 2048\" and 2M\n> for sorting. That is very low. \n\nNo, buffers are 8K apiece (unless you've changed the BLCKSZ constant in\nconfig.h). So -B 3096 means 24 meg of buffer space. The -S number is\nindeed measured in kilobytes, however.\n\n> \tHowever, some of the postgres processes have memory segments larger\n> than 3M (see bottom).\n\n'top' does not show shared memory segments AFAIK, and the buffer area is\na shared memory segment. Try \"ipcs -m -a\" to see what's going on in\nshared memory.\n\n> That's basically what I tried to do, but I am unable to specify a very\n> large number (it complained when I tried -B > ~3900).\n\nYou're probably running into a configuration limit of your kernel ---\nat a guess, your kernel is configured not to give out shared memory\nsegments exceeding 32Mb.\n\n> I understand that the OS is buffering the data read from disk, but\n> postgres is competing with all the other processes on the system. I\n> think that if postgres had a dedicated (user configurable) cache, like\n> Oracle, then users could configure the system/postgres better.\n\nThe shared-buffer cache does serve that purpose...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 10 Sep 1999 11:01:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Postgres Performance "
}
] |
[
{
"msg_contents": "> This inconsistency bothers me: I've always thought that char(),\n> varchar(), and text() are functionally interchangeable, but it seems\n> that's not so. Is this behavior mandated by SQL92?\n\nYes, the behavior is correct, and mandated by SQL92. \nA char would not be able to hold the information of how many trailing blanks\nare user data, since it fills the column with trailing blanks.\n\nAndreas\n",
"msg_date": "Fri, 03 Sep 1999 09:13:52 +0200",
"msg_from": "Andreas Zeugswetter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] SELECT BUG "
}
] |
[
{
"msg_contents": ">> \n>> No good: we already have some. There are three standard geometric\n>> operators named \"?-\" ... not to mention lord-knows-what user-defined\n\t<snip>\n>> It would also be worth remembering that \"-\" is far from the \n>> only unary\n>> operator name we have, and so a solution that creates \n>> special behavior\n>> just for \"-\" is really no solution at all. Making a special case for\n>> \"-\" just increases the potential for confusion, not \n>> decreases it, IMHO.\n>> \nThis is even more of a reason to remove it from where it is currently. The\ncode in the parser now is explicitly for a minus. I think an idea might be\nto check out some other SQL scanner implementations, and see how they do it.\nI will also speak to Vern, and see if he can shed any light on the matter.\n\nMikeA\n",
"msg_date": "Fri, 3 Sep 1999 09:32:38 +0200 ",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Postgres' lexer "
}
] |
[
{
"msg_contents": "Leon wrote:\n>> Ok. Especially if there are more unary operators (I always wondered\n>> what unary % in gram.y stands for :) it is reasonable not to make\n>> a special case of uminus and slightly change the old behavior. That\n>> is even more convincing that constructs like 3+-2 and 3+-b were \n>> parsed in different way, and, what is worse, a>-2 and a>-b also\n>> parsed differently. So let us ask the (hopefully) last question:\n>> Thomas (Lockhart), do you agree on always parsing constructs like\n>> '+-' or '>-' as is, and not as '+' '-' or '>' '-' ?\nThis construct doesn't always make sense. It should only be recognised as a\n'>-' if that operator exists, otherwise it should be either generate an\nerror (which is reasonable because of the ambiguity that it creates (not for\nthis operator, but for the general case)), or try to complete (if that's\npossible). I have a bit of a problem with reading this: a > -2 correctly,\nwhile not reading this: a>-2 correctly, because that implies that you are\nusing the space as a precedence operator. This should be done by braces.\nThis: a > (-2) is totally unambiguous, spaces or no spaces.\n\nPerhaps there is a general case for where unary operators are allowed to\nappear, and we can use this, e.g.: they can only appear at the beginning of\nan expression, or immediately after another operator (ignoring spaces).\nThis means that >- will be scanned as an operator if it exists, or a >\nfollowed by a unary minus if >- doesn't exist as an operator. And this\nremoves some ambiguity, because now we have a defined rule: if the - doesn't\nappear at the beginning of an expression, or immediately (ignoring spaces)\nafter another operator, then it must be a binary minus.\n\n\n\nMikeA\n",
"msg_date": "Fri, 3 Sep 1999 09:46:39 +0200 ",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Postgres' lexer"
},
{
"msg_contents": "\"Ansley, Michael\" <[email protected]> writes:\n> I have a bit of a problem with reading this: a > -2 correctly,\n> while not reading this: a>-2 correctly, because that implies that you are\n> using the space as a precedence operator. This should be done by braces.\n\nNot at all: this is a strictly lexical issue (where do we divide the\ninput into tokens) and whitespace has been considered a reasonable\nlexical separator for years. Furthermore, SQL already depends on\nwhitespace to separate tokens that are made of letters and digits.\nYou can't spell \"SELECT\" as \"SEL ECT\", nor \"SELECT f1\" as \"SELECTf1\",\nnor does \"SELECT 1 2;\" mean \"SELECT 12;\". So it seems perfectly\nreasonable to me to use whitespace to separate operator names when\nthere would otherwise be ambiguity about what's meant.\n\n> This: a > (-2) is totally unambiguous, spaces or no spaces.\n\nTrue, and there's nothing to stop you from writing that style if you\nprefer it.\n\n> Perhaps there is a general case for where unary operators are allowed to\n> appear, and we can use this, e.g.: they can only appear at the beginning of\n> an expression, or immediately after another operator (ignoring spaces).\n\nDon't forget about right-unary operators...\n\n> This means that >- will be scanned as an operator if it exists, or a >\n> followed by a unary minus if >- doesn't exist as an operator.\n\nI think it would be a really bad idea for the lexical analysis to depend\non whether or not particular operator names are defined, for the same\nreasons that lexical analysis of word tokens doesn't depend on whether\nthere are keywords/table names/field names that match those tokens.\nYou get into circularity problems very quickly if you do that.\nLanguage designers learned not to do that in the sixties...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 03 Sep 1999 10:59:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Postgres' lexer "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> I think it would be a really bad idea for the lexical analysis to depend\n> on whether or not particular operator names are defined, for the same\n> reasons that lexical analysis of word tokens doesn't depend on whether\n> there are keywords/table names/field names that match those tokens.\n\n101% correct :)\n\n> You get into circularity problems very quickly if you do that.\n> Language designers learned not to do that in the sixties...\n> \n\nAll that should be carved in stone and then erected as a monument :)\nIt is a good idea to explicitly state where and how to divide \nfunctions amongst components - though it places some (minor) \nrestrictions, it introduces an conceivable order, which one can\nabide by. E.g. no semantics is allowed in lexer. Even unary minus\nin numbers is semantics and isn't proper for lexer. \n\n-- \nLeon.\n\n",
"msg_date": "Fri, 03 Sep 1999 20:33:28 +0500",
"msg_from": "Leon <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Postgres' lexer"
}
] |
[
{
"msg_contents": "> But if it is correct, then we need to turn off oprcanhash for bpchareq.\n> Odd that no one has noticed this before.\n\nCurrently it works for constants, because they are blank padded.\nIt does not work for the char(8) = char(16) comparison with two\ntable columns.\n\nEighter the hash function for bpchar itself should be trailing blank \ninsensitive, or the bpchar would need to be padded or truncated before \ncomputing the hash.\n\nAndreas\n",
"msg_date": "Fri, 03 Sep 1999 10:00:37 +0200",
"msg_from": "Andreas Zeugswetter <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] SELECT BUG "
},
{
"msg_contents": "Andreas Zeugswetter <[email protected]> writes:\n>> But if it is correct, then we need to turn off oprcanhash for bpchareq.\n>> Odd that no one has noticed this before.\n\n> Currently it works for constants, because they are blank padded.\n> It does not work for the char(8) = char(16) comparison with two\n> table columns.\n\nThe case that can fail is equality across two different-width char\ncolumns from different tables. If they're in the same table, or if\nit's field = constant, then no join is involved so there's no risk\nof hashjoin being used. So this might be a relatively rare case\nafter all. And I think the 6.5 optimizer is more prone to choose\nhashjoin than it was in prior releases. Maybe it's not so odd that\nthis wasn't noticed sooner.\n\n> Eighter the hash function for bpchar itself should be trailing blank\n> insensitive, or the bpchar would need to be padded or truncated before\n> computing the hash.\n\nWe don't currently have datatype-dependent hash functions; it's one-\nsize-fits-all. I thought a little bit about adding type-specific\nhashing back when I did the last round of removing unsafe \"oprcanhash\"\nmarks, but it didn't seem worth the trouble. Unfortunately I missed\nbpchareq because I didn't know it does blank stripping...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 03 Sep 1999 11:20:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] SELECT BUG "
}
] |
[
{
"msg_contents": "I've got trouble in a *clean* checkout of this morning's current tree.\nSomething in the system tables is not quite consistant:\n\npostgres=> \\d\nERROR: nodeRead: Bad type 0\n\nAlso, I'm getting failures in regression tests:\n\nconstraints .. failed\n>From a bunch of similar messages; related to above symptom?:\nERROR: nodeRead: Bad type 0\n\nsanity_check .. failed\nOne extra message during first vacuum:\nERROR: nodeRead: Bad type 0\n\nmisc .. failed\nDifferent ordering in output. Wonder why it changed?\n\nselect_views .. failed\nERROR: nodeRead: Bad type 0\n\nrules .. failed\nERROR: nodeRead: Bad type 0\n\nplpgsql .. failed\nFormatting for rows has changed in psql? Or something with plpgsql has\nchanged the contents? I haven't tried tracking this down.\n976c976\n< slotname |roomno |slotlink \n|backlink \n---\n> slotname | roomno| slotlink|backlink \n\nDo any other platforms run the tests without trouble? Does anyone have\na guess at when and/or what broke?? I'm running on linux-2.0.36/RH5.2\nusing the as-shipped compiler.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 03 Sep 1999 14:48:30 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "main tree is (slightly) damaged"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> I've got trouble in a *clean* checkout of this morning's current tree.\n> Something in the system tables is not quite consistant:\n> postgres=> \\d\n> ERROR: nodeRead: Bad type 0\n\nYipes. Perhaps there is some platform-dependency in the md.c changes\nI committed a couple days ago. I am not seeing any such problems,\nbut that doesn't prove a lot. How far back was the last version\nthat worked on your system?\n\n> Formatting for rows has changed in psql? Or something with plpgsql has\n> changed the contents? I haven't tried tracking this down.\n> 976c976\n> < slotname |roomno |slotlink |backlink \n> ---\n> > slotname | roomno| slotlink|backlink \n\nI made some changes in fe-print.c's code that decides whether a column\ncontains numeric data (and hence should be right justified), but it\ndidn't break the regress tests for me...\n\n> Do any other platforms run the tests without trouble?\n\nHPUX 9 here. This is a big-endian box; maybe a byte ordering issue?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 03 Sep 1999 11:48:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] main tree is (slightly) damaged "
},
{
"msg_contents": "> > I've got trouble in a *clean* checkout of this morning's current tree.\n> > Something in the system tables is not quite consistant:\n> > postgres=> \\d\n> > ERROR: nodeRead: Bad type 0\n> Yipes. Perhaps there is some platform-dependency in the md.c changes\n> I committed a couple days ago. I am not seeing any such problems,\n> but that doesn't prove a lot. How far back was the last version\n> that worked on your system?\n> > Do any other platforms run the tests without trouble?\n> HPUX 9 here. This is a big-endian box; maybe a byte ordering issue?\n\npostgres=> select * from pg_tables;\nERROR: nodeRead: Bad type 0\n\nI still see a problem. Did a \"make clean install; initdb\", as well as\na clean checkout of the current source tree from cvsup.\n\nIs anyone else running regression tests besides the Toms? Can someone\nreport success on a linux box? Don't know where the problems are\ncoming from...\n\nI'm sure I ran the regression tests around the v6.5.1 release date,\nbut haven't been doing code development since then (working on docs\ninstead) so haven't been testing...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 08 Sep 1999 03:01:42 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] main tree is (slightly) damaged"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> postgres=> select * from pg_tables;\n> ERROR: nodeRead: Bad type 0\n\n> I still see a problem. Did a \"make clean install; initdb\", as well as\n> a clean checkout of the current source tree from cvsup.\n\nDrat. I was really hoping that you'd just forgotten initdb --- the\nparsetree changes I made a couple weeks ago could have explained this,\nbut not if you initdb'd.\n\nThe failure is presumably coming from an attempt to read a stored rule\nor default-value clause that's not stored in the format that the read\nprocedures are expecting. I'm guessing that there is a node write proc\nthat's not the inverse of the corresponding node read proc, and you\nhappen to have a rule or default that has the right kind of node in it\nto expose the bug.\n\nCould you burrow in with a debugger and find out more about the rule or\ndefault that's triggering the error?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Sep 1999 09:42:31 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] main tree is (slightly) damaged "
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> postgres=> select * from pg_tables;\n> ERROR: nodeRead: Bad type 0\n\nAh-hah. I was able to duplicate this problem on a local Linux box.\nIt seems that some implementations of vsnprintf() return -1 when the\ndata doesn't fit in the available space, rather than the size of the\navailable space as the man page specifies. This broke my recent\nrevision of stringinfo.c. Grumble. Will commit a fix shortly.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Sep 1999 12:16:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] main tree is (slightly) damaged "
}
] |
[
{
"msg_contents": ">> \"Ansley, Michael\" <[email protected]> writes:\n>> > I have a bit of a problem with reading this: a > -2 correctly,\n>> > while not reading this: a>-2 correctly, because that implies that you\nare\n>> > using the space as a precedence operator. This should be done by\nbraces.\n>> \n>> Not at all: this is a strictly lexical issue (where do we divide the\n>> input into tokens) and whitespace has been considered a reasonable\n>> lexical separator for years. Furthermore, SQL already depends on\n>> whitespace to separate tokens that are made of letters and digits.\n>> You can't spell \"SELECT\" as \"SEL ECT\", nor \"SELECT f1\" as \"SELECTf1\",\n>> nor does \"SELECT 1 2;\" mean \"SELECT 12;\". So it seems perfectly\n>> reasonable to me to use whitespace to separate operator names when\n>> there would otherwise be ambiguity about what's meant.\nPoint taken. So, if the spaces are used, then a>-2 is not the same as a>-\n2. The latter should then generate an error, right?\n\n<snip>\n\n>> I think it would be a really bad idea for the lexical \n>> analysis to depend on whether or not particular operator names \n>> are defined, for the same reasons that lexical analysis of word \n>> tokens doesn't depend on whether\n>> there are keywords/table names/field names that match those tokens.\n>> You get into circularity problems very quickly if you do that.\n>> Language designers learned not to do that in the sixties...\nYes. Another point taken.\n\nMikeA\n",
"msg_date": "Fri, 3 Sep 1999 17:03:05 +0200 ",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Postgres' lexer "
},
{
"msg_contents": "> Point taken. So, if the spaces are used, then a>-2 is not the same as a>-\n> 2. The latter should then generate an error, right?\n\nIt wasn't real clear where you intended to insert whitespace in this\nexample... but in any case, it might or might not generate an error\ndepending on what operators have been defined. Both \"a >- 2\" (three\ntokens) and \"a > - 2\" (four tokens) might be legal expressions.\nIf they are not, it's not the lexer's job to figure that out.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 03 Sep 1999 11:35:18 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Postgres' lexer "
}
] |
[
{
"msg_contents": "If an elog(ERROR) occurs while the reference count of a relcache entry\nis positive, there is no mechanism to set the reference count back to\nzero during abort cleanup. Result: the relcache entry has a permanent\npositive refcnt.\n\nThis has a number of bad effects, but one of the worst is that\nRelationFlushRelation won't flush the relcache entry if called\ndue to an sinval update signal received during a transaction\n(because RelationIdInvalidateRelationCacheByRelationId will set\nonlyFlushReferenceCountZero to TRUE). This bug can be exhibited\nwithout even using more than one backend. With REL6_5 sources:\n\nregression=> create table zz (f1 int4);\nCREATE\nregression=> update zz set f1 = 0;\nUPDATE 0\nregression=> alter table zz add column f2 int4;\nADD\nregression=> update zz set f2 = 0;\nUPDATE 0\n-- so far so good, but now trigger a deliberate error at a place where\n-- f3 will be open:\nregression=> update zz set f3 = 0;\nERROR: Relation 'zz' does not have attribute 'f3'\n-- now f3 has refcnt 1, so the sinval resulting from this ALTER TABLE\n-- will fail to flush the relcache entry:\nregression=> alter table zz add column f3 int4;\nADD\n-- with the result that the backend still thinks zz has two attributes:\nregression=> update zz set f3 = 0;\nERROR: Relation 'zz' does not have attribute 'f3'\nregression=> select * from zz;\nf1|f2\n--+--\n(0 rows)\n\n\nI believe that relcache.c should provide a routine (to be called during\nabort cleanup) that will reset the refcnts of all relcache entries to\n0 for a normal rel and 1 for a nailed-in rel. It might be a good idea\nto call this routine during normal xact commit, too, just in case\nsomeone forgets to decrement a refcnt they've incremented.\n\nMy earlier idea of having RelationFlushRelation rebuild the cache entry\nif it couldn't flush it would also mask this particular symptom. But\nthat wouldn't cure the gradual relcache growth that is likely to occur\nbecause of refcnt leakage. 
I still think we probably want to do that\ntoo, though.\n\nComments? Better ideas?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 03 Sep 1999 18:42:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "relcache.c leaks refcnts"
},
{
"msg_contents": "> I believe that relcache.c should provide a routine (to be called during\n> abort cleanup) that will reset the refcnts of all relcache entries to\n> 0 for a normal rel and 1 for a nailed-in rel. It might be a good idea\n> to call this routine during normal xact commit, too, just in case\n> someone forgets to decrement a refcnt they've incremented.\n> \n> My earlier idea of having RelationFlushRelation rebuild the cache entry\n> if it couldn't flush it would also mask this particular symptom. But\n> that wouldn't cure the gradual relcache growth that is likely to occur\n> because of refcnt leakage. I still think we probably want to do that\n> too, though.\n\nYes, amazing it was this broken. Clearing it makes sense.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 5 Sep 1999 22:34:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] relcache.c leaks refcnts"
}
] |
[
{
"msg_contents": "Hi all again,\n\nTom, _thanks_ for information! I really appreciated it. I managed to\ncreate a function which can be used to append or delete elems in an\ninteger array. I needed it because I wanted to somehow solve the\nuser-group relationship problems with Postgres.\n\nIf you are interested, I could extend its functionality to other types\nas well and send this small pack to you. For this, however I should know\nmore about how Postgres manages types internally...\n\nRegards,\nPeter Blazso\n\n",
"msg_date": "Sat, 04 Sep 1999 00:57:29 +0200",
"msg_from": "Peter Blazso <[email protected]>",
"msg_from_op": true,
"msg_subject": "array manipulations"
},
{
"msg_contents": "Peter Blazso <[email protected]> writes:\n> If you are interested, I could extend its functionality to other types\n> as well and send this small pack to you. For this, however I should know\n> more about how Postgres manages types internally...\n\nI think it should be possible to make a type-independent version of that\ncode, and if you want to do so it'd be a great extension.\n\nHow does it look to the user? Something like\n\n\tUPDATE table SET arrayfield = arrayInsert(arrayfield, index, newval)\n\n\tUPDATE table SET arrayfield = arrayDelete(arrayfield, index)\n\nI suppose? What do you do about multi-dimensional arrays?\n\nOne thing a number of people have complained about is that the natural\nway to extend an array is\n\n\tUPDATE table SET arrayfield[n+1] = newval\n\nif arrayfield currently has n entries. The array assignment code ought\nto handle this case but doesn't. I don't think it would be a huge fix\nbut I haven't looked at the code enough to understand what would need\nto change.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 04 Sep 1999 11:08:23 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] array manipulations "
},
{
"msg_contents": "Tom Lane wrote:\n\n> I think it should be possible to make a type-independent version of that\n> code, and if you want to do so it'd be a great extension.\n\nI should have much more spare time but I'll try... :-)\n\n> How does it look to the user? Something like\n>\n> UPDATE table SET arrayfield = arrayInsert(arrayfield, index, newval)\n> UPDATE table SET arrayfield = arrayDelete(arrayfield, index)\n\nNot exactly. My functions don't use indices yet and they still work only on one\ndimensional 'int' arrays. You can insert a new value only as the last elem and\ndelete all values from the array that match a given integer. They can be used\nlike below:\n\n UPDATE table SET arrayfield = array_app_int(arrayfield, newval);\n UPDATE table SET arrayfield = array_del_int(arrayfield, matchval);\n\nThis time I focused only on user additions/deletions to/from a group in\n'pg_group', however the code can be extended easily to do other things as well.\n\n> What do you do about multi-dimensional arrays?\n\nUnfortunately nothing, yet. This code is still an intro into array\nmanipulations.\n\n> One thing a number of people have complained about is that the natural\n> way to extend an array is\n>\n> UPDATE table SET arrayfield[n+1] = newval\n>\n> if arrayfield currently has n entries. The array assignment code ought\n> to handle this case but doesn't. I don't think it would be a huge fix\n> but I haven't looked at the code enough to understand what would need\n> to change.\n\nIf you want I can send you these functions accompanied by some notes and\ncomments I just did ...\n\nPeter Blazso\n\n\n",
"msg_date": "Sat, 04 Sep 1999 21:27:42 +0200",
"msg_from": "Peter Blazso <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] array manipulations"
}
] |
[
{
"msg_contents": "I found weird behavior with temp tables.\n\ntest=> create table u1(i int);\nCREATE\ntest=> insert into u1 values(1);\nINSERT 3408201 1\ntest=> insert into u1 values(1);\nINSERT 3408202 1\ntest=> create temp table u1(i int primary key);\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'u1_pkey' for table 'u1'\nNOTICE: trying to delete a reldesc that does not exist.\nNOTICE: trying to delete a reldesc that does not exist.\nCREATE\n\nAre these notices normal?\n\nNext I exited the session and start psql again.\n\ntest=> \nEOF\n[t-ishii@ext16 Chapter3]$ !!\npsql test\nWelcome to the POSTGRESQL interactive sql monitor:\n Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n[PostgreSQL 6.5.1 on powerpc-unknown-linux-gnu, compiled by gcc egcs-2.90.25 980302 (egcs-1.0.2 prerelease)]\n\n type \\? for help on slash commands\n type \\q to quit\n type \\g or terminate with semicolon to execute query\n You are currently connected to the database: test\n\ntest=> create temp table u1(i int primary key);\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'u1_pkey' for table 'u1'\nERROR: Cannot create unique index. Table contains non-unique values\n\nWhat's this? I thought temp tables completely mask persistent tables.\n---\nTatsuo Ishii\n\n",
"msg_date": "Sat, 04 Sep 1999 23:04:31 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "temp table oddness?"
},
{
"msg_contents": "> I found weird behavior with temp tables.\n> \n> test=> create table u1(i int);\n> CREATE\n> test=> insert into u1 values(1);\n> INSERT 3408201 1\n> test=> insert into u1 values(1);\n> INSERT 3408202 1\n> test=> create temp table u1(i int primary key);\n> NOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'u1_pkey' for table 'u1'\n> NOTICE: trying to delete a reldesc that does not exist.\n> NOTICE: trying to delete a reldesc that does not exist.\n> CREATE\n> \n> Are these notices normal?\n\nNot normal. This works:\n\t\n\ttest=> create table u1(i int);\n\tCREATE\n\ttest=> insert into u1 values(1);\n\tINSERT 18697 1\n\ttest=> insert into u1 values(1);\n\tINSERT 18698 1\n\ttest=> create temp table u1(i int);\n\tCREATE\n\ttest=> create unique index i_u1 on u1(i);\n\tCREATE\n\nBacktrace shows:\n\t\n\t#0 elog (lev=0, \n\t fmt=0x81700e7 \"trying to delete a reldesc that does not exist.\")\n\t at elog.c:75\n\t#1 0x812a1f6 in RelationFlushRelation (relationPtr=0x8043510, \n\t onlyFlushReferenceCountZero=0) at relcache.c:1262\n\t#2 0x812a6c8 in RelationPurgeLocalRelation (xactCommitted=1 '\\001')\n\t at relcache.c:1533\n\t#3 0x8086c3f in CommitTransaction () at xact.c:954\n\t#4 0x8086e2c in CommitTransactionCommand () at xact.c:1172\n\t#5 0x80ff559 in PostgresMain (argc=4, argv=0x80475a8, real_argc=4, \n\t real_argv=0x80475a8) at postgres.c:1654\n\t#6 0x80b619c in main (argc=4, argv=0x80475a8) at main.c:102\n\t#7 0x80607fc in __start ()\n\nWhat I don't understand why the PRIMARY is different than creating the\nindex manually... 
OK, got the reason:\n\t\n\ttest=> create table u1(i int);\n\tCREATE\n\ttest=> insert into u1 values(1);\n\tINSERT 18889 1\n\ttest=> insert into u1 values(1);\n\tINSERT 18890 1\n\ttest=> begin;\n\tBEGIN\n\ttest=> create temp table u1(i int);\n\tCREATE\n\ttest=> create unique index i_u1 on u1(i);\n\tCREATE\n\ttest=> end;\n\tNOTICE: trying to delete a reldesc that does not exist.\n\tNOTICE: trying to delete a reldesc that does not exist.\n\tEND\n\nThe cause is that the index creation is happening in the same\ntransaction as the create of the temp table. Any comments on a cause?\nTom Lane's cache changes may address this.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 4 Sep 1999 10:57:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] temp table oddness?"
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> I found weird behavior with temp tables.\n> test=> create table u1(i int);\n> CREATE\n> test=> insert into u1 values(1);\n> INSERT 3408201 1\n> test=> insert into u1 values(1);\n> INSERT 3408202 1\n> test=> create temp table u1(i int primary key);\n> NOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'u1_pkey' for table 'u1'\n> NOTICE: trying to delete a reldesc that does not exist.\n> NOTICE: trying to delete a reldesc that does not exist.\n> CREATE\n\n> Are these notices normal?\n\nNo --- looks like something wrong with relcache shared-invalidation.\nFWIW, they do not occur with the new relcache code I'm currently\ntesting. Hope to commit this stuff today.\n\n> Next I exited the session and start psql again.\n\n> test=> create temp table u1(i int primary key);\n> NOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'u1_pkey' for table 'u1'\n> ERROR: Cannot create unique index. Table contains non-unique values\n\n> What's this? I thought temp tables completely mask persistent tables.\n\nI still get this one, however. Odd. Apparently, a temp table will\nsuccessfully mask a regular table created earlier in the same psql\nsession, but *not* one that's been created in a different psql session.\nOver to you, Bruce...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 04 Sep 1999 11:35:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] temp table oddness? "
},
{
"msg_contents": "Man, this example has bugs just crawling all over it.\n\nI did\n1. create plain table u1, insert \"1\" twice\n2. start new backend\n3.\nregression=> create temp table u1(i int primary key);\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'u1_pkey' for table 'u1'\nERROR: Cannot create unique index. Table contains non-unique values\nregression=> drop table u1;\nDROP\nregression=> \\q\n\nAlthough psql quits cleanly enough, the underlying backend has\ncoredumped, as you will discover if you have any other active backends.\nThe dump is an assert failure at\n\n#6 0x15bb1c in ExceptionalCondition (\n conditionName=0x2fdcc \"!((bool)((void*)(tuple) != ((void *)0)))\",\n exceptionP=0x40009a58, detail=0x0, fileName=0x7ae4 \"\\003\", lineNumber=1127)\n at assert.c:72\n#7 0x9c4a0 in index_destroy (indexId=150537) at index.c:1127\n#8 0x15b8f0 in remove_all_temp_relations () at temprel.c:97\n#9 0x113f64 in shmem_exit (code=0) at ipc.c:190\n#10 0x113e64 in proc_exit (code=0) at ipc.c:136\n#11 0x12244c in PostgresMain (argc=5, argv=0x40003090, real_argc=5,\n real_argv=0x7b033324) at postgres.c:1614\n\nApparently, temp index creation registers the temp index with temprel.c\nbefore the index is filled. Then, the \"duplicate values\" error aborts\ncreation of the index --- but the entry in temprel.c's list is still\nthere. When remove_all_temp_relations tries to delete the index,\nkaboom.\n\nAlthough this particular error presumably won't be possible after we\nfix the problem that the index is looking at the wrong underlying table,\nthere are other possible errors in index creation, so I think we gotta\ndeal with this problem too.\n\nA quick and dirty fix might be to make index_destroy return quietly\nif it's asked to destroy a nonexistent index. 
A better fix would be\nto make remove_all_temp_relations check whether the rel it's trying to\ndestroy still exists --- this should happen for plain rels as well as\nindexes, probably, since heap_destroy_with_catalog doesn't like being\nasked to destroy a nonexistent table either.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 04 Sep 1999 11:57:49 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] temp table oddness? "
},
{
"msg_contents": "Here's another case that doesn't work too well:\n\nregression=> create table u1(i int);\nCREATE\nregression=> insert into u1 values(1);\nINSERT 150665 1\nregression=> insert into u1 values(1);\nINSERT 150666 1\nregression=> create temp table u1(i int);\nCREATE\nregression=> create unique index i_u1 on u1(i);\nCREATE\nregression=> select * from u1;\t\t-- yup, temp table is empty\ni\n-\n(0 rows)\n\nregression=> drop table u1;\t\t-- drop temp table\nDROP\nregression=> select * from u1;\t\t-- ok, we're back to permanent u1\ni\n-\n1\n1\n(2 rows)\n\nregression=> begin;\nBEGIN\nregression=> create temp table u1(i int);\nCREATE\nregression=> create unique index i_u1 on u1(i);\nERROR: Cannot create index: 'i_u1' already exists\n-- apparently, dropping a temp table doesn't drop its temp indexes?\nregression=> end;\nEND\nregression=> select * from u1;\nERROR: cannot find attribute 1 of relation pg_temp.24335.3\n-- oops, what's causing this? Shouldn't the xact have been rolled back\n-- due to error?\nregression=> \\q\n-- backend coredumps on quit\n\n\nLooks like indexes on temp tables need some serious work :-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 04 Sep 1999 12:10:04 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] temp table oddness? "
},
{
    "msg_contents": "> Here's another case that doesn't work too well:\n> \n> regression=> create table u1(i int);\n> CREATE\n> regression=> insert into u1 values(1);\n> INSERT 150665 1\n> regression=> insert into u1 values(1);\n> INSERT 150666 1\n> regression=> create temp table u1(i int);\n> CREATE\n> regression=> create unique index i_u1 on u1(i);\n> CREATE\n> regression=> select * from u1;\t\t-- yup, temp table is empty\n> i\n> -\n> (0 rows)\n> \n> regression=> drop table u1;\t\t-- drop temp table\n> DROP\n> regression=> select * from u1;\t\t-- ok, we're back to permanent u1\n> i\n> -\n> 1\n> 1\n> (2 rows)\n\nGee, I was doing so well up to this point.\n\n> \n> regression=> begin;\n> BEGIN\n> regression=> create temp table u1(i int);\n> CREATE\n> regression=> create unique index i_u1 on u1(i);\n> ERROR: Cannot create index: 'i_u1' already exists\n> -- apparently, dropping a temp table doesn't drop its temp indexes?\n> regression=> end;\n> END\n> regression=> select * from u1;\n> ERROR: cannot find attribute 1 of relation pg_temp.24335.3\n> -- oops, what's causing this? Shouldn't the xact have been rolled back\n> -- due to error?\n> regression=> \\q\n> -- backend coredumps on quit\n> \n> \n> Looks like indexes on temp tables need some serious work :-(\n\nIt is the existence of the temp table in transactions that is causing a\nproblem.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 4 Sep 1999 12:13:12 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] temp table oddness?"
},
{
"msg_contents": "> Man, this example has bugs just crawling all over it.\n> \n> I did\n> 1. create plain table u1, insert \"1\" twice\n> 2. start new backend\n> 3.\n> regression=> create temp table u1(i int primary key);\n> NOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'u1_pkey' for table 'u1'\n> ERROR: Cannot create unique index. Table contains non-unique values\n> regression=> drop table u1;\n> DROP\n> regression=> \\q\n\nAgain, it is because the index is done in the same transaction.\n\n> \n> Although psql quits cleanly enough, the underlying backend has\n> coredumped, as you will discover if you have any other active backends.\n> The dump is an assert failure at\n> \n> #6 0x15bb1c in ExceptionalCondition (\n> conditionName=0x2fdcc \"!((bool)((void*)(tuple) != ((void *)0)))\",\n> exceptionP=0x40009a58, detail=0x0, fileName=0x7ae4 \"\\003\", lineNumber=1127)\n> at assert.c:72\n> #7 0x9c4a0 in index_destroy (indexId=150537) at index.c:1127\n> #8 0x15b8f0 in remove_all_temp_relations () at temprel.c:97\n> #9 0x113f64 in shmem_exit (code=0) at ipc.c:190\n> #10 0x113e64 in proc_exit (code=0) at ipc.c:136\n> #11 0x12244c in PostgresMain (argc=5, argv=0x40003090, real_argc=5,\n> real_argv=0x7b033324) at postgres.c:1614\n> \n> Apparently, temp index creation registers the temp index with temprel.c\n> before the index is filled. Then, the \"duplicate values\" error aborts\n> creation of the index --- but the entry in temprel.c's list is still\n> there. When remove_all_temp_relations tries to delete the index,\n> kaboom.\n\nYep. Wouldn't the best way be to have the temp system record the\ntransaction id used, and to invalidate all temp entries associated with\nan aborted transaction. 
That is how the cache code works, so it seems\nit should be extended to the temp code.\n\n> \n> Although this particular error presumably won't be possible after we\n> fix the problem that the index is looking at the wrong underlying table,\n> there are other possible errors in index creation, so I think we gotta\n> deal with this problem too.\n> \n> A quick and dirty fix might be to make index_destroy return quietly\n> if it's asked to destroy a nonexistent index. A better fix would be\n> to make remove_all_temp_relations check whether the rel it's trying to\n> destroy still exists --- this should happen for plain rels as well as\n> indexes, probably, since heap_destroy_with_catalog doesn't like being\n> asked to destroy a nonexistent table either.\n\nI say let's leave these alone. Their aborting on removal of\nnon-existent stuff helps us see bugs that could be masked by the proposed\nfix.\n\nThe temp table code was very small, and relies on the cache code, and\nthe fact that all relname lookups happen through the cache. I am surprised it\nhas worked as well as it has.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 4 Sep 1999 12:16:47 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] temp table oddness?"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Yep. Wouldn't the best way be to have the temp system record the\n> transaction id used, and to invalidate all temp entries associated with\n> an aborted transaction. That is how the cache code works, so it seems\n> it should be extended to the temp code.\n\nYeah, that would work -- add an xact abort cleanup routine that goes\nthrough the temprel list and removes entries added during the current\ntransaction.\n\nAFAICS this only explains the coredump-at-exit business, though.\nI'm particularly baffled by that\nERROR: cannot find attribute 1 of relation pg_temp.24335.3\nin my last example --- do you understand why that's happening?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 04 Sep 1999 12:22:20 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] temp table oddness? "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Yep. Wouldn't the best way be to have the temp system record the\n> > transaction id used, and to invalidate all temp entries associated with\n> > an aborted transaction. That is how the cache code works, so it seems\n> > it should be extended to the temp code.\n> \n> Yeah, that would work -- add an xact abort cleanup routine that goes\n> through the temprel list and removes entries added during the current\n> transaction.\n> \n> AFAICS this only explains the coredump-at-exit business, though.\n> I'm particularly baffled by that\n> ERROR: cannot find attribute 1 of relation pg_temp.24335.3\n> in my last example --- do you understand why that's happening?\n\nWho knows. Once it gets messed up, anything can happen. The problem\nwith indexes created in the same transaction as the temp table still is\na problem, though you say your new cache code fixes that.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 4 Sep 1999 12:26:42 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] temp table oddness?"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Who knows. Once it gets messed up, anything can happen. The problem\n> with indexes created in the same transaction as the temp table still is\n> a problem, though you say your new cache code fixes that.\n\nNo, I didn't say that. The weird \"notice\" isn't coming out any more,\nbut I'm still seeing all these other bugs. It looks to me like there\nare problems with ensuring that an index on a temp table is (a) temp\nitself, and (b) built against the temp table and not a permanent table\nof the same name.\n\nI don't really understand how temp tables are implemented and whether\nrelcache.c needs to be aware of them --- is there documentation\nsomewhere?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 04 Sep 1999 12:44:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] temp table oddness? "
},
{
    "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Who knows. Once it gets messed up, anything can happen. The problem\n> > with indexes created in the same transaction as the temp table still is\n> > a problem, though you say your new cache code fixes that.\n> \n> No, I didn't say that. The weird \"notice\" isn't coming out any more,\n> but I'm still seeing all these other bugs. It looks to me like there\n> are problems with ensuring that an index on a temp table is (a) temp\n> itself, and (b) built against the temp table and not a permanent table\n> of the same name.\n\nI thought this worked. In the regression tests, temp.sql has:\n\n\tCREATE TABLE temptest(col int);\n\t\n\tCREATE INDEX i_temptest ON temptest(col);\n\t\n\tCREATE TEMP TABLE temptest(col int);\n\t\n\tCREATE INDEX i_temptest ON temptest(col);\n\t\n\tDROP INDEX i_temptest;\n\t\n\tDROP TABLE temptest;\n\t\n\tDROP INDEX i_temptest;\n\t\n\tDROP TABLE temptest;\n\nand works fine.\n\n> \n> I don't really understand how temp tables are implemented and whether\n> relcache.c needs to be aware of them --- is there documentation\n> somewhere?\n\nThat's a joke, right? :-)\n\ntemprel.c has:\n\n/*\n * This implements temp tables by modifying the relname cache lookups\n * of pg_class.\n * When a temp table is created, a linked list of temp table tuples is\n * stored here. When a relname cache lookup is done, references to user-named\n * temp tables are converted to the internal temp table names.\n *\n */\n\nget_temp_rel_by_name() is the workhorse. You can see the call to it in\nClassNameIndexScan(), which makes the cache think the temp rel is a real\nrelation, and not just a temp one. Other access to the relation via oid\nremains the same.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 4 Sep 1999 13:27:50 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] temp table oddness?"
},
{
    "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Yep. Wouldn't the best way be to have the temp system record the\n> > transaction id used, and to invalidate all temp entries associated with\n> > an aborted transaction. That is how the cache code works, so it seems\n> > it should be extended to the temp code.\n> \n> Yeah, that would work -- add an xact abort cleanup routine that goes\n> through the temprel list and removes entries added during the current\n> transaction.\n> \n> AFAICS this only explains the coredump-at-exit business, though.\n> I'm particularly baffled by that\n> ERROR: cannot find attribute 1 of relation pg_temp.24335.3\n> in my last example --- do you understand why that's happening?\n> \n> \t\t\tregards, tom lane\n> \n\nI have added temp invalidation code for aborted transactions:\n\n---------------------------------------------------------------------------\n\nOld behavior:\n\t\n\ttest=> begin;\n\tBEGIN\n\ttest=> create temp table test (x int);\n\tCREATE\n\ttest=> create index i_test on test(x);\n\tCREATE\n\ttest=> abort;\n\tNOTICE: trying to delete a reldesc that does not exist.\n\tNOTICE: trying to delete a reldesc that does not exist.\n\tABORT\n\ttest=> create temp table test (x int);\n\tERROR: Relation 'test' already exists\n\n---------------------------------------------------------------------------\n\nNew behavior:\n\n\ttest=> begin;\n\tBEGIN\n\ttest=> create temp table test (x int);\n\tCREATE\n\ttest=> create index i_test on test(x);\n\tCREATE\n\ttest=> abort;\n\tNOTICE: trying to delete a reldesc that does not exist.\n\tNOTICE: trying to delete a reldesc that does not exist.\n\tABORT\n\ttest=> create temp table test(x int);\n\tCREATE\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 4 Sep 1999 15:52:50 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] temp table oddness?"
},
{
    "msg_contents": "Bruce Momjian <[email protected]> writes:\n> I have added temp invalidation code for aborted transactions:\n\n> New behavior:\n\n> \ttest=> begin;\n> \tBEGIN\n> \ttest=> create temp table test (x int);\n> \tCREATE\n> \ttest=> create index i_test on test(x);\n> \tCREATE\n> \ttest=> abort;\n> \tNOTICE: trying to delete a reldesc that does not exist.\n> \tNOTICE: trying to delete a reldesc that does not exist.\n> \tABORT\n> \ttest=> create temp table test(x int);\n> \tCREATE\n\nOK, cool. I think I know where to fix those \"NOTICES\", too:\nthe relcache indexes temp relations by their real names, so\nRelationNameGetRelation() ought to substitute the real name before\nprobing the cache. As it stands you wind up with two relcache entries\nfor the temp table, which is bad. Working on it now.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 04 Sep 1999 17:13:44 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] temp table oddness? "
},
{
"msg_contents": "> I found weird behavior with temp tables.\n> \n> test=> create table u1(i int);\n> CREATE\n> test=> insert into u1 values(1);\n> INSERT 3408201 1\n> test=> insert into u1 values(1);\n> INSERT 3408202 1\n> test=> create temp table u1(i int primary key);\n> NOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'u1_pkey' for table 'u1'\n> NOTICE: trying to delete a reldesc that does not exist.\n> NOTICE: trying to delete a reldesc that does not exist.\n> CREATE\n> \n> Are these notices normal?\n\nOK, looks fixed. Tatsuo, please test current cvs tree. Thanks.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 4 Sep 1999 18:14:45 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] temp table oddness?"
},
{
"msg_contents": "> > I found weird behavior with temp tables.\n> > \n> > test=> create table u1(i int);\n> > CREATE\n> > test=> insert into u1 values(1);\n> > INSERT 3408201 1\n> > test=> insert into u1 values(1);\n> > INSERT 3408202 1\n> > test=> create temp table u1(i int primary key);\n> > NOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'u1_pkey' for table 'u1'\n> > NOTICE: trying to delete a reldesc that does not exist.\n> > NOTICE: trying to delete a reldesc that does not exist.\n> > CREATE\n> > \n> > Are these notices normal?\n> \n> OK, looks fixed. Tatsuo, please test current cvs tree. Thanks.\n\nLet me add Tom Lane did much of the fixing too.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 4 Sep 1999 18:21:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] temp table oddness?"
},
{
"msg_contents": "OK, I think that set of issues is solved. All the temp-table examples\nTatsuo and I gave this morning work with the current sources, and I\nthink shared invalidation of relcache entries is pretty solid too.\n\nWhat we have at this point is a set of tightly interwoven changes in\nrelcache.c, temprel.c, sinval.c, and the syscache stuff. If we want to\ncommit these changes into 6.5.*, it's all-or-nothing; I don't think we\ncan extract just part of the changes. I'm real hesitant to do that.\nThese are good fixes, I believe, but I don't yet trust 'em enough to put\ninto a stable release. Can we live with the temp table misbehaviors as\n\"known bugs\" for 6.5.* ?\n\nThe other thing we'd have to do if we don't back-patch these changes\nis remove the FileUnlink call in mdtruncate() in REL6_5, which would\nmean vacuum still won't remove excess segment files in 6.5.*. It would\ntruncate 'em to zero length, though, so the deficiency isn't horrible\nAFAICS.\n\nMy inclination is to do that, and leave the other problems as unfixed\nbugs for REL6_5. The alternative would be to back-patch all these\nchanges and delay 6.5.2 release for a while while people beta-test.\nComments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 04 Sep 1999 18:33:16 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] temp table oddness? "
},
{
"msg_contents": "> OK, I think that set of issues is solved. All the temp-table examples\n> Tatsuo and I gave this morning work with the current sources, and I\n> think shared invalidation of relcache entries is pretty solid too.\n> \n> What we have at this point is a set of tightly interwoven changes in\n> relcache.c, temprel.c, sinval.c, and the syscache stuff. If we want to\n> commit these changes into 6.5.*, it's all-or-nothing; I don't think we\n> can extract just part of the changes. I'm real hesitant to do that.\n> These are good fixes, I believe, but I don't yet trust 'em enough to put\n> into a stable release. Can we live with the temp table misbehaviors as\n> \"known bugs\" for 6.5.* ?\n\nI have already cast my vote for leaving them out of 6.5.*.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 4 Sep 1999 19:03:03 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] temp table oddness?u"
},
{
    "msg_contents": "> > I found weird behavior with temp tables.\n> > \n> > test=> create table u1(i int);\n> > CREATE\n> > test=> insert into u1 values(1);\n> > INSERT 3408201 1\n> > test=> insert into u1 values(1);\n> > INSERT 3408202 1\n> > test=> create temp table u1(i int primary key);\n> > NOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'u1_pkey' for table 'u1'\n> > NOTICE: trying to delete a reldesc that does not exist.\n> > NOTICE: trying to delete a reldesc that does not exist.\n> > CREATE\n> > \n> > Are these notices normal?\n> \n> OK, looks fixed. Tatsuo, please test current cvs tree. Thanks.\n\nNow the problems I was complaining about have gone. Thanks!\n---\nTatsuo Ishii\n",
"msg_date": "Sun, 05 Sep 1999 11:24:14 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] temp table oddness? "
},
{
    "msg_contents": "I'm interested in learning how to hack; any suggestions on how to go about it?\nBruce Momjian <[email protected]> wrote in message\nnews:[email protected]...\n> > I found weird behavior with temp tables.\n> >\n> > test=> create table u1(i int);\n> > CREATE\n> > test=> insert into u1 values(1);\n> > INSERT 3408201 1\n> > test=> insert into u1 values(1);\n> > INSERT 3408202 1\n> > test=> create temp table u1(i int primary key);\n> > NOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'u1_pkey'\nfor table 'u1'\n> > NOTICE: trying to delete a reldesc that does not exist.\n> > NOTICE: trying to delete a reldesc that does not exist.\n> > CREATE\n> >\n> > Are these notices normal?\n>\n> Not normal. This works:\n>\n> test=> create table u1(i int);\n> CREATE\n> test=> insert into u1 values(1);\n> INSERT 18697 1\n> test=> insert into u1 values(1);\n> INSERT 18698 1\n> test=> create temp table u1(i int);\n> CREATE\n> test=> create unique index i_u1 on u1(i);\n> CREATE\n>\n> Backtrace shows:\n>\n> #0 elog (lev=0,\n> fmt=0x81700e7 \"trying to delete a reldesc that does not exist.\")\n> at elog.c:75\n> #1 0x812a1f6 in RelationFlushRelation (relationPtr=0x8043510,\n> onlyFlushReferenceCountZero=0) at relcache.c:1262\n> #2 0x812a6c8 in RelationPurgeLocalRelation (xactCommitted=1 '\\001')\n> at relcache.c:1533\n> #3 0x8086c3f in CommitTransaction () at xact.c:954\n> #4 0x8086e2c in CommitTransactionCommand () at xact.c:1172\n> #5 0x80ff559 in PostgresMain (argc=4, argv=0x80475a8, real_argc=4,\n> real_argv=0x80475a8) at postgres.c:1654\n> #6 0x80b619c in main (argc=4, argv=0x80475a8) at main.c:102\n> #7 0x80607fc in __start ()\n>\n> What I don't understand why the PRIMARY is different than creating the\n> index manually... 
OK, got the reason:\n>\n> test=> create table u1(i int);\n> CREATE\n> test=> insert into u1 values(1);\n> INSERT 18889 1\n> test=> insert into u1 values(1);\n> INSERT 18890 1\n> test=> begin;\n> BEGIN\n> test=> create temp table u1(i int);\n> CREATE\n> test=> create unique index i_u1 on u1(i);\n> CREATE\n> test=> end;\n> NOTICE: trying to delete a reldesc that does not exist.\n> NOTICE: trying to delete a reldesc that does not exist.\n> END\n>\n> The cause is that the index creation is happening in the same\n> transaction as the create of the temp table. Any comments on a cause?\n> Tom Lane's cache changes may address this.\n>\n>\n> --\n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n",
"msg_date": "Tue, 7 Sep 1999 05:52:09 +0200",
"msg_from": "\"flo\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] temp table oddness?"
}
] |
[
{
"msg_contents": ">> \n>> > Point taken. So, if the spaces are used, then a>-2 is not \n>> the same as a>-\n>> > 2. The latter should then generate an error, right?\n>> \n>> It wasn't real clear where you intended to insert whitespace in this\n>> example... but in any case, it might or might not generate an error\n>> depending on what operators have been defined. Both \"a >- 2\" (three\n>> tokens) and \"a > - 2\" (four tokens) might be legal expressions.\n>> If they are not, it's not the lexer's job to figure that out.\nYes, and -2 is two tokens, whether it's - 2 or -2. Taking the unary minus\nand the number, and creating a single expression meaning two less than zero\nis not the lexer's job. Or is it? Or am I missing the plot?\n\nMikeA\n",
"msg_date": "Sat, 4 Sep 1999 20:47:14 +0200 ",
"msg_from": "\"Ansley, Michael\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Postgres' lexer "
}
] |
[
{
"msg_contents": "���������������������� ������������������ �������� ����������.\n���������������������� �������� ��������������������..������������ ��������������...!!\n^^",
"msg_date": "Sun, 5 Sep 1999 22:41:35 +0900 (KST)",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "=?EUC-KR?B?x8G3zrHXt6Ww+g==?= =?EUC-KR?B?sNTA0w==?=\n\t=?EUC-KR?B?uK69usauwNS0z7TZ?=."
}
] |
[
{
    "msg_contents": " I downloaded and built PostgreSQL on my Sparc2 running RedHat 6.0 (I've\ndone this before on my RH 6.0 x86 box). The compile/installation went fine.\nWhen the system comes up it starts the DB using the contrib/linux init\nscript.\n When I su to postgres and try to create a user or database the system\njust hangs there. I turned on debugging and checked the logs. The last\nline of the logs, before the pause, is \"InitPostgres.\"\n At this point the postmaster starts eating up cpu cycles (about 75% to\n80%). Everything grinds to a halt. I haven't let the process continue\nbecause this takes but a few seconds on my intel box. I haven't let this\n\"InitPostgres\" process complete as it doesn't seem like it's ever going to\ncomplete.\n\n Anyone have any insight into this problem?\n\n Damond\n\n\n",
"msg_date": "Sun, 05 Sep 1999 15:10:12 GMT",
"msg_from": "\"Damond Walker\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RH6.0/Sparc and PG 6.5.1"
}
] |
[
{
"msg_contents": "Pursuant to a phone conversation I had with Bruce, I added code this\nmorning to reject DROP TABLE or DROP INDEX inside a transaction block;\nthat is, you can't do BEGIN; DROP TABLE foo; END anymore. The reason\nfor rejecting this case is that we do the wrong thing if the transaction\nis later aborted. Following BEGIN; DROP TABLE foo; ABORT, the system\ntables will claim that foo is still valid (since the changes to them\nwere never committed) but we've already unlinked foo's physical file,\nand we can't get it back. Solution: only allow DROP TABLE outside\nBEGIN, so that the user can't try to change his mind later.\n\nHowever, on second thought I wonder if this cure is worse than the\ndisease. Will it be unreasonably hard to drop tables using client\ninterfaces that like to wrap everything in BEGIN/END? Plugging an\nobscure hole might not be worth that.\n\nA possible compromise is not to error out, but just to issue a NOTICE\nalong the lines of \"DROP TABLE is not undoable, so don't even think of\ntrying to abort now...\"\n\n(Of course, what would be really nice is if it just worked, but I don't\nsee any way to make that happen without major changes. Simply\npostponing the unlink to end of transaction isn't workable; consider\nBEGIN; DROP TABLE foo; CREATE TABLE foo; ...)\n\nAny thoughts? Will there indeed be a problem with JDBC or ODBC if we\nleave this error check in place?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 05 Sep 1999 18:17:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "DROP TABLE inside transaction block"
},
{
    "msg_contents": "> (Of course, what would be really nice is if it just worked, but I don't\n> see any way to make that happen without major changes. Simply\n> postponing the unlink to end of transaction isn't workable; consider\n> BEGIN; DROP TABLE foo; CREATE TABLE foo; ...)\n\nCan't you just rename to a unique name, maybe in another directory,\nsuch as:\n\n~pgsql/data/base/template1/sometable\n\nmoves to\n\n~pgsql/data/base/template1/pg_removals/postmasterpid/sometable\n\nAnd if there is an abort, move it back; if there is an end, delete it.\n\nPossible?\n\n\t\t\t\t\tMichael Simms\n",
"msg_date": "Sun, 5 Sep 1999 23:47:37 +0100 (BST)",
"msg_from": "Michael Simms <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DROP TABLE inside transaction block"
},
{
    "msg_contents": "Michael Simms <[email protected]> writes:\n>> (Of course, what would be really nice is if it just worked, but I don't\n>> see any way to make that happen without major changes. Simply\n>> postponing the unlink to end of transaction isn't workable; consider\n>> BEGIN; DROP TABLE foo; CREATE TABLE foo; ...)\n\n> Can't you just rename to a unique name, maybe in another directory,\n\nNot if other backends are also accessing the table. Remember that to\nmake this really work, the DROP would have to be invisible to other\nbackends until commit.\n\nI think that to make this work correctly, we'd have to give up naming\ntable datafiles after the tables, and use a table's OID or some such\nas its file name. Ugly, and a pain in the neck for debugging and\nmaintenance. And we'd still need to postpone the unlink till commit.\n\nThe amount of work needed seems vastly more than the feature is worth...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 05 Sep 1999 18:54:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] DROP TABLE inside transaction block "
},
{
    "msg_contents": "> \n> Michael Simms <[email protected]> writes:\n> >> (Of course, what would be really nice is if it just worked, but I don't\n> >> see any way to make that happen without major changes. Simply\n> >> postponing the unlink to end of transaction isn't workable; consider\n> >> BEGIN; DROP TABLE foo; CREATE TABLE foo; ...)\n> \n> > Can't you just rename to a unique name, maybe in another directory,\n> \n> Not if other backends are also accessing the table. Remember that to\n> make this really work, the DROP would have to be invisible to other\n> backends until commit.\n\nCould you not then:\n\nsend a notification to all other backends\n\nPut something into the table header so that any new backend that tries to use it\nis informed that the correct table is stored elsewhere.\n\nI don't know, I'm just throwing ideas here {:-)\n\n\t\t\t\t\t\tMichael Simms\n",
"msg_date": "Mon, 6 Sep 1999 00:11:36 +0100 (BST)",
"msg_from": "Michael Simms <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DROP TABLE inside transaction block"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Pursuant to a phone conversation I had with Bruce, I added code this\n> morning to reject DROP TABLE or DROP INDEX inside a transaction block;\n> that is, you can't do BEGIN; DROP TABLE foo; END anymore. The reason\n> for rejecting this case is that we do the wrong thing if the transaction\n> is later aborted. Following BEGIN; DROP TABLE foo; ABORT, the system\n> tables will claim that foo is still valid (since the changes to them\n> were never committed) but we've already unlinked foo's physical file,\n> and we can't get it back. Solution: only allow DROP TABLE outside\n> BEGIN, so that the user can't try to change his mind later.\n\nWhat if table was created inside BEGIN/END?\nAny reason to disallow DROP of local tables?\n\nVadim\n",
"msg_date": "Mon, 06 Sep 1999 09:09:47 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DROP TABLE inside transaction block"
},
{
    "msg_contents": "Tom Lane wrote:\n\n> \n> > Can't you just rename to a unique name, maybe in another directory,\n> \n> Not if other backends are also accessing the table. Remember that to\n> make this really work, the DROP would have to be invisible to other\n> backends until commit.\n> \n\nIs that really needed? Remember that a table's creation is not transparent\nto other users - when someone attempts to create a table, others,\nthough they can't see that table, cannot create a table with the same name.\nSo you can simply issue a draconian-level lock on a table being deleted.\nBut in any case it would need postponing the real killing until transaction\ncommit.\n\n> The amount of work needed seems vastly more than the feature is worth...\n\nI personally have a project in development which extensively uses\nthat feature. It is meant to be database restructuring 'on the fly'.\nIf you break that, it would be a big drawback to me. And I assume, not\nonly to me, because it would break the idea of a transaction itself. \nDatabase restructuring by software, not by hand, will be seriously\ndamaged.\n\n-- \nLeon.\n-------\nHe knows he'll never have to answer for any of his theories actually \nbeing put to test. If they were, they would be contaminated by reality.\n\n",
"msg_date": "Mon, 06 Sep 1999 16:51:46 +0500",
"msg_from": "Leon <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DROP TABLE inside transaction block"
},
{
"msg_contents": "Vadim Mikheev <[email protected]> writes:\n>> and we can't get it back. Solution: only allow DROP TABLE outside\n>> BEGIN, so that the user can't try to change his mind later.\n\n> What if table was created inside BEGIN/END?\n> Any reason to disallow DROP of local tables?\n\nNone, and in fact the code does allow that case, but I forgot to\nmention it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 06 Sep 1999 10:14:12 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] DROP TABLE inside transaction block "
},
{
"msg_contents": "Leon <[email protected]> writes:\n> Tom Lane wrote:\n>>>> Cant you just rename to a unique name, maybee in another directory,\n>> \n>> Not if other backends are also accessing the table. Remember that to\n>> make this really work, the DROP would have to be invisible to other\n>> backends until commit.\n\n> Is that really needed? Remember that table's creation is not transparent\n> to other users - when someone attempts to create a table, others,\n> though can't see that table, cannot create a table with the same name.\n> So you can simply issue a draconian-level lock on a table being deleted.\n\nThat's a good point --- we acquire exclusive lock anyway on a table\nabout to be deleted, so just holding that lock till end of transaction\nshould prevent other backends from trying to touch the table.\n\nSo someone could probably cobble together a real solution consisting of\nlocking the table and renaming the files to unique temp names at DROP\ntime, then either completing the drop and unlinking the files at commit\ntime, or re-renaming them at abort.\n\nThere are a bunch of subtleties to be dealt with though. A couple of\ngotchas I can think of offhand: better flush dirty buffers for the\ntarget rel before doing the rename, else another backend might try to\ndo it between DROP and COMMIT, and write to the wrong file name. The\nrenaming at abort time has to be done in the right order relative to\ndropping tables created during the xact, or else BEGIN; DROP TABLE foo;\nCREATE TABLE foo; ABORT won't work right. Currently, an attempt to\nlock a table always involves making a relcache entry first, and the\nrelcache will try to open the underlying files as soon as you do that,\nso other backends trying to touch the dying table for the first time\nwould get unexpected error messages. Probably a few other things.\n\nIn short, a lot of work for a very marginal feature. How many other\nDBMSes permit DROP TABLE to be rolled back? 
How many users care?\n\n> I personally have a project in development which extensively uses\n> that feature. It is meant to be database restructuring 'on the fly'.\n\nWhat do you mean by \"that feature\"? The ability to abort a DROP TABLE?\nWe have no such feature, and never have. If you just mean that you\nwant to issue DROP TABLE inside BEGIN/END, and you don't care about\nproblems that ensue if the transaction is aborted, then we could\nconsider downgrading the error report to a notice as I suggested\nyesterday.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 06 Sep 1999 10:44:00 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] DROP TABLE inside transaction block "
},
{
"msg_contents": "Tom Lane wrote:\n\n> \n> In short, a lot of work for a very marginal feature. How many other\n> DBMSes permit DROP TABLE to be rolled back? How many users care?\n> \n\nDon't know. But here is the idea: drop table rollback is needed in\nautomation of DB restructuring. There is no need of that in web or\n'custom' applications for that feature. It is only needed in complex,\ntwo-stage applications, when first stage manages the underlying DB\nstructure for the second. In other words, in big projects. If you\nare not very ambitious, you can get rid of that complication. I \npersonally can live without it, though with some redesign of my\nproject, and there will be no restructuring 'on the fly'.\n\n> > I personally have a project in development which extensively uses\n> > that feature. It is meant to be database restructuring 'on the fly'.\n> \n> What do you mean by \"that feature\"? The ability to abort a DROP TABLE?\n> We have no such feature, and never have. \n\nSadly I always supposed that rollback can work wonders and resurrect\na table killed in transaction. I was so sure it was so that no testing\nhad been done. It isn't mentioned in docs.\n\n-- \nLeon.\n-------\nHe knows he'll never have to answer for any of his theories actually \nbeing put to test. If they were, they would be contaminated by reality.\n\n",
"msg_date": "Mon, 06 Sep 1999 21:02:07 +0500",
"msg_from": "Leon <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DROP TABLE inside transaction block"
},
{
"msg_contents": "Here's some comments from a naive viewpoint (moving from InterBase to\nPostgreSQL):\n\nTom Lane wrote:\n\n> So someone could probably cobble together a real solution consisting of\n> locking the table and renaming the files to unique temp names at DROP\n> time, then either completing the drop and unlinking the files at commit\n> time, or re-renaming them at abort.\n\nWhy should all of this renaming stuff be necessary? I would expect all\nentities CREATEd in a transaction to live entirely in cache (or a temp file),\nand all DROPped entities to remain where they are until COMMIT time, at which\npoint DROPs should unlink and then CREATEs should create. Is this too hard?\n\n> In short, a lot of work for a very marginal feature. How many other\n> DBMSes permit DROP TABLE to be rolled back? How many users care?\n\nSybase documentation explicitly allows most forms of CREATE and DROP in\ntransactions. InterBase (which uses a versioning system much like\nPostgreSQL) definitely handles CREATE/DROP in transactions correctly, but you\ncan't access a newly created table until after COMMIT.\n\nWhat happens, in the current system if you want to make metadata changes in a\ntransaction and you make a typo that requires ABORTing the changes? If you\nhave to make all metadata changes outside of transactions, you lose safety at\nthe most fragile and critical level.\n\n",
"msg_date": "Mon, 06 Sep 1999 12:10:15 -0500",
"msg_from": "Evan Simpson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DROP TABLE inside transaction block"
},
{
"msg_contents": "Leon wrote:\n> \n> Tom Lane wrote:\n> \n> >\n> > In short, a lot of work for a very marginal feature. How many other\n> > DBMSes permit DROP TABLE to be rolled back? How many users care?\n> >\n> \n> Don't know. But here is the idea: drop table rollback is needed in\n> automation of DB restructuring.\n\nActually the underlying mechanics could be used for other things too,\nlike:\n\nALTER TABLE DROP COLUMN colname, or even changing the type of column,\nsay \nfrom int4 -> int8 -> float -> char -> varchar -> text ?\n\nI know that Oracle at least allows the latter but I'm not sure how \nit does that\n\n-------------\nHannu\n",
"msg_date": "Mon, 06 Sep 1999 21:29:20 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DROP TABLE inside transaction block"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Tom Lane\n> Sent: Monday, September 06, 1999 11:44 PM\n> To: Leon\n> Cc: Michael Simms; [email protected]\n> Subject: Re: [HACKERS] DROP TABLE inside transaction block\n>\n>\n> Leon <[email protected]> writes:\n> > Tom Lane wrote:\n> >>>> Cant you just rename to a unique name, maybee in another directory,\n> >>\n> >> Not if other backends are also accessing the table. Remember that to\n> >> make this really work, the DROP would have to be invisible to other\n> >> backends until commit.\n>\n> > Is that really needed? Remember that table's creation is not transparent\n> > to other users - when someone attempts to create a table, others,\n> > though can't see that table, cannot create a table with the same name.\n> > So you can simply issue a draconian-level lock on a table being deleted.\n>\n> That's a good point --- we acquire exclusive lock anyway on a table\n> about to be deleted, so just holding that lock till end of transaction\n> should prevent other backends from trying to touch the table.\n>\n\nThat reminds me.\nDROP TABLE doesn't hold exlusive lock till end of transaction.\nUnlockRelation() seems too early.\nHere is a patch.\n\nSeems ALTER TABLE doesn't acquire any lock for the target\nrelation. It's OK ?\n\nregards.\n\nHiroshi Inoue\[email protected]\n\n*** catalog/heap.c.orig\tTue Sep 7 08:52:04 1999\n--- catalog/heap.c\tTue Sep 7 08:58:16 1999\n***************\n*** 1330,1336 ****\n\n \trel->rd_nonameunlinked = TRUE;\n\n- \tUnlockRelation(rel, AccessExclusiveLock);\n\n \theap_close(rel);\n\n--- 1330,1335 ----\n\n",
"msg_date": "Tue, 7 Sep 1999 10:13:05 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] DROP TABLE inside transaction block "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> There are a bunch of subtleties to be dealt with though. A couple of\n> gotchas I can think of offhand: better flush dirty buffers for the\n> target rel before doing the rename, else another backend might try to\n> do it between DROP and COMMIT, and write to the wrong file name. The\n\nBTW, I'm going to use relation oid as relation file name for WAL:\nit would be bad to store relname in log records for each updated\ntuple and it would be hard to scan pg_class to get relname from\nreloid in recovery.\n\n> renaming at abort time has to be done in the right order relative to\n> dropping tables created during the xact, or else BEGIN; DROP TABLE foo;\n> CREATE TABLE foo; ABORT won't work right. Currently, an attempt to\n> lock a table always involves making a relcache entry first, and the\n> relcache will try to open the underlying files as soon as you do that,\n> so other backends trying to touch the dying table for the first time\n> would get unexpected error messages. Probably a few other things.\n> \n> In short, a lot of work for a very marginal feature. How many other\n> DBMSes permit DROP TABLE to be rolled back? How many users care?\n\nOracle auto-commits current in-progress transaction before\nexecution of any DDL statement and executes such statements in\nseparate transaction. \n\nVadim\n",
"msg_date": "Tue, 07 Sep 1999 09:44:19 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DROP TABLE inside transaction block"
},
{
"msg_contents": "> > renaming at abort time has to be done in the right order relative to\n> > dropping tables created during the xact, or else BEGIN; DROP TABLE foo;\n> > CREATE TABLE foo; ABORT won't work right. Currently, an attempt to\n> > lock a table always involves making a relcache entry first, and the\n> > relcache will try to open the underlying files as soon as you do that,\n> > so other backends trying to touch the dying table for the first time\n> > would get unexpected error messages. Probably a few other things.\n> > \n> > In short, a lot of work for a very marginal feature. How many other\n> > DBMSes permit DROP TABLE to be rolled back? How many users care?\n> \n> Oracle auto-commits current in-progress transaction before\n> execution of any DDL statement and executes such statements in\n> separate transaction. \n\nThat's cheating!\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 6 Sep 1999 22:53:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DROP TABLE inside transaction block"
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n>> That's a good point --- we acquire exclusive lock anyway on a table\n>> about to be deleted, so just holding that lock till end of transaction\n>> should prevent other backends from trying to touch the table.\n\n> That reminds me.\n> DROP TABLE doesn't hold exlusive lock till end of transaction.\n> UnlockRelation() seems too early.\n\nI wondered about that too --- but I didn't change it because I wasn't\nsure it was wrong. Vadim, what do you think?\n\n> Seems ALTER TABLE doesn't acquire any lock for the target\n> relation. It's OK ?\n\nNone? Yipes. Seems to me it should *definitely* be grabbing\nAccessExclusiveLock.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 06 Sep 1999 23:00:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] DROP TABLE inside transaction block "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> >> That's a good point --- we acquire exclusive lock anyway on a table\n> >> about to be deleted, so just holding that lock till end of transaction\n> >> should prevent other backends from trying to touch the table.\n> \n> > That reminds me.\n> > DROP TABLE doesn't hold exlusive lock till end of transaction.\n> > UnlockRelation() seems too early.\n> \n> I wondered about that too --- but I didn't change it because I wasn't\n> sure it was wrong. Vadim, what do you think?\n\nI remember that Hiroshi reported about this already and\nseems we decided to remove UnlockRelation from heap_destroy_with_catalog(),\nbut forgot to do it?\n\n> \n> > Seems ALTER TABLE doesn't acquire any lock for the target\n> > relation. It's OK ?\n> \n> None? Yipes. Seems to me it should *definitely* be grabbing\n> AccessExclusiveLock.\n\nYes.\n\nVadim\n",
"msg_date": "Tue, 07 Sep 1999 11:09:40 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DROP TABLE inside transaction block"
},
{
"msg_contents": ">> \n>> Oracle auto-commits current in-progress transaction before\n>> execution of any DDL statement and executes such statements in\n>> separate transaction. \n>\n>That's cheating!\n>\n\nDec (Oracle) Rdb cheats by locking a tables meta-data as soon as any user\naccesses it, so that 'alter/drop table' will not run while that user is\nattached. But is does support meta-data changes inside transactions\n(assuming no-else who is currently connected has ever read that particular\nmeta-data). It is nice being able to rollback 'alter table' statements,\neven under these strong restrictions.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Tue, 07 Sep 1999 13:19:24 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DROP TABLE inside transaction block"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > > renaming at abort time has to be done in the right order relative to\n> > > dropping tables created during the xact, or else BEGIN; DROP TABLE foo;\n> > > CREATE TABLE foo; ABORT won't work right. Currently, an attempt to\n> > > lock a table always involves making a relcache entry first, and the\n> > > relcache will try to open the underlying files as soon as you do that,\n> > > so other backends trying to touch the dying table for the first time\n> > > would get unexpected error messages. Probably a few other things.\n> > >\n> > > In short, a lot of work for a very marginal feature. How many other\n> > > DBMSes permit DROP TABLE to be rolled back? How many users care?\n> >\n> > Oracle auto-commits current in-progress transaction before\n> > execution of any DDL statement and executes such statements in\n> > separate transaction.\n> \n> That's cheating!\n\nMaybe :))\nBut sql3-12aug93 says:\n\n 4.41 SQL-transactions\n\n\n An SQL-transaction (transaction) is a sequence of executions of \n SQL-statements that is atomic with respect to recovery. These oper-\n ations are performed by one or more compilation units and <module>s\n or by the direct invocation of SQL.\n\n It is implementation-defined whether or not the non-dynamic or \n ^^^^^^^^^^^^^^^^^^^^^^\n dynamic execution of an SQL-data statement or the execution of\n an <SQL dynamic data statement> is permitted to occur within the\n same SQL-transaction as the non-dynamic or dynamic execution of\n an SQL-schema statement. If it does occur, then the effect on any\n ^^^^^^^^^^^^^^^^^^^^\n open cursor, prepared dynamic statement, or deferred constraint\n is implementation-defined. There may be additional implementation-\n defined restrictions, requirements, and conditions. 
If any such\n restrictions, requirements, or conditions are violated, then an\n implementation-defined exception condition or a completion con-\n dition warning with an implementation-defined subclass code is\n raised.\n\nVadim\n",
"msg_date": "Tue, 07 Sep 1999 12:13:39 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DROP TABLE inside transaction block"
},
{
"msg_contents": "\n\nTom Lane ha scritto:\n\n> Pursuant to a phone conversation I had with Bruce, I added code this\n> morning to reject DROP TABLE or DROP INDEX inside a transaction block;\n> that is, you can't do BEGIN; DROP TABLE foo; END anymore. The reason\n> for rejecting this case is that we do the wrong thing if the transaction\n> is later aborted. Following BEGIN; DROP TABLE foo; ABORT, the system\n> tables will claim that foo is still valid (since the changes to them\n> were never committed) but we've already unlinked foo's physical file,\n> and we can't get it back. Solution: only allow DROP TABLE outside\n> BEGIN, so that the user can't try to change his mind later.\n>\n> However, on second thought I wonder if this cure is worse than the\n> disease. Will it be unreasonably hard to drop tables using client\n> interfaces that like to wrap everything in BEGIN/END? Plugging an\n> obscure hole might not be worth that.\n>\n> A possible compromise is not to error out, but just to issue a NOTICE\n> along the lines of \"DROP TABLE is not undoable, so don't even think of\n> trying to abort now...\"\n>\n> (Of course, what would be really nice is if it just worked, but I don't\n> see any way to make that happen without major changes. Simply\n> postponing the unlink to end of transaction isn't workable; consider\n> BEGIN; DROP TABLE foo; CREATE TABLE foo; ...)\n>\n> Any thoughts? Will there indeed be a problem with JDBC or ODBC if we\n> leave this error check in place?\n>\n> regards, tom lane\n>\n> ************\n>\n> ************\n\nSeems a good solution. I have an old note about this problem.\nWhat about to reject also the following commands inside transactions?\n\n\n* BUGS: There are some commands that doesn't work properly\n inside transactions. 
Users should NOT use the following\n statements inside transactions:\n\n - DROP TABLE -- in case of ROLLBACK only table structure\n will be recovered, data will be\nlost.\n - CREATE VIEWS -- the behavior of the backend is unpredictable.\n - ALTER TABLE -- the behavior of the backend is unpredictable.\n - CREATE DATABASE -- in case of ROLLBACK will be removed references\n from \"pg_database\" but directory\n$PGDATA/databasename will not be removed.\n\nJos�\n\n\n\n",
"msg_date": "Tue, 07 Sep 1999 14:55:56 +0200",
"msg_from": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DROP TABLE inside transaction block"
},
{
"msg_contents": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]> writes:\n> Seems a good solution. I have an old note about this problem.\n> What about to reject also the following commands inside transactions?\n\n> * BUGS: There are some commands that doesn't work properly\n> inside transactions. Users should NOT use the following\n> statements inside transactions:\n\n> - DROP TABLE -- in case of ROLLBACK only table structure\n> will be recovered, data will be\n> lost.\n> - CREATE VIEWS -- the behavior of the backend is unpredictable.\n> - ALTER TABLE -- the behavior of the backend is unpredictable.\n> - CREATE DATABASE -- in case of ROLLBACK will be removed references\n> from \"pg_database\" but directory\n> $PGDATA/databasename will not be removed.\n\nCREATE DATABASE (and presumably also DROP DATABASE) probably should\nrefuse to run inside a transaction.\n\nI see no good reason that CREATE VIEW or ALTER TABLE should not work\ncleanly in a transaction. It may be that they have bugs interfering\nwith that (for example, Hiroshi just pointed out that ALTER TABLE\nseems not to be locking the table, which is surely bogus).\n\nThe main reason that DROP TABLE is an issue is that it alters the\nunderlying Unix file structure, which means we can't just rely on the\nnormal transaction mechanisms of committed/uncommitted tuples to handle\nrollback. ALTER TABLE doesn't do anything except change tuples.\nCREATE VIEW is a CREATE TABLE plus tuple changes (and while CREATE TABLE\ndoes alter the file structure by making a new file, we have extra code\nin there to handle rolling it back). So it seems like they oughta work.\n\nRENAME TABLE is another thing that can't currently be rolled back,\nbecause it renames the underlying Unix files and there's no mechanism\nto undo that. (RENAME TABLE is missing a lock too...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 07 Sep 1999 09:53:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] DROP TABLE inside transaction block "
},
{
"msg_contents": "\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Tom Lane\n> Sent: Tuesday, September 07, 1999 10:54 PM\n> To: Jos�Soares\n> Cc: [email protected]\n> Subject: Re: [HACKERS] DROP TABLE inside transaction block\n>\n>\n> =?iso-8859-1?Q?Jos=E9?= Soares <[email protected]> writes:\n> > Seems a good solution. I have an old note about this problem.\n> > What about to reject also the following commands inside transactions?\n>\n> > * BUGS: There are some commands that doesn't work properly\n> > inside transactions. Users should NOT use the following\n> > statements inside transactions:\n>\n> > - DROP TABLE -- in case of ROLLBACK only table structure\n> > will be recovered, data will be\n> > lost.\n> > - CREATE VIEWS -- the behavior of the backend is unpredictable.\n> > - ALTER TABLE -- the behavior of the backend is unpredictable.\n> > - CREATE DATABASE -- in case of ROLLBACK will be removed references\n> > from \"pg_database\" but directory\n> > $PGDATA/databasename will not be removed.\n>\n> CREATE DATABASE (and presumably also DROP DATABASE) probably should\n> refuse to run inside a transaction.\n>\n\nProbably VACUUM should also refuse to run inside transactions.\nVACUUM has a phase like commit in the middle of execution.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Wed, 8 Sep 1999 09:29:44 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] DROP TABLE inside transaction block "
},
{
"msg_contents": "\nMy guess is that your new open routines with locking have fixed this.\n\n\n> Pursuant to a phone conversation I had with Bruce, I added code this\n> morning to reject DROP TABLE or DROP INDEX inside a transaction block;\n> that is, you can't do BEGIN; DROP TABLE foo; END anymore. The reason\n> for rejecting this case is that we do the wrong thing if the transaction\n> is later aborted. Following BEGIN; DROP TABLE foo; ABORT, the system\n> tables will claim that foo is still valid (since the changes to them\n> were never committed) but we've already unlinked foo's physical file,\n> and we can't get it back. Solution: only allow DROP TABLE outside\n> BEGIN, so that the user can't try to change his mind later.\n> \n> However, on second thought I wonder if this cure is worse than the\n> disease. Will it be unreasonably hard to drop tables using client\n> interfaces that like to wrap everything in BEGIN/END? Plugging an\n> obscure hole might not be worth that.\n> \n> A possible compromise is not to error out, but just to issue a NOTICE\n> along the lines of \"DROP TABLE is not undoable, so don't even think of\n> trying to abort now...\"\n> \n> (Of course, what would be really nice is if it just worked, but I don't\n> see any way to make that happen without major changes. Simply\n> postponing the unlink to end of transaction isn't workable; consider\n> BEGIN; DROP TABLE foo; CREATE TABLE foo; ...)\n> \n> Any thoughts? Will there indeed be a problem with JDBC or ODBC if we\n> leave this error check in place?\n> \n> \t\t\tregards, tom lane\n> \n> ************\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 28 Sep 1999 00:08:12 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DROP TABLE inside transaction block"
},
{
"msg_contents": "\nAny comment on this?\n\n[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> \n> \n> Tom Lane ha scritto:\n> \n> > Pursuant to a phone conversation I had with Bruce, I added code this\n> > morning to reject DROP TABLE or DROP INDEX inside a transaction block;\n> > that is, you can't do BEGIN; DROP TABLE foo; END anymore. The reason\n> > for rejecting this case is that we do the wrong thing if the transaction\n> > is later aborted. Following BEGIN; DROP TABLE foo; ABORT, the system\n> > tables will claim that foo is still valid (since the changes to them\n> > were never committed) but we've already unlinked foo's physical file,\n> > and we can't get it back. Solution: only allow DROP TABLE outside\n> > BEGIN, so that the user can't try to change his mind later.\n> >\n> > However, on second thought I wonder if this cure is worse than the\n> > disease. Will it be unreasonably hard to drop tables using client\n> > interfaces that like to wrap everything in BEGIN/END? Plugging an\n> > obscure hole might not be worth that.\n> >\n> > A possible compromise is not to error out, but just to issue a NOTICE\n> > along the lines of \"DROP TABLE is not undoable, so don't even think of\n> > trying to abort now...\"\n> >\n> > (Of course, what would be really nice is if it just worked, but I don't\n> > see any way to make that happen without major changes. Simply\n> > postponing the unlink to end of transaction isn't workable; consider\n> > BEGIN; DROP TABLE foo; CREATE TABLE foo; ...)\n> >\n> > Any thoughts? Will there indeed be a problem with JDBC or ODBC if we\n> > leave this error check in place?\n> >\n> > regards, tom lane\n> >\n> > ************\n> >\n> > ************\n> \n> Seems a good solution. I have an old note about this problem.\n> What about to reject also the following commands inside transactions?\n> \n> \n> * BUGS: There are some commands that doesn't work properly\n> inside transactions. 
Users should NOT use the following\n> statements inside transactions:\n> \n> - DROP TABLE -- in case of ROLLBACK only table structure\n> will be recovered, data will be\n> lost.\n> - CREATE VIEWS -- the behavior of the backend is unpredictable.\n> - ALTER TABLE -- the behavior of the backend is unpredictable.\n> - CREATE DATABASE -- in case of ROLLBACK will be removed references\n> from \"pg_database\" but directory\n> $PGDATA/databasename will not be removed.\n> \n> Jos_\n> \n> \n> \n> \n> ************\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 28 Sep 1999 00:08:39 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DROP TABLE inside transaction block"
},
{
"msg_contents": "\nSeems like good comments on these items. Anything for TODO list here?\n\n\n> =?iso-8859-1?Q?Jos=E9?= Soares <[email protected]> writes:\n> > Seems a good solution. I have an old note about this problem.\n> > What about to reject also the following commands inside transactions?\n> \n> > * BUGS: There are some commands that doesn't work properly\n> > inside transactions. Users should NOT use the following\n> > statements inside transactions:\n> \n> > - DROP TABLE -- in case of ROLLBACK only table structure\n> > will be recovered, data will be\n> > lost.\n> > - CREATE VIEWS -- the behavior of the backend is unpredictable.\n> > - ALTER TABLE -- the behavior of the backend is unpredictable.\n> > - CREATE DATABASE -- in case of ROLLBACK will be removed references\n> > from \"pg_database\" but directory\n> > $PGDATA/databasename will not be removed.\n> \n> CREATE DATABASE (and presumably also DROP DATABASE) probably should\n> refuse to run inside a transaction.\n> \n> I see no good reason that CREATE VIEW or ALTER TABLE should not work\n> cleanly in a transaction. It may be that they have bugs interfering\n> with that (for example, Hiroshi just pointed out that ALTER TABLE\n> seems not to be locking the table, which is surely bogus).\n> \n> The main reason that DROP TABLE is an issue is that it alters the\n> underlying Unix file structure, which means we can't just rely on the\n> normal transaction mechanisms of committed/uncommitted tuples to handle\n> rollback. ALTER TABLE doesn't do anything except change tuples.\n> CREATE VIEW is a CREATE TABLE plus tuple changes (and while CREATE TABLE\n> does alter the file structure by making a new file, we have extra code\n> in there to handle rolling it back). So it seems like they oughta work.\n> \n> RENAME TABLE is another thing that can't currently be rolled back,\n> because it renames the underlying Unix files and there's no mechanism\n> to undo that. 
(RENAME TABLE is missing a lock too...)\n> \n> \t\t\tregards, tom lane\n> \n> ************\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 28 Sep 1999 00:09:10 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DROP TABLE inside transaction block"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Seems like good comments on these items. Anything for TODO list here?\n\nActually, the current state of play is that I reduced the ERROR messages\nto NOTICEs in DROP TABLE and DROP INDEX (\"NOTICE: DROP TABLE cannot be\nrolled back, so don't abort now\"), since there seemed to be some\nunhappiness about making them hard errors. I also put similar messages\ninto RENAME TABLE and TRUNCATE TABLE.\n\nI have a personal TODO item to go and insert some more checks: per the\ndiscussions so far, CREATE/DROP DATABASE probably need similar messages,\nand I think we need to make VACUUM refuse to run inside a transaction\nblock at all (since its internal commits will not do the intended thing\nif you do BEGIN; VACUUM). Also on my list is to investigate these\nreports that CREATE VIEW and ALTER TABLE don't roll back cleanly ---\nthere may be bugs lurking there. If you want to add those to the\npublic list, go ahead.\n\n\t\t\tregards, tom lane\n\n\n>> =?iso-8859-1?Q?Jos=E9?= Soares <[email protected]> writes:\n>>>> Seems a good solution. I have an old note about this problem.\n>>>> What about to reject also the following commands inside transactions?\n>> \n>>>> * BUGS: There are some commands that doesn't work properly\n>>>> inside transactions. 
Users should NOT use the following\n>>>> statements inside transactions:\n>> \n>>>> - DROP TABLE -- in case of ROLLBACK only table structure\n>>>> will be recovered, data will be\n>>>> lost.\n>>>> - CREATE VIEWS -- the behavior of the backend is unpredictable.\n>>>> - ALTER TABLE -- the behavior of the backend is unpredictable.\n>>>> - CREATE DATABASE -- in case of ROLLBACK will be removed references\n>>>> from \"pg_database\" but directory\n>>>> $PGDATA/databasename will not be removed.\n>> \n>> CREATE DATABASE (and presumably also DROP DATABASE) probably should\n>> refuse to run inside a transaction.\n>> \n>> I see no good reason that CREATE VIEW or ALTER TABLE should not work\n>> cleanly in a transaction. It may be that they have bugs interfering\n>> with that (for example, Hiroshi just pointed out that ALTER TABLE\n>> seems not to be locking the table, which is surely bogus).\n>> \n>> The main reason that DROP TABLE is an issue is that it alters the\n>> underlying Unix file structure, which means we can't just rely on the\n>> normal transaction mechanisms of committed/uncommitted tuples to handle\n>> rollback. ALTER TABLE doesn't do anything except change tuples.\n>> CREATE VIEW is a CREATE TABLE plus tuple changes (and while CREATE TABLE\n>> does alter the file structure by making a new file, we have extra code\n>> in there to handle rolling it back). So it seems like they oughta work.\n>> \n>> RENAME TABLE is another thing that can't currently be rolled back,\n>> because it renames the underlying Unix files and there's no mechanism\n>> to undo that. (RENAME TABLE is missing a lock too...)\n",
"msg_date": "Tue, 28 Sep 1999 09:40:17 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] DROP TABLE inside transaction block "
},
{
"msg_contents": "Tom Lane is working on this, and it should be improved for 6.6.\n\n\n[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> \n> \n> Tom Lane ha scritto:\n> \n> > Pursuant to a phone conversation I had with Bruce, I added code this\n> > morning to reject DROP TABLE or DROP INDEX inside a transaction block;\n> > that is, you can't do BEGIN; DROP TABLE foo; END anymore. The reason\n> > for rejecting this case is that we do the wrong thing if the transaction\n> > is later aborted. Following BEGIN; DROP TABLE foo; ABORT, the system\n> > tables will claim that foo is still valid (since the changes to them\n> > were never committed) but we've already unlinked foo's physical file,\n> > and we can't get it back. Solution: only allow DROP TABLE outside\n> > BEGIN, so that the user can't try to change his mind later.\n> >\n> > However, on second thought I wonder if this cure is worse than the\n> > disease. Will it be unreasonably hard to drop tables using client\n> > interfaces that like to wrap everything in BEGIN/END? Plugging an\n> > obscure hole might not be worth that.\n> >\n> > A possible compromise is not to error out, but just to issue a NOTICE\n> > along the lines of \"DROP TABLE is not undoable, so don't even think of\n> > trying to abort now...\"\n> >\n> > (Of course, what would be really nice is if it just worked, but I don't\n> > see any way to make that happen without major changes. Simply\n> > postponing the unlink to end of transaction isn't workable; consider\n> > BEGIN; DROP TABLE foo; CREATE TABLE foo; ...)\n> >\n> > Any thoughts? Will there indeed be a problem with JDBC or ODBC if we\n> > leave this error check in place?\n> >\n> > regards, tom lane\n> >\n> > ************\n> >\n> > ************\n> \n> Seems a good solution. I have an old note about this problem.\n> What about to reject also the following commands inside transactions?\n> \n> \n> * BUGS: There are some commands that doesn't work properly\n> inside transactions. 
Users should NOT use the following\n> statements inside transactions:\n> \n> - DROP TABLE -- in case of ROLLBACK only table structure\n> will be recovered, data will be\n> lost.\n> - CREATE VIEWS -- the behavior of the backend is unpredictable.\n> - ALTER TABLE -- the behavior of the backend is unpredictable.\n> - CREATE DATABASE -- in case of ROLLBACK will be removed references\n> from \"pg_database\" but directory\n> $PGDATA/databasename will not be removed.\n> \n> Jos_\n> \n> \n> \n> \n> ************\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 28 Sep 1999 11:26:34 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DROP TABLE inside transaction block"
}
] |
[
{
"msg_contents": "This is a warning and a request for a change in pg_dumpall behaviour.\n\nOne of my co-workers accidentally used pg_dumpall instead of pg_dump \ngiving it also a dbname argument. According to man pg_dumpall:\n\n pg_dumpall takes all pg_dump options, but -f and dbname should \n not be used.\n\nthe results of using dbname are quite bizarre - namely it dumps \nstatements for creating all existing databases, but inside them \nit puts the contents of the database given by dbname !\n\nAs this feature seems to be totally useless, I suggest that \npg_dumpall be modified to produce an error when given a dbname \nargument instead of silently producing a mostly useless db dump.\n\nIn our case this went unnoticed until he tried to recreate his \ndatabase by doing 'psql dbname <dumpfile', which resulted in \ndestroying the pg_user table and messing up many other databases :(\n\nIf this can't be changed, at least the behaviour should be \ndocumented more thoroughly in big red letters.\n\n---------\nHannu\n",
"msg_date": "Mon, 06 Sep 1999 20:21:02 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": true,
"msg_subject": "Troubles from using pg_dumpall with dbname"
}
] |
[
{
"msg_contents": "\nI am using postgresql 6.5.1.\n\nI have begun a transaction and inserted a lot of tuples in the \ntransaction. The simplified code is as follows:\n\n----------------------------------\n res = PQexec(conn,\"BEGIN\");\n PQclear(res);\n\n PQexec(conn, \"INSERT INTO qms_table (idr1, idr2, sequence) VALUES ('chu1', 'wind1', 0)\" );\n PQclear(res);\n\n PQexec(conn, \"INSERT INTO qms_table (idr1, idr2, sequence) VALUES ('chu1', 'wind1', 1)\" );\n PQclear(res);\n\n ....\n\n res = PQexec(conn,\"END\");\n PQclear(res);\n-------------------------------\n\n But I got a message as:\n\n Backend message type 0x45 arrived while idle\n\n When inserting the third or fourth tuple, the backend process exits.\n\n Does anybody know what the message type 0x45 means? In what document can I find\nthe related information? And does anybody know what may be the reason that caused the\nproblem?\n Thanks\n",
"msg_date": "Tue, 07 Sep 1999 14:32:03 +0800",
"msg_from": "Yann-Ju Chu <[email protected]>",
"msg_from_op": true,
"msg_subject": "problem about message type 0x45"
},
{
"msg_contents": "Yann-Ju Chu <[email protected]> writes:\n> But I got a message as:\n> Backend message type 0x45 arrived while idle\n> When inserting the third or forth tuples, and the backend process\n> exits. Does anybody know what the message type 0x45 means? What\n> document can I find the related information? And does any body\n> know what may be the reason casued the problem?\n\n0x45 = 'E' would be an error message. If you look in the postmaster\nlog file you should see the error being logged. My guess is that the\nbackend is crashing, and is managing to output an error message just\nbefore it goes down; but libpq isn't expecting any error message and\nfails to cope.\n\nThere's not enough info here to figure out why the backend is crashing.\nThe error message might help...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 07 Sep 1999 10:02:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] problem about message type 0x45 "
}
] |
[
{
"msg_contents": ">> In short, a lot of work for a very marginal feature. How many other\n>> DBMSes permit DROP TABLE to be rolled back? How many users care?\n>\n> Oracle auto-commits current in-progress transaction before\n> execution of any DDL statement and executes such statements in\n> separate transaction.\n\nInformix does allow rollback of ddl statements.\n\nAndreas\n",
"msg_date": "Tue, 07 Sep 1999 09:26:24 +0200",
"msg_from": "Andreas Zeugswetter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] DROP TABLE inside transaction block"
}
] |
[
{
"msg_contents": "Thanks a lot for your explanations.\nWe have started working with the snapshot and have found a problem with\nthe selectivity estimation of join clauses; you probably already know of\nthis.\nWhen the join is between attnos < 0 (such as oids), the selectivity is\nestimated as 0.5 (leading to very bad size estimates), because of this code\nin the function compute_clause_selec (clausesel.c):\n\n if (relid1 > 0 && relid2 > 0 && attno1 > 0 && attno2 > 0)\n ...\n else\n s1 = (Cost) (0.5);\n\nSo what is the aim of the last two AND conditions?\n\nBest regards\n\nRoberto Cornacchia\nAndrea Ghidini\n",
"msg_date": "Tue, 07 Sep 1999 21:15:37 +0200",
"msg_from": "Roberto Cornacchia <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] optimizer pruning problem"
},
{
"msg_contents": "> When the join is between attnos < 0 (such as oids), the selectivity is\n> estimated as 0.5 (leading to very bad size estimates), since this code\n> in function compute_clause_selec (clausesel.c):\n\n> if (relid1 > 0 && relid2 > 0 && attno1 > 0 && attno2 > 0)\n> ...\n> else\n> s1 = (Cost) (0.5);\n\n> So what is the aim of the last two and conditions?\n\nThat's a bug, I guess. -1 is used to signal \"couldn't find the\nattribute\", but there's no real need to check *both* relid and attno\nto determine that. It should consider positive relid and negative\nattno to be valid.\n\nSince vacuum doesn't record statistics for the system attributes,\nthere probably also needs to be a hack in the code that looks in\npg_statistic so that it will produce reasonable estimates. We\nshould assume that OID has perfect disbursion, for sure. I don't\nknow if we can assume anything much about the other sys attributes...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 07 Sep 1999 17:26:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] optimizer pruning problem "
},
{
"msg_contents": "> \n> > When the join is between attnos < 0 (such as oids), the selectivity is\n> > estimated as 0.5 (leading to very bad size estimates), since this code\n> > in function compute_clause_selec (clausesel.c):\n> \n> > if (relid1 > 0 && relid2 > 0 && attno1 > 0 && attno2 > 0)\n> > ...\n> > else\n> > s1 = (Cost) (0.5);\n> \n> > So what is the aim of the last two and conditions?\n> \n> That's a bug, I guess. -1 is used to signal \"couldn't find the\n> attribute\", but there's no real need to check *both* relid and attno\n> to determine that. It should consider positive relid and negative\n> attno to be valid.\n>\n> Since vacuum doesn't record statistics for the system attributes,\n> there probably also needs to be a hack in the code that looks in\n> pg_statistic so that it will produce reasonable estimates. We\n> should assume that OID has perfect disbursion, for sure. I don't\n> know if we can assume anything much about the other sys attributes...\n>\n\nCTID has perfect disbursion too. The selectivity is necessary\nin order to estimate the rows of a scan using TIDs, though we can't\nuse a WHERE restriction on ctid now.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Wed, 8 Sep 1999 18:42:50 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] optimizer pruning problem "
},
{
"msg_contents": "We have finished our work on the optimization of Top N queries (with clauses\nSTOP AFTER ... FOR EACH ... RANK BY ...); now it's time to validate it\nwith performance tests: since we are working on snapshots of the 6.6\nrelease (now we are using the snapshot dated 9/13/99) we are afraid that\ninstability problems may affect the results. Could you give us any\nsuggestions about this? We are quite close to the degree day, so we have\nto optimize time usage... \nBTW, the first results seem to be interesting.\n\nWe would like to ask you one last thing.\nWe need to estimate the number of distinct values of an attribute. We\nthought 1/disbursion was the right solution, but the results were quite\nwrong:\nwith 100 distinct values of an attribute uniformly distributed in a\nrelation of 10000 tuples, disbursion was estimated as 0.002275, giving\nus 440 distinct values. We have seen these disbursion estimates\nalso result in bad join selectivity estimates.\nCould this be due to a bad disbursion estimate, or is our solution\ncompletely wrong?\nThanks a lot\n\nRoberto Cornacchia\nAndrea Ghidini\n",
"msg_date": "Thu, 07 Oct 1999 19:47:21 +0200",
"msg_from": "Roberto Cornacchia <[email protected]>",
"msg_from_op": true,
"msg_subject": "Top N queries and disbursion"
},
{
"msg_contents": "Roberto Cornacchia <[email protected]> writes:\n> ... since we are working on snapshots of the 6.6\n> release (now we are using snapshot dated 9/13/99) we are afraid of\n> instability problems to affect the results. Could you give us any\n> suggestion about this? We are quite close to the degree day, so we have\n> to optimize time usage... \n\nIf you don't want to spend time tracking development changes then you\nprobably ought to stick with the snapshot you have. I don't see any\nreason that you should try to track changes right now...\n\n\n> We need to estimate the number of distinct values of an attribute. We\n> thought 1/disbursion was the right solution, but the results were quite\n> wrong:\n\nNo, it's certainly not the right thing. To my understanding, disbursion\nis a measure of the frequency of the most common value of an attribute;\nbut that tells you very little about how many other values there are.\n1/disbursion is a lower bound on the number of values, but it wouldn't\nbe a good estimate unless you had reason to think that the values were\npretty evenly distributed. There could be a *lot* of very-infrequent\nvalues.\n\n> with 100 distinct values of an attribute uniformly distribuited in a\n> relation of 10000 tuples, disbursion was estimated as 0.002275, giving\n> us 440 distinct values.\n\nThis is an illustration of the fact that Postgres' disbursion-estimator\nis pretty bad :-(. It usually underestimates the frequency of the most\ncommon value, unless the most common value is really frequent\n(probability > 0.2 or so). I've been trying to think of a more accurate\nway of figuring the statistic that wouldn't be unreasonably slow.\nOr, perhaps, we should forget all about disbursion and adopt some other\nstatistic(s).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Oct 1999 19:16:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Top N queries and disbursion "
},
{
"msg_contents": "> No, it's certainly not the right thing. To my understanding, disbursion\n> is a measure of the frequency of the most common value of an attribute;\n> but that tells you very little about how many other values there are.\n> 1/disbursion is a lower bound on the number of values, but it wouldn't\n> be a good estimate unless you had reason to think that the values were\n> pretty evenly distributed. There could be a *lot* of very-infrequent\n> values.\n> \n> > with 100 distinct values of an attribute uniformly distribuited in a\n> > relation of 10000 tuples, disbursion was estimated as 0.002275, giving\n> > us 440 distinct values.\n> \n> This is an illustration of the fact that Postgres' disbursion-estimator\n> is pretty bad :-(. It usually underestimates the frequency of the most\n> common value, unless the most common value is really frequent\n> (probability > 0.2 or so). I've been trying to think of a more accurate\n> way of figuring the statistic that wouldn't be unreasonably slow.\n> Or, perhaps, we should forget all about disbursion and adopt some other\n> statistic(s).\n\nYes, you have the crux of the issue. I wrote it because it was the best\nthing I could think of, but it is non-optimal. Because all the\noptimal solutions seemed too slow to me, I couldn't think of a better\none.\n\nHere is my narrative on it from vacuum.c:\n\n---------------------------------------------------------------------------\n\n * We compute the column min, max, null and non-null counts.\n * Plus we attempt to find the count of the value that occurs most\n * frequently in each column\n * These figures are used to compute the selectivity of the column\n *\n * We use a three-bucket cache to get the most frequent item\n * The 'guess' buckets count hits. A cache miss causes guess1\n * to get the most hit 'guess' item in the most recent cycle, and\n * the new item goes into guess2. 
Whenever the total count of hits\n * of a 'guess' entry is larger than 'best', 'guess' becomes 'best'.\n *\n * This method works perfectly for columns with unique values, and columns\n * with only two unique values, plus nulls.\n *\n * It becomes less perfect as the number of unique values increases and\n * their distribution in the table becomes more random.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 7 Oct 1999 19:53:17 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Top N queries and disbursion"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > No, it's certainly not the right thing. To my understanding, disbursion\n> > is a measure of the frequency of the most common value of an attribute;\n> > but that tells you very little about how many other values there are.\n> > 1/disbursion is a lower bound on the number of values, but it wouldn't\n> > be a good estimate unless you had reason to think that the values were\n> > pretty evenly distributed. There could be a *lot* of very-infrequent\n> > values.\n> >\n> > > with 100 distinct values of an attribute uniformly distribuited in a\n> > > relation of 10000 tuples, disbursion was estimated as 0.002275, giving\n> > > us 440 distinct values.\n> >\n> > This is an illustration of the fact that Postgres' disbursion-estimator\n> > is pretty bad :-(. It usually underestimates the frequency of the most\n> > common value, unless the most common value is really frequent\n> > (probability > 0.2 or so). I've been trying to think of a more accurate\n> > way of figuring the statistic that wouldn't be unreasonably slow.\n> > Or, perhaps, we should forget all about disbursion and adopt some other\n> > statistic(s).\n> \n> Yes, you have the crux of the issue. I wrote it because it was the best\n> thing I could think of, but it is non-optimimal. Because all the\n> optimal solutions seemed too slow to me, I couldn't think of a better\n> one.\n\nThank you, Tom and Bruce.\nThis is not good news for us :-(. In any case, is 1/disbursion the\nbest estimate we can have for now, even if not optimal?\n\nRoberto Cornacchia\nAndrea Ghidini\n\n",
"msg_date": "Fri, 08 Oct 1999 15:18:42 +0200",
"msg_from": "Roberto Cornacchia <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: Top N queries and disbursion"
},
{
"msg_contents": "Roberto Cornacchia <[email protected]> writes:\n>>>> 1/disbursion is a lower bound on the number of values, but it wouldn't\n>>>> be a good estimate unless you had reason to think that the values were\n>>>> pretty evenly distributed.\n\n> Thank you, Tom and Bruce.\n> This is not a good news for us :-(. In any case, is 1/disbursion the\n> best estimate we can have by now, even if not optimal?\n\nI don't have a better idea right at the moment. I'm open to the idea\nthat VACUUM should compute more or different statistics, though ---\nas long as it doesn't slow things down too much. (How much is too much\nwould probably depend on how much win the new stats would provide for\nnormal query-planning. For example, I'd resist making two passes over\nthe table during VACUUM ANALYZE, but I wouldn't rule it out completely;\nyou could sell me on it if the advantages were great enough.)\n\nHey, you guys are the researchers ... give us a better approach to\nkeeping table statistics ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Oct 1999 10:24:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Top N queries and disbursion "
},
{
"msg_contents": "> > Yes, you have the crux of the issue. I wrote it because it was the best\n> > thing I could think of, but it is non-optimimal. Because all the\n> > optimal solutions seemed too slow to me, I couldn't think of a better\n> > one.\n> \n> Thank you, Tom and Bruce.\n> This is not a good news for us :-(. In any case, is 1/disbursion the\n> best estimate we can have by now, even if not optimal?\n\nThat is the best one maintained by the database.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 8 Oct 1999 12:19:31 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Top N queries and disbursion"
},
{
"msg_contents": "> I don't have a better idea right at the moment. I'm open to the idea\n> that VACUUM should compute more or different statistics, though ---\n> as long as it doesn't slow things down too much. (How much is too much\n> would probably depend on how much win the new stats would provide for\n> normal query-planning. For example, I'd resist making two passes over\n> the table during VACUUM ANALYZE, but I wouldn't rule it out completely;\n> you could sell me on it if the advantages were great enough.)\n> \n> Hey, you guys are the researchers ... give us a better approach to\n> keeping table statistics ;-)\n\nYes, I am open to better ideas. The current code handles 2-value columns\nand unique columns perfectly. For the other distributions it gets only\napproximate answers, but for a one-pass system, that's the best I could\ndo.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 8 Oct 1999 12:33:36 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Top N queries and disbursion"
}
] |
[
{
"msg_contents": "\nThis is an error I am getting with 6.3.2\nunder freebsd 2.2.8\nMon Sep 6 16:44:48 BST 1999: NOTICE: SIAssignBackendId: discarding tag 2147483020\nMon Sep 6 16:44:48 BST 1999: FATAL 1: Backend cache invalidation initialization failed\n\n\nOur database is about 38 megabytes. We do have about 20 - 30 simultaneous connections. Most accessing the same\ntables with transactions.\n\nWe see the above error quite often, and once it happens we have to restart postgres and our application.\nWe are accessing the db via jdbc.\n\nWe do not use any large objects (since they caused constant backend crashes for us)\n\n1) Can anyone explain what this error is, and if there is something we can do to work around it.\n\n2) is 6.5 stable enough for 23.5x7 production applications.\n \n3) are large objects stable in 6.5 (where I can store and access 20,000 of them regularly)\n\n4) if all my sql works in 6.3.2 will it need any changes to run under 6.5\n\nI am losing the -- flush this free crap and move to oracle -- war here, so please help.\n",
"msg_date": "Tue, 07 Sep 1999 14:40:10 -0700",
"msg_from": "Jason Venner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Stability questions RE 6.5 and 6.3.2 & 6.3.2 problems"
},
{
"msg_contents": "> This is an error I am getting with 6.3.2\n> under freebsd 2.2.8\n> Mon Sep 6 16:44:48 BST 1999: NOTICE: SIAssignBackendId: discarding tag 2147483020\n> Mon Sep 6 16:44:48 BST 1999: FATAL 1: Backend cache invalidation initialization failed\n> \n> \n> Our database is about 38megabyes. We do have about 20 - 30 simultaneous connections. Most accessing the same\n> tables with transactions.\n> \n> We see the above error quite often, and once it happens we have to restart postgress and our application.\n> We are accessing the db via jdbc.\n> \n> We do not use any large objects (since they caused constant backend crashes for us)\n> \n> 1) Can anyone explain what this error is, and if there is something we can do to work around it.\n\nIt is caused by a corrupted shared cache. No workaround exists for\n6.3.2 as far as I know.\n\n> 2) is 6.5 stable enough for 23.5x7 production applications.\n\n6.5 is much more stable than pre-6.5 releases, including 6.3.2. Even 128 simultaneous\nconnections are fine if properly configured.\n\n> 3) are large objects stable in 6.5 (where I can store and access 20,000 of them regularily)\n\nYes, except that 20,000 large objects would be slow (20,000 large\nobjects will create 40,000 files right under the database directory,\nwhich would make directory lookup slow). If you are serious about using\nmany large objects, I could probably supply patches to enhance the\nperformance as a workaround.\n\n> 4) if all my sql works in 6.3.2 will it need any changes to run under 6.5\n\nIt depends on your sql. But basically you should need\nvery few changes, I believe.\n\n> I am loosing the -- flush this free crap and move to oracle -- war here, so please help.\n\nI'm winning over commercial DBMSs in many projects since 6.5 was\nreleased. Thanks for the row-level locking and MVCC stuff.\n---\nTatsuo Ishii\n",
"msg_date": "Wed, 08 Sep 1999 09:42:21 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Stability questions RE 6.5 and 6.3.2 & 6.3.2 problems "
},
{
"msg_contents": "Jason Venner <[email protected]> writes:\n> This is an error I am getting with 6.3.2\n> under freebsd 2.2.8\n> Mon Sep 6 16:44:48 BST 1999: NOTICE: SIAssignBackendId: discarding tag 2147483020\n\nI believe this is a symptom of running out of per-backend slots in the\nSI (shared inval) communication area. Theoretically that should not\nhappen until you try to start the 65th concurrent backend.\n\n> We do have about 20 - 30 simultaneous connections.\n\nCould it be getting up to a peak of 65 or more?\n\nYou could try increasing MaxBackendId in include/storage/sinvaladt.h,\nbut a much better answer is to update to 6.5, which supports easy\nalteration of the max number of backends (and doesn't die horribly\nwhen you hit the limit, either).\n\n> We do not use any large objects (since they caused constant backend\n> crashes for us)\n\nQuite a few large-object bugs have been fixed since 6.3.2. In fact,\nquite a few bugs of many descriptions have been fixed since 6.3.2.\n\n> 2) is 6.5 stable enough for 23.5x7 production applications.\n\nMuch more so than 6.3.2, for sure. You should actually use 6.5.1,\nor wait a few more days for 6.5.2 which has a few more bugs fixed\n(or grab the 6.5.2 beta tarball from a week or so back, or pull the\nREL6_5_PATCHES branch from the CVS repository).\n\n> 3) are large objects stable in 6.5 (where I can store and access\n> 20,000 of them regularily)\n\nThey're stable, but 20000 of them will be pretty slow (you'll end up\nwith 40000 separate files in your DB directory :-(). There has been\ntalk of fixing this by keeping multiple large objects in one file,\nbut I'd rather see the effort go into allowing tuples larger than\none disk block, which would eliminate the need for large objects\naltogether...\n\n> 4) if all my sql works in 6.3.2 will it need any changes to run under 6.5\n\nShould pretty much work. 
There are a few gotchas such as words that\nare reserved keywords now that weren't before --- you might have to\nrename some fields or tables, or resign yourself to double-quoting\nthose names all the time. (I got caught with a field named\n\"timestamp\", for example.)\n\nYou might also want to redesign whatever cross-client locking scheme\nyou are using. I'm in the middle of that for my company --- we used\nto just do \"BEGIN; LOCK TABLE primary_table; blah blah blah; END;\"\nin each client to ensure that concurrent updates to several distinct\ntables never caused deadlocks or apparent inconsistencies. While that\nstill *works* under 6.5, you can get a heck of a lot more concurrency\nif you understand and exploit the MVCC features.\n\n\nI'd recommend bringing up a test 6.5 installation in parallel with your\n6.3.2 installation (just give it a different install directory and\nport number) so that you can experiment before you commit to a\nchangeover. But do make the upgrade. 6.4 was a big win over 6.3.2\nfor stability in my applications, and 6.5 is better.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Sep 1999 09:30:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Stability questions RE 6.5 and 6.3.2 & 6.3.2 problems "
}
] |
[
{
"msg_contents": "I know Tom Lane has done some work on pg_upgrade -- the last message was\non 8/2/99, and left the thread hanging.\n\nWhat is the current status of pg_upgrade in 6.5.x??\n\nI ask because the presence of a working pg_upgrade drastically reduces\nthe work necessary to get postgresql upgrading working properly in an\nRPM environment. In particular, the upgrade from RedHat 6.0 to RedHat\n6.1 is going to be from postgresql 6.4.2 to 6.5.1. I do not foresee\nanyone successfully upgrading a RedHat 5.x installation to 6.1, as other\nthings will break -- although I could be entirely wrong.\n\nIf pg_upgrade is hopelessly broken in 6.5.x, that's ok -- just means a\nlittle more work.\n\nLamar Owen\nWGCR Internet Radio\n",
"msg_date": "Wed, 08 Sep 1999 10:41:36 -0400",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": true,
"msg_subject": "PG_UPGRADE status?"
},
{
"msg_contents": "> I know Tom Lane has done some work on pg_upgrade -- the last message was\n> on 8/2/99, and left the thread hanging.\n> \n> What is the current status of pg_upgrade in 6.5.x??\n> \n> I ask because the presence of a working pg_upgrade drastically reduces\n> the work necessary to get postgresql upgrading working prpoerly in an\n> RPM environment. I particular, the upgrade from RedHat 6.0 to RedHat\n> 6.1 is going to be from postgresql 6.4.2 to 6.5.1. I do not forsee\n> anyone successfully upgrading a RedHat 5.x installation to 6.1, as other\n> things will break -- although I could be entirely wrong.\n\npg_upgrade will not work in converting from <= 6.4.* to 6.5.* because\nthe on-disk data format changed in 6.5. Hopefully, 6.6 will allow\npg_upgrade for 6.5.* databases. We try not to change the on-disk\nformat, but sometimes we have to. MVCC required it for 6.5.*. \n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 8 Sep 1999 14:28:49 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PG_UPGRADE status?"
},
{
"msg_contents": "Bruce Momjian wrote:\n> pg_upgrade will not work in converting from <= 6.4.* to 6.5.* because\n> the on-disk date format changed in 6.5. Hopefully, 6.6 will allow\n> pg_upgrade for 6.5.* databases. We try not to change the on-disk\n> format, but sometimes we have to. MVCC required it for 6.5.*.\n\nOk, answers my question. It would be nice to be able to say:\npg_upgrade --source-pgdata=/var/lib/pgsql-old --pgdata=/var/lib/pgsql\nand have any version PostgreSQL database converted to the newest, but\nmaybe that's a pipe dream. Sure would make upgrades easier, on\neverybody, not just RedHatters -- such as those who have large amounts\nof large objects. \n\nIf I were a better C coder, and had more experience with the various\nversions' on-disk formats, I'd be happy to try to tackle it myself. \nBut, I'm not that great of a C coder, nor do I know the data structures\nwell enough. Oh well.\n\nThanks much!\n\nLamar Owen\nWGCR Internet Radio\n",
"msg_date": "Wed, 08 Sep 1999 15:04:30 -0400",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] PG_UPGRADE status?"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> > pg_upgrade will not work in converting from <= 6.4.* to 6.5.* because\n> > the on-disk date format changed in 6.5. Hopefully, 6.6 will allow\n> > pg_upgrade for 6.5.* databases. We try not to change the on-disk\n> > format, but sometimes we have to. MVCC required it for 6.5.*.\n> \n> Ok, answers my question. It would be nice to be able to say:\n> pg_upgrade --source-pgdata=/var/lib/pgsql-old --pgdata=/var/lib/pgsql\n> and have any version PostgreSQL database converted to the newest, but\n> maybe that's a pipe dream. Sure would make upgrades easier, on\n> everybody, not just RedHatters -- such as those who have large amounts\n> of large objects. \n> \n> If I were a better C coder, and had more experience with the various\n> versions' on-disk formats, I'd be happy to try to tackle it myself. \n> But, I'm not that great of a C coder, nor do I know the data structures\n> well enough. Oh well.\n\n\nYou would have to convert tons of rows of data in raw format. Seems\nlike dump/reload would be easier.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 8 Sep 1999 15:05:20 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PG_UPGRADE status?"
},
{
"msg_contents": "Bruce Momjian wrote:\n>Lamar Owen wrote:\n> > If I were a better C coder, and had more experience with the various\n> > versions' on-disk formats, I'd be happy to try to tackle it myself.\n> > But, I'm not that great of a C coder, nor do I know the data structures\n> > well enough. Oh well.\n> \n> You would have to convert tons of rows of data in raw format. Seems\n> like dump/reload would be easier.\n\nFor normal situations, it is. However, in an RPM upgrade that occurs as\npart of an OS upgrade (say, from RedHat 6.0 to RedHat 6.1), NO daemons\ncan be run during a package upgrade. That doesn't seem too bad until you\nrealize just what an RPM upgrade does....\n\nThe nastiness gets nastier: the RPM upgrade procedure (currently)\ndeletes the old package contents after installing the new package\ncontents, removing the backend version that can read the database. You\nrpm -Uvh postgresql*.rpm across major versions, and you lose data\n(technically, you don't lose the data per se, you just lose the ability\nto read it...). And you possibly lose a postgresql user as a result. I\nknow -- it happened to me with mission-critical data. Fortunately, I\nhad been doing pg_dumpall's, so it wasn't too bad -- but it sure caught\nme off-guard! (admittedly, I was quite a newbie at the time....)\n\nI am working around that -- backing up (using an extremely restrictive\nset of commands, because this script MIGHT be running under a floppy\ninstall image...) the executables and libraries necessary to run the\nolder version BEFORE the newer executables are brought in, backing up\nthe older version's PGDATA, running the older postmaster against the\nolder PGDATA with the older backend on a different port DURING the\nstartup of the NEWER version's init, initdb with the newer version's\nbackend, run the newer postmaster WHILE the older one is running, then\npipe the output of the older pg_dumpall into a newer psql -e template1\nsession. 
Then, I have to verify the integrity of the transferred data,\nstop the older postmaster...etc. Piece of cake? Not quite. Why not let\nthe user do all that? Because most users can't fathom doing all of\nthat.\n\nYou can see how pg_upgrade would be useful in such a scenario, no? I'm\nnot complaining, just curious. With pg_upgrade, during the startup\nscript for the new version, I detect the version of the PGDATA I am\nrunning with, if it's an older version I first make a backup and then\npg_upgrade PGDATA. Simpler, with less likelihood of failure, IMHO. If I\nneed to do an initdb first, not a problem -- I'm already going to have\nthat in there for the case of a fresh install. \n\nLamar Owen\nWGCR Internet Radio\n",
"msg_date": "Wed, 08 Sep 1999 15:35:22 -0400",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] PG_UPGRADE status?"
},
{
"msg_contents": "Lamar Owen <[email protected]> writes:\n> [ messiness required to upgrade versions by piping data from a\n> pg_dumpall to a psql talking to the new version ]\n\nIt'd be considerably less messy, and safer, if you were willing to\nstick the pg_dump output into a file rather than piping it on the fly.\nThen (a) you wouldn't need to run both versions concurrently, and\n(b) you'd have a dump backup if something went wrong during the install.\n\nIf you compressed the dump file, which is easy enough, it'd probably\nalso take less disk space than doing it the other way. A compressed\ndump should usually be a good deal smaller than the database equivalent;\nif you do an on-the-fly transfer then the peak usage is two full\non-disk copies of the database...\n\n> You can see how pg_upgrade would be useful in such a scenario, no?\n\npg_upgrade is hardly a magic panacea --- if the on-disk formats are\nat all different, then you really have little choice short of a dump\nunder the old version and reload under the new. At most pg_upgrade\nmight help automate that process a little more.\n\nWe may have lost the option of pg_upgrade-like upgrades anyway.\nI'm still waiting to hear Vadim's opinion about whether pg_upgrade\ncan be made safe under MVCC.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Sep 1999 17:33:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PG_UPGRADE status? "
},
{
"msg_contents": "> Bruce Momjian wrote:\n> >Lamar Owen wrote:\n> > > If I were a better C coder, and had more experience with the various\n> > > versions' on-disk formats, I'd be happy to try to tackle it myself.\n> > > But, I'm not that great of a C coder, nor do I know the data structures\n> > > well enough. Oh well.\n> > \n> > You would have to convert tons of rows of data in raw format. Seems\n> > like dump/reload would be easier.\n> \n> For normal situations, it is. However, in an RPM upgrade that occurs as\n> part of an OS upgrade (say, from RedHat 6.0 to RedHat 6.1), NO daemons\n> can be run during a package upgrade. That doesn't seem too bad until you\n> realize just what an RPM upgrade does....\n\nWow, doing a database upgrade inside an automated RPM. That's quite a\ntask. From your description, running pg_dumpall and psql to load the\ndata is a real chore in an automated system.\n\nConsidering the changes in alignment of row elements, and index table\nchanges, it would be quite difficult to write a program to convert that\ndata from one format to another. Not impossible, but quite hard.\n\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 8 Sep 1999 17:43:03 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PG_UPGRADE status?"
},
{
"msg_contents": "Tom Lane wrote:\n \n> Lamar Owen <[email protected]> writes:\n> > [ messiness required to upgrade versions by piping data from a\n> > pg_dumpall to a psql talking to the new version ]\n> \n> It'd be considerably less messy, and safer, if you were willing to\n> stick the pg_dump output into a file rather than piping it on the fly.\n> Then (a) you wouldn't need to run both versions concurrently, and\n> (b) you'd have a dump backup if something went wrong during the install.\n\nPipe or file, both versions have to be installed at the same time, so,\neither way, it's messy. But, you are right that putting it in a file\n(which is the way I manually update now) is a little less hairy. But\nnot by much.\n\n> > You can see how pg_upgrade would be useful in such a scenario, no?\n> \n> We may have lost the option of pg_upgrade-like upgrades anyway.\n> I'm still waiting to hear Vadim's opinion about whether pg_upgrade\n> can be made safe under MVCC.\n\nI'm curious as to how difficult it would be to rewrite pg_upgrade to be\nsubstantially more intelligent in its work. Thanks to CVS, we can\naccess the on-disk formats for any version since creation -- ergo, why\ncan't a program be written that can understand all of those formats and\nconvert to the latest and greatest without a backend running? All of\nthe code to deal with any version is out there in CVS already. It's\njust a matter of writing conversion routines that:\n\n0.)\tBackup PGDATA.\n1.)\tDetermine the source PGDATA version.\n2.)\tLoad a storage manager (for reading) corresponding to that version.\n3.)\tLoad a storage manager (for writing) corresponding to latest\nversion.\n4.)\tTransfer tuples sequentially from old to new.\n5.)\tWalk the PGDATA hierarchy for each and every database directory,\nthen update PG_VERSION and other needed files.\n\nWhat am I missing (in concept -- I know there are a lot of details that\nI'm skimming over)? 
The hard part is getting storage readers for every\nmajor version -- and there's not been THAT many on-disk format changes,\nhas there?\n\nNow, I realize that this upgrading would HAVE to be done with no\nbackends running and no transactions outstanding -- IOW, you only want\nthe latest version of a tuple anyway. Was this the issue with\npg_upgrade and MVCC, or am I misunderstanding it?\n\nJust the ramblings of a packager trying to make upgrades a little less\npainful for the masses.\n\nLamar Owen\nWGCR Internet Radio\n",
"msg_date": "Wed, 08 Sep 1999 18:07:09 -0400",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] PG_UPGRADE status?"
},
{
"msg_contents": "Bruce Momjian wrote:\n> Lamar Owen wrote: \n> > For normal situations, it is. However, in an RPM upgrade that occurs as\n> > part of an OS upgrade (say, from RedHat 6.0 to RedHat 6.1), NO daemons\n> > can be run during a package upgrade. That doesn't seem too bad until you\n> > realize just what an RPM upgrade does....\n> \n> Wow, doing a database upgrade inside an automated RPM. That's quite a\n> task. From your description, running pg_dumpall and psql to load the\n> data is a real chore in an automated system.\n\nOliver Elphick has done this for the Debian packages -- but debs don't\nhave some of the draconian restrictions RPM's do. In particular, an\nRPM that is packaged in the Official Boxed Set CANNOT under any\ncircumstances ask for input from the user, nor can it output anything to\nthe user. RPM's that do so get kicked out of the boxed set. And,\nfrankly, PostgreSQL's position in the boxed set is a Big Win.\n\n> Considering the changes in alignment of row elements, and index table\n> changes, it would be quite difficult to write a program to convert that\n> data from one format to another. Not impossible, but quite hard.\n\nReference my message to Tom Lane. Yes, such a program would be hard --\nbut most of it is already written and available in CVS -- thank God for\nCVS! -- all that's needed is to extract the storage managers for each\nmajor version, extract the reading code, etc., to get the on-disk\nrepresentation to an intermediate in-memory form, then write it out with\nthe latest and greatest storage manager (into a different file, of\ncourse, until the upgrade is finished). Unless I badly misunderstand\nthe way PostgreSQL does things, that should work -- but I may not have\nexpressed it the same way I see it in my mind.\n\nI'm talking about a stripped down backend, in essence, whose only\npurpose in life is to copy in and copy out -- but who has the unique\nability to read with one storage manager and write with another. 
You\nsimply choose which storage manager is used for reading by the version of\nthe PGDATA tree.\n\nPiecing together the right CVS code snippets will be a challenge.\n\nLamar Owen\nWGCR Internet Radio\n",
"msg_date": "Wed, 08 Sep 1999 18:17:52 -0400",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] PG_UPGRADE status?"
},
{
"msg_contents": "Lamar Owen <[email protected]> writes:\n> Tom Lane wrote:\n>> It'd be considerably less messy, and safer, if you were willing to\n>> stick the pg_dump output into a file rather than piping it on the fly.\n>> Then (a) you wouldn't need to run both versions concurrently, and\n>> (b) you'd have a dump backup if something went wrong during the install.\n\n> Pipe or file, both versions have to be installed at the same time, so,\n> either way, it's messy.\n\nEr, no, that's the whole point. The easy way to attack this is\n\t(1) While running old installation, pg_dumpall into a file.\n\t(2) Shut down old postmaster, blow away old database files.\n\t(3) Install new version, initdb, start new postmaster.\n\t(4) Restore from pg_dump output file.\n\n> I'm curious as to how difficult it would be to rewrite pg_upgrade to be\n> substantially more intelligent in its work. Thanks to CVS, we can\n> access the on-disk formats for any version since creation -- ergo, why\n> can't a program be written that can understand all of those formats and\n> convert to the latest and greatest without a backend running? All of\n> the code to deal with any version is out there in CVS already.\n\nGo for it ;-).\n\n> Now, I realize that this upgrading would HAVE to be done with no\n> backends running and no transactions outstanding -- IOW, you only want\n> the latest version of a tuple anyway. 
Was this the issue with\n> pg_upgrade and MVCC, or am I misunderstanding it?\n\nThe issue with MVCC is that the state of a tuple isn't solely determined\nby what is in the disk file for its table; you have to also consult\npg_log to see whether recent transactions have been committed or not.\npg_upgrade doesn't import the old pg_log into the new database (and\ncan't very easily, since the new database will have its own), so there's\na problem with recent tuples possibly getting lost.\n\nOTOH, it seems to me that this was true in older releases as well\n(pg_log has always been critical data), so I guess I'm not clear on\nwhy pg_upgrade worked at all, ever...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Sep 1999 18:22:49 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PG_UPGRADE status? "
},
{
"msg_contents": "> Reference my message to Tom Lane. Yes, such a program would be hard --\n> but most of it is already written and available in CVS -- thank God for\n> CVS! -- all that's needed is to extract the storage managers for each\n> major version, extract the reading code, etc, to get the on-disk\n> representation to an intermediate in memory form, then write it out with\n> the latest and greatest storage manager (into a different file, of\n> course, until the upgrade is finished). Unless I badly misunderstand\n> the way PostgreSQL does things, that should work -- but I may not have\n> expressed it the same way I see it in my mind.\n\nDo a cost/benefit analysis on that one. :-)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 8 Sep 1999 18:34:40 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PG_UPGRADE status?"
},
{
"msg_contents": "> The issue with MVCC is that the state of a tuple isn't solely determined\n> by what is in the disk file for its table; you have to also consult\n> pg_log to see whether recent transactions have been committed or not.\n> pg_upgrade doesn't import the old pg_log into the new database (and\n> can't very easily, since the new database will have its own), so there's\n> a problem with recent tuples possibly getting lost.\n> \n> OTOH, it seems to me that this was true in older releases as well\n> (pg_log has always been critical data), so I guess I'm not clear on\n> why pg_upgrade worked at all, ever...\n\nAt the end of pg_upgrade, there are the lines:\n\n\tmv -f $OLDDIR/pg_log data\n\tmv -f $OLDDIR/pg_variable data\n\t\n\techo \"You may remove the $OLDDIR directory with 'rm -r $OLDDIR'.\"\n\texit 0\n\nThis is used to get the proper transaction status into the new\ninstallation. Is the VACUUM added to pg_upgrade necessary?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 8 Sep 1999 18:40:52 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PG_UPGRADE status"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> pg_upgrade doesn't import the old pg_log into the new database (and\n>> can't very easily, since the new database will have its own), so there's\n>> a problem with recent tuples possibly getting lost.\n\n> At the end of pg_upgrade, there are the lines:\n> \tmv -f $OLDDIR/pg_log data\n> \tmv -f $OLDDIR/pg_variable data\n> This is used to get the proper transaction status into the new\n> installation. Is the VACUUM added to pg_upgrade necessary?\n\nI'm sorry, I had that backwards (knew I shoulda checked the code).\n\npg_upgrade *does* overwrite the destination pg_log, and what that\nmeans is that incoming tuples in user relations should be fine.\nWhat's at risk is recently-committed tuples in the system relations,\nnotably the metadata that pg_upgrade has just inserted for those\nuser relations.\n\nThe point of the VACUUM is to try to ensure that everything\nin the system relations is marked as certainly committed (or\ncertainly dead) before we discard the pg_log information.\nI don't recall ever hearing from Vadim about whether that\nis a trustworthy way of doing it, however.\n\nOne thing that occurs to me just now is that we probably need\nto vacuum *each* database in the new installation. The patch\nI added to pg_dump doesn't do the job because it only vacuums\nwhichever database was dumped last by pg_dumpall...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Sep 1999 20:27:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PG_UPGRADE status "
},
{
"msg_contents": "> pg_upgrade *does* overwrite the destination pg_log, and what that\n> means is that incoming tuples in user relations should be fine.\n> What's at risk is recently-committed tuples in the system relations,\n> notably the metadata that pg_upgrade has just inserted for those\n> user relations.\n> \n> The point of the VACUUM is to try to ensure that everything\n> in the system relations is marked as certainly committed (or\n> certainly dead) before we discard the pg_log information.\n> I don't recall ever hearing from Vadim about whether that\n> is a trustworthy way of doing it, however.\n> \n> One thing that occurs to me just now is that we probably need\n> to vacuum *each* database in the new installation. The patch\n> I added to pg_dump doesn't do the job because it only vacuums\n> whichever database was dumped last by pg_dumpall...\n\nI see what you are saying now. pg_upgrade basically replaces the system\ntables, but keeps the user data and pg_log. So, if you do initdb, and\ncreate your user table, then recover the user data tables and pg_log,\nand if pg_log has a transaction marked as aborted that has the same\nnumber as one of the user create table statements, it would not see the\ntable. I see why the vacuum is needed.\n\nI wrote pg_upgrade as an attempt to do upgrades without dumping. I\nheard so little about it when it was introduced, I thought it was not\nreally being used. When I disabled it for 6.5, I found out how many\npeople were using it without incident.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 8 Sep 1999 23:07:59 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PG_UPGRADE status"
},
{
"msg_contents": "Bruce Momjian wrote:\n> At the end of pg_upgrade, there are the lines:\n> \n> mv -f $OLDDIR/pg_log data\n> mv -f $OLDDIR/pg_variable data\n> \n> echo \"You may remove the $OLDDIR directory with 'rm -r $OLDDIR'.\"\n> exit 0\n> \n> This is used to get the proper transaction status into the new\n> installation. Is the VACUUM added to pg_upgrade necessary?\n\nYou know, up until this message I had the mistaken impression that\npg_upgrade was a C program... Boy was I wrong. And no wonder it's\nhairy. I should have read the source first -- but nooo, I couldn't do\nthat. Open mouth, insert foot.\n\nI _am_ contemplating a C version that would do far more than just\nupgrades. I'm thinking of a pg_repair utility that could rebuild and\nrepair the on-disk structures. It would also facilitate database\nrecovery after a crash -- might be a real bear to do right. Comments?\n\nLamar Owen\n",
"msg_date": "Thu, 09 Sep 1999 11:51:09 -0400",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] PG_UPGRADE status"
},
{
"msg_contents": "Tom Lane wrote:\n> Lamar Owen <[email protected]> writes:\n> > Pipe or file, both versions have to be installed at the same time, so,\n> > either way, it's messy.\n> \n> Er, no, that's the whole point. The easy way to attack this is\n> (1) While running old installation, pg_dumpall into a file.\n> (2) Shut down old postmaster, blow away old database files.\n> (3) Install new version, initdb, start new postmaster.\n> (4) Restore from pg_dump output file.\n\nWould to God it were that easy! During an RPM upgrade, I have to\nobserve the following:\n1.)\tThe user types rpm -Uvh postgresql*.i386.rpm, or executes an upgrade\nfrom an older RedHat version to a newer RedHat version.\n\n2.)\tThe first rpm's preinstall script starts running. The old version\nof that rpm is still installed at this point, BUT I CAN'T EXECUTE ANY\nDAEMONS -- the upgrade MIGHT be running in the wicked chroot environment\nof the RedHat installer, with its restrictive set of commands. So, I\nCANNOT start a postmaster, nor can I be assured that a postmaster is\nrunning -- according to RedHat, since it could be running in the chroot\ninstaller, I can't even run a ps to SEE if postmaster is running\n(problems with a chrooted /proc...). Therefore, the preinstall script\nCANNOT execute pg_dumpall. I can't even run a standalone backend --\npostmaster MIGHT be running.... And, I can't test to see if I'm running\nin the installer or not... ;-( The only thing I CAN do is check /tmp for\nthe lock file.\n\n3.)\tOnce the preinstall script is finished, rpm blows in the first rpm's\nfiles. This of course overwrites the previous version.\n\n4.)\tOnce all files are blown in, the postinstall script can run. It has\nthe same restrictions that the preinstall script does, since the rpm\nCOULD be running in the chroot installer.\n\n5.)\tRepeat 2-4 for the remainder of the rpms.\n\nIf it weren't for the restrictions, it wouldn't be too hard. 
I think I\nhave it mostly solved -- I just have to clean up some code and do\ntesting. I'm using a two-stage plan -- the preinstall of the main\npackage (which only contains clients, client libraries, and\ndocumentation) detects whether an old version of PGDATA is there or\nnot. If it is, a backup of the PGDATA tree is performed. The hard part\nthere is making sure a backend isn't running -- I haven't figured out\nhow to reliably detect a running postmaster without /proc or ps. The\nlock file would seem to be a reliable flag -- but, what if the last\ninvocation of postmaster crashed for some reason, left the lockfile, and\nthe user, on the next boot, decides to upgrade versions of RedHat....\n\nStage two is performed in the server package's startup script\n(/etc/rc.d/init.d/postgresql) -- it detects the backup, cleans up\nPGDATA, initdb's, dumps the data from the old PGDATA (with the old\nbinaries), and restores the data with the new binaries.\n\n> > convert to the latest and greatest without a backend running? All of\n> > the code to deal with any version is out there in CVS already.\n> \n> Go for it ;-).\n\nFor some reason, I just KNEW you'd say that :-). Given six months of\nspare time, I probably could. But, in the meantime, people's databases\nare getting farkled by rpm upgrades, so I have to solve the problem.\n\n> > the latest version of a tuple anyway. 
Was this the issue with\n> > pg_upgrade and MVCC, or am I misunderstanding it?\n> \n> The issue with MVCC is that the state of a tuple isn't solely determined\n> by what is in the disk file for its table; you have to also consult\n> pg_log to see whether recent transactions have been committed or not.\n> pg_upgrade doesn't import the old pg_log into the new database (and\n> can't very easily, since the new database will have its own), so there's\n> a problem with recent tuples possibly getting lost.\n\nThe behavior I'm describing for pg_upgrade (let me name my program\nsomething different, for clarity, pg_data_uprev) is to take an old\nPGDATA tree, and convert it to new format into a blank, non-initdbed\ntree, and get a consistent new format PGDATA tree. Thus, there are no\nexisting files at all to worry with. Visualize a filter -- old-PGDATA\n-> pg_data_uprev -> new-PGDATA, with no backends involved at all.\n\nLamar Owen\nWGCR Internet Radio\n",
"msg_date": "Thu, 09 Sep 1999 11:59:26 -0400",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] PG_UPGRADE status?"
},
{
"msg_contents": "> You know, up until this message I had the mistaken impression that\n> pg_upgrade was a C program... Boy was I wrong. And no wonder it's\n> hairy. I should have read the source first -- but nooo, I couldn't do\n> that. Open mouth, insert foot.\n\nYes, a quick few hour hack to do a quick upgrade. Worked better than I\nthought it would.\n\n> I _am_ contemplating a C version that would do far more than just\n> upgrades. I'm thinking of a pg_repair utility that could rebuild and\n> repair the on-disk structures. It would also facilitate database\n> recovery after a crash -- might be a real bear to do right. Comments?\n\nA bear.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 9 Sep 1999 12:20:28 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PG_UPGRADE status"
},
{
"msg_contents": "> 2.)\tThe first rpm's preinstall script starts running. The old version\n> of that rpm is still installed at this point, BUT I CAN'T EXECUTE ANY\n> DAEMONS -- the upgrade MIGHT be running in the wicked chroot environment\n> of the RedHat installer, with its restrictive set of commands. So, I\n> CANNOT start a postmaster, nor can I be assured that a postmaster is\n> running -- according to RedHat, since it could be running in the chroot\n> installer, I can't even run a ps to SEE if postmaster is running\n> (problems with a chrooted /proc...). Therefore, the preinstall script\n> CANNOT execute pg_dumpall. I can't even run a standalone backend --\n> postmaster MIGHT be running.... And, I can't test to see if I'm running\n> in the installer or not... ;-( The only thing I CAN do is check /tmp for\n> the lock file.\n\nThis seems almost impossible to handle. I have enough trouble writing\nPostgreSQL C code when I have total control over the environment.\n\nBTW, you can check for a running backend by trying to telnet to the 5432\nport, or trying to do a connection to the unix domain socket.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 9 Sep 1999 12:26:15 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PG_UPGRADE status?"
},
{
"msg_contents": ">> I _am_ contemplating a C version that would do far more than just\n>> upgrades. I'm thinking of a pg_repair utility that could rebuild and\n>> repair the on-disk structures. It would also facilitate database\n>> recovery after a crash -- might be a real bear to do right. Comments?\n\n> A bear.\n\nIndeed, but also an incredibly valuable contribution if you can pull it\noff. If you want to tackle this task, don't let us discourage you!\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 09 Sep 1999 12:49:25 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PG_UPGRADE status "
},
{
"msg_contents": "Lamar Owen <[email protected]> writes:\n> 2.)\tThe first rpm's preinstall script starts running. The old version\n> of that rpm is still installed at this point, BUT I CAN'T EXECUTE ANY\n> DAEMONS -- the upgrade MIGHT be running in the wicked chroot environment\n> of the RedHat installer, with its restrictive set of commands. So, I\n> CANNOT start a postmaster, nor can I be assured that a postmaster is\n> running -- according to RedHat, since it could be running in the chroot\n> installer, I can't even run a ps to SEE if postmaster is running\n> (problems with a chrooted /proc...). Therefore, the preinstall script\n> CANNOT execute pg_dumpall.\n\nchroot? Where are you chrooted to? It would seem from your description\nthat neither the preinstall nor postinstall scripts can even see the\n/usr/local/pgsql directory tree, which would make it impossible to do\nanything --- and would be an incredibly stupid way to design an\ninstaller system, so I have to assume I'm misreading what you wrote.\n\nAlso, if the pre/postinstall scripts cannot contact existing processes,\nthen there is no hope of killing/restarting any kind of daemon process,\nnot just Postgres in particular. The restrictions you claim are there\nwould make RPMs unusable for upgrading *anything* that has a\ncontinuously running server process. Is Red Hat really that far out\nin left field?\n\n> I can't even run a standalone backend --\n> postmaster MIGHT be running.... And, I can't test to see if I'm running\n> in the installer or not... ;-( The only thing I CAN do is check /tmp for\n> the lock file.\n\nchroot would generally imply that you can't see the regular /tmp dir,\neither.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 09 Sep 1999 13:05:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PG_UPGRADE status? "
},
{
"msg_contents": "Tom Lane wrote:\n \n> Lamar Owen <[email protected]> writes:\n> > DAEMONS -- the upgrade MIGHT be running in the wicked chroot environment\n> > of the RedHat installer, with its restrictive set of commands. So, I\n\n> chroot? Where are you chrooted to? It would seem from your description\n> that neither the preinstall nor postinstall scripts can even see the\n> /usr/local/pgsql directory tree, which would make it impossible to do\n> anything --- and would be an incredibly stupid way to design an\n> installer system, so I have to assume I'm misreading what you wrote.\n\nI think you are misreading what I wrote, which is not at all surprising\n-- it took me awhile to grok it.\n\nNo, during the installation of a version of RedHat Linux, the installer\n(which boots off of either a floppy set or a virtual El Torito image on\nCD) installs all the RPM's to the new root filesystem under chroot to\nthat new root filesystem. Thus, the real root is /dev/fd0 or whatever\nthe El Torito image's /dev entry is. The new root is mounted in a\ndirectory off of the real root, and the rpm is installed with a chroot\nto the new very incomplete root. Fortunately, PostgreSQL gets \ninstalled down the list quite a ways, as P is after the halfway point.\n\nTo add to the confusion, there IS no /usr/local/pgsql -- RedHat has\nmunged the installation around to conform to the FSSTND for Linux --\nmeaning that the PostgreSQL binaries go in /usr/bin, the libraries go in\n/usr/lib, the templates and other libraries that would ordinarily go in\nPGLIB go in /usr/lib/pgsql, and PGDATA is /var/lib/pgsql. The goal is a\nread-only /usr, but they are a little ways from that. And that is OK, as\nRPM keeps a database of what file belongs to what package.\n\n> Also, if the pre/postinstall scripts cannot contact existing processes,\n> then there is no hope of killing/restarting any kind of daemon process,\n> not just Postgres in particular. 
The restrictions you claim are there\n> would make RPMs unusable for upgrading *anything* that has a\n> continuously running server process.\n\nThe restrictions are only on RPM's that ship as part of the Official\nBoxed Set. RPM's are designed to be totally self-contained --\ndependencies are rigorously specified (such as the PostgreSQL RPM's\ndependency upon chkconfig to set the init sequence number), and\nassumptions are nil. I can do very little in the pre and post scripts\n-- making an offline backup of PGDATA and the essential executables and\nlibraries needed to restore the old PGDATA is the extent of it. Of\ncourse, I then have to contend with the user who upgrades with\npostmaster running.... \n\nTo summarize: RPM's that ship as part of the RedHat Official Boxed Set\n(OBS) (which PostgreSQL does), must contend with two very different\ninstallation environments:\n1.)\tThe chroot installer at initial operating system install time, and\nits OS upgrade alter ego;\n2.)\tThe environment of rpm -U, whether initiated by the user or by proxy\n(such as AutoRPM), which is an entirely NORMAL environment where you can\ndo anything you want.\n\nOther RPM's that do not ship as part of the OBS do not have the\nrestrictions of 1. However, being in the OBS is a very desirable\nplace, as that assures that ALL RedHat users have the opportunity to use\nPostgreSQL -- and, in fact, PostgreSQL is the ONLY RDBMS RedHat is\nshipping, giving us tremendous exposure.\n\n> Is Red Hat really that far out\n> in left field?\n\nIf you want to call it left field, yes, they are. RPM's are the HTML of\nthe package managers -- the author has little to no control over\npresentation -- that is, package installation order, or, for that\nmatter, whether the install time scripts even get run (rpm --noscripts,\nanyone...). 
It is a very _different_ environment.\n\n> chroot would generally imply that you can't see the regular /tmp dir,\n> either.\n\nThe mounted root /tmp is visible BECAUSE of the chroot in the installer\n-- but Bruce's suggestion of connecting to port 5432 is a better idea. \nAlthough, in the installer, I can't do that either... ;-(. I guess I\nneed to first detect whether we're in the installer or not. And RedHat\ndoesn't want me to be able to do that. Catch 22.\n\nThanks -- the discussion is helping me find holes in my strategy.\n\nLamar Owen\nWGCR Internet Radio\n",
"msg_date": "Thu, 09 Sep 1999 13:50:10 -0400",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": true,
"msg_subject": "RPM restrictions (was:Re: [HACKERS] PG_UPGRADE status?)"
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> pg_upgrade doesn't import the old pg_log into the new database (and\n> >> can't very easily, since the new database will have its own), so there's\n> >> a problem with recent tuples possibly getting lost.\n> \n> > At the end of pg_upgrade, there are the lines:\n> > \tmv -f $OLDDIR/pg_log data\n> > \tmv -f $OLDDIR/pg_variable data\n> > This is used to get the proper transaction status into the new\n> > installation. Is the VACUUM added to pg_upgrade necessary?\n> \n> I'm sorry, I had that backwards (knew I shoulda checked the code).\n> \n> pg_upgrade *does* overwrite the destination pg_log, and what that\n> means is that incoming tuples in user relations should be fine.\n> What's at risk is recently-committed tuples in the system relations,\n> notably the metadata that pg_upgrade has just inserted for those\n> user relations.\n> \n> The point of the VACUUM is to try to ensure that everything\n> in the system relations is marked as certainly committed (or\n> certainly dead) before we discard the pg_log information.\n> I don't recall ever hearing from Vadim about whether that\n> is a trustworthy way of doing it, however.\n> \n> One thing that occurs to me just now is that we probably need\n> to vacuum *each* database in the new installation. The patch\n> I added to pg_dump doesn't do the job because it only vacuums\n> whichever database was dumped last by pg_dumpall...\n> \n\nI have modified pg_upgrade to vacuum all databases, as you suggested.\n\n\tcopy pg_shadow from stdin;\n\t\\.\n->\tVACUUM;\n\t\\connect template1 postgres\n\tcreate database test;\n\t\\connect test postgres\n\t\\connect - postgres\n\tCREATE TABLE \"t1\" (\n\nI left your vacuum in there to vacuum the last database. This should\nhelp.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 28 Sep 1999 14:03:56 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PG_UPGRADE status"
}
] |
[
{
"msg_contents": "I have configured, built, and installed PostgreSQL 6.5.1 using:\n\n  --with-tcl and --with-tkconfig=<dir>\n\nAs far as I have looked, everything was built and installed without a\nhitch.  The pltcl library is in the right location, and /etc/ld.so.conf\nlists that directory.\n\nWhen I define a function using pltcl I get the following:\n\n  ERROR:  Unrecognized language specified in a CREATE FUNCTION:\n  'pltcl'.  Recognized languages are sql, C, internal and the\n  created procedural languages.\n\nThe docs say that pltcl is enabled if it is built with the TCL\noption.  What am I missing?\n\nThanks\n-- \nPatrick Logan [email protected]\n",
"msg_date": "Wed, 08 Sep 1999 16:49:09 GMT",
"msg_from": "Patrick Logan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Problem enabling pltcl"
},
{
"msg_contents": "Patrick Logan wrote:\n> ERROR: Unrecognized language specified in a CREATE FUNCTION:\n> 'pltcl'. Recognized languages are sql, C, internal and the\n> created procedural languages.\n> \n> The docs say that pltcl is enabled if it is built with the TCL\n> option. What am I missing?\n\nCREATE LANGUAGE (command line utility 'createlang'). See the regression\ntest shell script (src/test/regress/regress.sh) for an example using\nplpgsql. The PL's are not created and installed by default, apparently.\n\nLamar Owen\nWGCR Internet Radio\n",
"msg_date": "Wed, 08 Sep 1999 13:30:10 -0400",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problem enabling pltcl"
},
{
"msg_contents": "Lamar Owen <[email protected]> wrote:\n: Patrick Logan wrote:\n:> ERROR: Unrecognized language specified in a CREATE FUNCTION:\n:> 'pltcl'. Recognized languages are sql, C, internal and the\n:> created procedural languages.\n:> \n:> The docs say that pltcl is enabled if it is built with the TCL\n:> option. What am I missing?\n\n: CREATE LANGUAGE (command line utility 'createlang'). See the regression\n: test shell script (src/test/regress/regress.sh) for an example using\n: plpgsql. The PL's are not created and installed by default, apparently.\n\nThanks. I also had to create the handler function as per the\ndocumentation for creating new procedural language interfaces.\n\nBoy, the documentation sure read to me like all that was supposed to\nbe done automatically by the Makefile when configured for pltcl.\n\nNot a big deal, but it wasn't clear to me this had to be done for each\ndatabase created. Is this a bug in the documentation?\n\n-- \nPatrick Logan [email protected]\n",
"msg_date": "Wed, 08 Sep 1999 20:32:09 GMT",
"msg_from": "Patrick Logan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Problem enabling pltcl"
},
{
"msg_contents": "Patrick Logan <[email protected]> writes:\n> : CREATE LANGUAGE (command line utility 'createlang'). See the regression\n> : test shell script (src/test/regress/regress.sh) for an example using\n> : plpgsql. The PL's are not created and installed by default, apparently.\n\n> Boy, the documentation sure read to me like all that was supposed to\n> be done automatically by the Makefile when configured for pltcl.\n> Not a big deal, but it wasn't clear to me this had to be done for each\n> database created. Is this a bug in the documentation?\n\nProbably. You should be able to just use the createlang utility without\nworrying about the details, but I don't think that the install process\nought to do it for you. The procedural languages are supposed to be\ninstallable on a per-database basis, in case you want them in some\ndatabases and not others.\n\nYou *can* do a one-time install of a language for a whole installation,\nby installing the language into template1 before you create any working\ndatabases --- this works because \"create database\" clones whatever is in\ntemplate1. (I believe that holds for anything you stick in template1,\nBTW, not just languages.)\n\nBut if the install process were to install pltcl into template1 just\nbecause you had chosen to build pltcl, then you'd lose the option of\nonly having it in some of your databases.\n\nBottom line: I think the install process is correct as is, but the docs\nneed to be updated to mention these considerations.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Sep 1999 17:42:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Problem enabling pltcl "
},
{
"msg_contents": ">\n> Patrick Logan wrote:\n> > ERROR: Unrecognized language specified in a CREATE FUNCTION:\n> > 'pltcl'. Recognized languages are sql, C, internal and the\n> > created procedural languages.\n> >\n> > The docs say that pltcl is enabled if it is built with the TCL\n> > option. What am I missing?\n>\n> CREATE LANGUAGE (command line utility 'createlang'). See the regression\n> test shell script (src/test/regress/regress.sh) for an example using\n> plpgsql. The PL's are not created and installed by default, apparently.\n\n Yepp - it's a doc mistake because first I made it that way\n and we decided later not to install by default into template1\n and provide createlang instead.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 9 Sep 1999 12:53:10 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problem enabling pltcl"
},
{
"msg_contents": "Tom Lane wrote:\n\n> But if the install process were to install pltcl into template1 just\n> because you had chosen to build pltcl, then you'd lose the option of\n> only having it in some of your databases.\n\n    You still have that option even if it is installed in\n    template1. But you must do it the other way round and use\n    destroydb on the databases where you don't want it :-)\n\n> Bottom line: I think the install process is correct as is, but the docs\n> need to be updated to mention these considerations.\n\n    The docs were right for a short time during v6.5\n    development.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me.                                  #\n#========================================= [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 9 Sep 1999 12:58:56 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Problem enabling pltcl"
},
{
"msg_contents": "> Bottom line: I think the install process is correct as is, but the docs\n> need to be updated to mention these considerations.\n\nAny takers? Look in doc/src/sgml/*.sgml for the doc sources; usually\ngrepping for a phrase is enough to figure out which source file you\nneed to change.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 09 Sep 1999 12:33:14 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Problem enabling pltcl"
}
] |
[
{
"msg_contents": "Hi\n\nOkee, I have caught the vacuum analyse crash that was giving me a load of\ngrief\n\nFirstly, I create the following database\n\n$psql crashtest\nWelcome to the POSTGRESQL interactive sql monitor:\n  Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n[PostgreSQL 6.5.1 on i586-pc-linux-gnu, compiled by gcc -ggdb ]\n\n   type \\? for help on slash commands\n   type \\q to quit\n   type \\g or terminate with semicolon to execute query\n You are currently connected to the database: crashtest\n\ncrashtest=> select version();\nversion \n-------------------------------------------------------------\nPostgreSQL 6.5.1 on i586-pc-linux-gnu, compiled by gcc -ggdb \n(1 row)\n\ncrashtest=> \\d\nDatabase = crashtest\n +------------------+----------------------------------+----------+\n | Owner            | Relation                         | Type     |\n +------------------+----------------------------------+----------+\n | test             | testtable                        | table    |\n +------------------+----------------------------------+----------+\n\ncrashtest=> select distinct * from testtable;\nvara|varb\n----+----\nabc |def \n(1 row)\n\ncrashtest=> select count(*) from testtable;\n count\n------\n319800\n(1 row)\n\n-----------------------------------\n\nNow, I then create a text file with 780 (just a random large number) new\nlines that are exactly the same as the ones already in the database, so they\ncan be added with the copy command.\n\n(the 319,800 rows I have in my database were actually created using copy\nof this file, but I think that that is irrelevant).\n\nI then run the following script (which you will note just runs forever):\n\n------------------------------------------------------\n#!/bin/sh -\n\n(\n while [ \"1\" = \"1\" ]\n do\n\n echo \"create temp table temptest ( vara text, varb text );\"\n echo \"copy temptest from '/tmp/copyeffort';\"\n\n echo \"insert into testtable select * from temptest;\"\n \n echo \"drop table temptest;\"\n\n sleep 1\n\n done\n\n) | psql 
crashtest\n\n-----------------------------------------------------\n\nI attach gdb to the backend that is handling this task\n\nI then start a psql session to the same database\n\nThen I attach another GDB to this backend too, so I have GDB on both\n\nThen, in psql, I run and get the following result:\n\n-----------\n\ncrashtest=> vacuum analyze;\nNOTICE: Rel pg_type: TID 4/3: InsertTransactionInProgress 129915 - can't shrink relation\nNOTICE: Rel pg_attribute: TID 23/5: InsertTransactionInProgress 129915 - can't shrink relation\nNOTICE: Rel pg_attribute: TID 23/6: InsertTransactionInProgress 129915 - can't shrink relation\nNOTICE: Rel pg_attribute: TID 23/7: InsertTransactionInProgress 129915 - can't shrink relation\nNOTICE: Rel pg_attribute: TID 23/8: InsertTransactionInProgress 129915 - can't shrink relation\nNOTICE: Rel pg_attribute: TID 23/9: InsertTransactionInProgress 129915 - can't shrink relation\nNOTICE: Rel pg_attribute: TID 23/10: InsertTransactionInProgress 129915 - can't shrink relation\nNOTICE: Rel pg_attribute: TID 23/11: InsertTransactionInProgress 129915 - can't shrink relation\nNOTICE: Rel pg_attribute: TID 23/12: InsertTransactionInProgress 129915 - can't shrink relation\nNOTICE: Rel pg_class: TID 3/22: InsertTransactionInProgress 129915 - can't shrink relation\nNOTICE: AbortTransaction and not in in-progress state \n\n------------\n\nAt this point, the backend attached to the script doing the creates, inserts,\nand drops has the following:\n\n\nProgram received signal SIGUSR2, User defined signal 2.\n0x4018db4d in __libc_fsync ()\n(gdb) where\n#0 0x4018db4d in __libc_fsync ()\n#1 0x80d5908 in pg_fsync (fd=3) at fd.c:202\n#2 0x80d615a in FileSync (file=3) at fd.c:876\n#3 0x80dc9ed in mdcommit () at md.c:796\n#4 0x80dd1bc in smgrcommit () at smgr.c:375\n#5 0x80d47f7 in FlushBufferPool (StableMainMemoryFlag=0) at bufmgr.c:1260\n#6 0x8078755 in RecordTransactionCommit () at xact.c:636\n#7 0x8078919 in CommitTransaction () at xact.c:940\n#8 0x8078ac1 
in CommitTransactionCommand () at xact.c:1177\n#9 0x80df0cf in PostgresMain (argc=-1073742286, argv=0xbffff7a0, real_argc=7, \n real_argv=0xbffffd04) at postgres.c:1679\n#10 0x80c91ba in DoBackend (port=0x81ea4a0) at postmaster.c:1628\n#11 0x80c8cda in BackendStartup (port=0x81ea4a0) at postmaster.c:1373\n#12 0x80c8429 in ServerLoop () at postmaster.c:823\n#13 0x80c7f67 in PostmasterMain (argc=7, argv=0xbffffd04) at postmaster.c:616\n#14 0x80a0986 in main (argc=7, argv=0xbffffd04) at main.c:97\n#15 0x400fccb3 in __libc_start_main (main=0x80a0920 <main>, argc=7, \n argv=0xbffffd04, init=0x8061360 <_init>, fini=0x810d63c <_fini>, \n rtld_fini=0x4000a350 <_dl_fini>, stack_end=0xbffffcfc)\n at ../sysdeps/generic/libc-start.c:78\n(gdb) \nProgram received signal SIGUSR2, User defined signal 2.\n0x4018db4d in __libc_fsync ()\n(gdb) where\n#0 0x4018db4d in __libc_fsync ()\n#1 0x80d5908 in pg_fsync (fd=3) at fd.c:202\n#2 0x80d615a in FileSync (file=3) at fd.c:876\n#3 0x80dc9ed in mdcommit () at md.c:796\n#4 0x80dd1bc in smgrcommit () at smgr.c:375\n#5 0x80d47f7 in FlushBufferPool (StableMainMemoryFlag=0) at bufmgr.c:1260\n#6 0x8078755 in RecordTransactionCommit () at xact.c:636\n#7 0x8078919 in CommitTransaction () at xact.c:940\n#8 0x8078ac1 in CommitTransactionCommand () at xact.c:1177\n#9 0x80df0cf in PostgresMain (argc=-1073742286, argv=0xbffff7a0, real_argc=7, \n real_argv=0xbffffd04) at postgres.c:1679\n#10 0x80c91ba in DoBackend (port=0x81ea4a0) at postmaster.c:1628\n#11 0x80c8cda in BackendStartup (port=0x81ea4a0) at postmaster.c:1373\n#12 0x80c8429 in ServerLoop () at postmaster.c:823\n#13 0x80c7f67 in PostmasterMain (argc=7, argv=0xbffffd04) at postmaster.c:616\n#14 0x80a0986 in main (argc=7, argv=0xbffffd04) at main.c:97\n#15 0x400fccb3 in __libc_start_main (main=0x80a0920 <main>, argc=7, \n argv=0xbffffd04, init=0x8061360 <_init>, fini=0x810d63c <_fini>, \n rtld_fini=0x4000a350 <_dl_fini>, stack_end=0xbffffcfc)\n at ../sysdeps/generic/libc-start.c:78\n(gdb) 
\n\nAbout 3 - 5 seconds later, the gdb attached to the vacuuming psql gets the\nfollowing - Bear in mind this is a DIFFERENT BACKEND\n\nProgram received signal SIGSEGV, Segmentation fault.\nAllocSetReset (set=0x0) at aset.c:159\naset.c:159: No such file or directory.\n(gdb) where\n#0 AllocSetReset (set=0x0) at aset.c:159\n#1 0x810ac12 in EndPortalAllocMode () at portalmem.c:938\n#2 0x8078833 in AtAbort_Memory () at xact.c:800\n#3 0x80789ff in AbortTransaction () at xact.c:1026\n#4 0x8078aef in AbortCurrentTransaction () at xact.c:1243\n#5 0x80deed6 in PostgresMain (argc=-1073742288, argv=0xbffff7a0, real_argc=7, \n real_argv=0xbffffd04) at postgres.c:1550\n#6 0x80c91ba in DoBackend (port=0x81ea4a0) at postmaster.c:1628\n#7 0x80c8cda in BackendStartup (port=0x81ea4a0) at postmaster.c:1373\n#8 0x80c8429 in ServerLoop () at postmaster.c:823\n#9 0x80c7f67 in PostmasterMain (argc=7, argv=0xbffffd04) at postmaster.c:616\n#10 0x80a0986 in main (argc=7, argv=0xbffffd04) at main.c:97\n#11 0x400fccb3 in __libc_start_main (main=0x80a0920 <main>, argc=7, \n argv=0xbffffd04, init=0x8061360 <_init>, fini=0x810d63c <_fini>, \n rtld_fini=0x4000a350 <_dl_fini>, stack_end=0xbffffcfc)\n at ../sysdeps/generic/libc-start.c:78\n(gdb) \n\n\n------------------------------------------------------------------------\n\nThis is REPRODUCIBLE\n\nI have run this through 5 times and it only managed to vacuum two times.\n\nNow, I have the two GDBs still running, so if you need any values such\nas variable states, let me know and I will supply them. (I'll be out of\nthe house till about 7.30pm UK time tomorrow, so it will have to be after\nthen).\n\nHopefully, this will be something that can get fixed for 6.5.2 as it is a BIG\nproblem for me, as it happens when I am building my postgresql search\nengine.\n\nThanks\n\t\t\t\t\t\tM Simms\n",
"msg_date": "Thu, 9 Sep 1999 02:43:50 +0100 (BST)",
"msg_from": "Michael Simms <[email protected]>",
"msg_from_op": true,
"msg_subject": "Vacuum analyze bug CAUGHT"
},
{
"msg_contents": "Michael Simms <[email protected]> writes:\n> Okee, I have caught the vacuum analyse crash that was giving me a load\n> of grief\n\nI spent a good while last night trying to duplicate your report, but\ncouldn't on either current sources or latest REL6_5 branch. Maybe that\nmeans we fixed the problem already --- but I think it's more likely that\nthere's a platform dependency involved, or some additional enabling\ncondition that you forgot to mention. So you'll have to keep poking\nat it. However, I do have some comments and suggestions.\n\nStarting at the end, the final crash:\n\n> Program received signal SIGSEGV, Segmentation fault.\n> AllocSetReset (set=0x0) at aset.c:159\n> aset.c:159: No such file or directory.\n> (gdb) where\n> #0 AllocSetReset (set=0x0) at aset.c:159\n> #1 0x810ac12 in EndPortalAllocMode () at portalmem.c:938\n> #2 0x8078833 in AtAbort_Memory () at xact.c:800\n> #3 0x80789ff in AbortTransaction () at xact.c:1026\n> #4 0x8078aef in AbortCurrentTransaction () at xact.c:1243\n> #5 0x80deed6 in PostgresMain (argc=-1073742288, argv=0xbffff7a0, real_argc=7, \n> real_argv=0xbffffd04) at postgres.c:1550\n\nappears to be the same bug I alluded to before: memory allocation isn't\ncleaned up properly if elog(ERROR) is executed outside a transaction.\nI know how to fix this, and will do so before 6.5.2, but fixing it will\njust prevent a coredump after an error has already occurred. What we\nneed to be looking for in your setup is the reason why an error is being\nreported at all.\n\nThe first thing you can do is look to see what the error message is ---\nit should be in the postmaster's stderr logfile, even though 6.5.* libpq\nomits to display it because of the crash. Another useful thing would be\nto set a breakpoint at elog() so that you can examine the context of the\nerror report. 
(Actually, since vacuum generates lots of elog(NOTICE)\nand elog(DEBUG), the breakpoint had better be further down in elog,\nperhaps where it's about to longjmp back to PostgresMain.)\n\nBTW, VACUUM runs a separate transaction for each table it works on,\nso although most of its work is done inside a transaction, there are\nshort intervals between tables where it's not in one. The error must\nbe getting reported during one of these intervals. That narrows things\ndown a lot.\n\nNow, about the SIGUSR2. That's mighty suggestive, but it's not\nnecessarily an indication of a bug. There are two reasons why a\nSIGUSR2 would be delivered to a backend. One is LISTEN/NOTIFY, which\nI assume we can discount, since you'd have to have explicitly done it.\nThe other is that the SI message buffer manager sends a broadcast\nSIGUSR2 to all the backends if it thinks its message buffer is getting\ndangerously full --- presumably that's what you saw happening. So the\nquestions raised are (a) why is the buffer getting full, and (b) could\nit have actually overflowed later, and if so did the overflow recovery\ncode work?\n\nBackground: the SI message buffer is a scheme for notifying backends\nto discard cached copies of tuples from the system tables. Whenever\na backend modifies a tuple from the system tables, it has to send out\nan SI (\"shared cache invalidation\") message telling the other backends\nit has changed tuple X in table Y, so that they discard their\nno-longer-accurate cached copies, if any. Messages in the buffer can be\ndiscarded only after all active backends have read them. 
Stuff like\nVACUUM tends to produce a lot of SI messages, but table\ncreation/deletion causes some too.\n\nThe SI buffer is global across an installation (one postmaster and its\nchildren), *not* per-database, so even if you only had these two\nbackends connected to your crashtest database, they could have been\naffected by anything that was going on in other databases belonging\nto the same postmaster. Were there other backends alive at the time,\nand if so what were they doing, or not doing?\n\nIf all the backends are busy, the SI buffer really shouldn't get\nanywhere near full, although I suppose it could happen under extreme\nconditions. The case where the buffer tends to fill is when one or\nmore backends are sitting idle, waiting for a client command. They'll\nonly read SI messages during transaction start (or, in 6.6, after\nacquiring a lock), so an idle backend blocks the SI buffer from\nemptying. The point of the SIGUSR2 broadcast is to kick-start idle\nbackends so that they will execute a transaction and receive their\nSI messages.\n\nI am guessing that you had an idle backend connected to some other\ndatabase of the same postmaster, so that the SI buffer was getting\nfull of the messages being generated by the VACUUM and table create/\ndelete processes. If you had no idle backends then we ought to look\nfor the reason for the SI buffer filling far enough to cause SIGUSR2;\nbut if you did then it's a normal condition.\n\nBTW, if you have a backend stopped due to a gdb breakpoint or trap,\nit's certainly not consuming SI messages. So when you left it sit after\nobserving the SIGUSR2, you altered the behavior. The VACUUM process was\nstill generating SI messages, and even though the hypothetical idle\nbackend would have eaten its messages thanks to SIGUSR2, the one you had\nblocked with gdb stopped doing so. So eventually there would have been\nan SI overflow condition. 
I think that would take longer than \"3-5\nseconds\", though, unless your machine is way faster than mine.\n\nIf the SI buffer does overflow because someone isn't eating his messages\npromptly, it's not supposed to be fatal. Rather, the SI manager\nreports to all the backends \"Sorry, I've lost track of what's going on,\nso I suggest you discard *all* your cached tuples.\" However, 6.5.* has\nsome bugs in the shared cache reset process. (I think I have fixed these\nfor 6.6, but there are enough poorly-tested revisions in the affected\nmodules that we'd decided not to risk trying to back-patch 6.5.*.)\n\nAnyway, back to the point: this is all very suggestive that maybe what\nyou are seeing is SI overflow and a consequent failure. But I'm not\nconvinced of it, partly because of the timing issue and partly because\nthe vacuum process would have issued a \"NOTICE: SIReadEntryData: cache\nstate reset\" message before entering the buggy code. I didn't see one\nin your trace. However, if you find instances of this message in\nyour postmaster logfile, then it's definitely a possible cause.\n(I would still wonder why SI overflow is occurring in the normal case\nwhere you're not using gdb to block a backend from responding to\nSIGUSR2.)\n\nI hope this gives you enough info to poke at the problem more\nintelligently.\n\nLastly, did you build with --enable-cassert? The assert checks slow things\ndown a little, but are often real helpful when looking for backend bugs.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 09 Sep 1999 10:45:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Vacuum analyze bug CAUGHT "
},
{
"msg_contents": "I wrote:\n> appears to be the same bug I alluded to before: memory allocation isn't\n> cleaned up properly if elog(ERROR) is executed outside a transaction.\n> I know how to fix this, and will do so before 6.5.2, but fixing it will\n> just prevent a coredump after an error has already occurred.\n\nHere is the patch to fix that problem; line numbers are for current,\nbut it should apply to 6.5.1 with small offsets.\n\nAlso: have you applied the vc_abort patch discussed a month ago (see\nmy post in pgsql-patches, 11 Aug)? If not, that could well be the\nsource of your troubles. You might want to just grab the 6.5.2-beta\ntarball and apply this patch, or even better pull the current state\nof the REL6_5_PATCHES branch from the CVS server.\n\n\t\t\tregards, tom lane\n\n*** src/include/utils/portal.h.orig\tThu Jul 15 21:11:26 1999\n--- src/include/utils/portal.h\tThu Sep 9 11:59:00 1999\n***************\n*** 75,80 ****\n--- 75,81 ----\n extern void PortalDestroy(Portal *portalP);\n extern void StartPortalAllocMode(AllocMode mode, Size limit);\n extern void EndPortalAllocMode(void);\n+ extern void PortalResetHeapMemory(Portal portal);\n extern PortalVariableMemory PortalGetVariableMemory(Portal portal);\n extern PortalHeapMemory PortalGetHeapMemory(Portal portal);\n \n*** src/backend/utils/mmgr/portalmem.c.orig\tSat Jul 17 23:20:03 1999\n--- src/backend/utils/mmgr/portalmem.c\tThu Sep 9 11:59:36 1999\n***************\n*** 83,89 ****\n static void CollectNamedPortals(Portal *portalP, int destroy);\n static Portal PortalHeapMemoryGetPortal(PortalHeapMemory context);\n static PortalVariableMemory PortalHeapMemoryGetVariableMemory(PortalHeapMemory context);\n- static void PortalResetHeapMemory(Portal portal);\n static Portal PortalVariableMemoryGetPortal(PortalVariableMemory context);\n \n /* ----------------\n--- 83,88 ----\n***************\n*** 838,844 ****\n *\t\tBadArg if mode is invalid.\n * ----------------\n */\n! 
static void\n PortalResetHeapMemory(Portal portal)\n {\n \tPortalHeapMemory context;\n--- 837,843 ----\n *\t\tBadArg if mode is invalid.\n * ----------------\n */\n! void\n PortalResetHeapMemory(Portal portal)\n {\n \tPortalHeapMemory context;\n*** src/backend/access/transam/xact.c.orig\tSun Sep 5 13:12:34 1999\n--- src/backend/access/transam/xact.c\tThu Sep 9 12:00:23 1999\n***************\n*** 694,712 ****\n AtCommit_Memory()\n {\n \tPortal\t\tportal;\n- \tMemoryContext portalContext;\n \n \t/* ----------------\n! \t *\tRelease memory in the blank portal.\n! \t *\tSince EndPortalAllocMode implicitly works on the current context,\n! \t *\tfirst make real sure that the blank portal is the selected context.\n! \t *\t(This is probably not necessary, but seems like a good idea...)\n \t * ----------------\n \t */\n \tportal = GetPortalByName(NULL);\n! \tportalContext = (MemoryContext) PortalGetHeapMemory(portal);\n! \tMemoryContextSwitchTo(portalContext);\n! \tEndPortalAllocMode();\n \n \t/* ----------------\n \t *\tNow that we're \"out\" of a transaction, have the\n--- 694,706 ----\n AtCommit_Memory()\n {\n \tPortal\t\tportal;\n \n \t/* ----------------\n! \t *\tRelease all heap memory in the blank portal.\n \t * ----------------\n \t */\n \tportal = GetPortalByName(NULL);\n! \tPortalResetHeapMemory(portal);\n \n \t/* ----------------\n \t *\tNow that we're \"out\" of a transaction, have the\n***************\n*** 784,802 ****\n AtAbort_Memory()\n {\n \tPortal\t\tportal;\n- \tMemoryContext portalContext;\n \n \t/* ----------------\n! \t *\tRelease memory in the blank portal.\n! \t *\tSince EndPortalAllocMode implicitly works on the current context,\n! \t *\tfirst make real sure that the blank portal is the selected context.\n! \t *\t(This is ESSENTIAL in case we aborted from someplace where it wasn't.)\n \t * ----------------\n \t */\n \tportal = GetPortalByName(NULL);\n! \tportalContext = (MemoryContext) PortalGetHeapMemory(portal);\n! 
\tMemoryContextSwitchTo(portalContext);\n! \tEndPortalAllocMode();\n \n \t/* ----------------\n \t *\tNow that we're \"out\" of a transaction, have the\n--- 778,790 ----\n AtAbort_Memory()\n {\n \tPortal\t\tportal;\n \n \t/* ----------------\n! \t *\tRelease all heap memory in the blank portal.\n \t * ----------------\n \t */\n \tportal = GetPortalByName(NULL);\n! \tPortalResetHeapMemory(portal);\n \n \t/* ----------------\n \t *\tNow that we're \"out\" of a transaction, have the\n",
"msg_date": "Thu, 09 Sep 1999 12:36:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Vacuum analyze bug CAUGHT "
},
{
"msg_contents": "> The first thing you can do is look to see what the error message is ---\n> it should be in the postmaster's stderr logfile, even though 6.5.* libpq\n> omits to display it because of the crash.\n\nWoops, I only redirected stdout. I will redirect stderr too. *2 minutes and\none crash later* Hmmm, nothing appears in the logs.\n\n/usr/bin/postmaster -S -N 128 -B 256 -D/var/lib/pgsql/data > /tmp/postmasterout 2> /tmp/postmastererr\n\nAnd nothing was in either log.\n\n> Another useful thing would be\n> to set a breakpoint at elog() so that you can examine the context of the\n> error report.  (Actually, since vacuum generates lots of elog(NOTICE)\n> and elog(DEBUG), the breakpoint had better be further down in elog,\n> perhaps where it's about to longjmp back to PostgresMain.)\n> \n> BTW, VACUUM runs a separate transaction for each table it works on,\n> so although most of its work is done inside a transaction, there are\n> short intervals between tables where it's not in one.  The error must\n> be getting reported during one of these intervals.  That narrows things\n> down a lot.\n> \n\n<snip SI information>\n\nNow, let me think for a moment:\n\nVacuum works on each table inside a transaction\n\nThe backend only reads the SI buffer when it starts a new transaction\n\nWhat then happens if vacuum is vacuuming a BIG table (such as 300,000\nlines) whilst another process is doing create and drop tables a lot?\n\nWouldn't the buffer fill up, as it was never starting a transaction\nwhen vacuuming that big table?\n\nHowever, those were the only two backends active; it is a test\ndatabase on my home machine.\n\n> an SI overflow condition.  I think that would take longer than \"3-5\nseconds\", though, unless your machine is way faster than mine.\n\nI've got an AMD K62-400 with 128 MB memory, not slow but not roastingly\nfast either.\n\n> I hope this gives you enough info to poke at the problem more\n> intelligently.\n> \n> Lastly, did you build with --enable-cassert? 
The assert checks slow things\n> down a little, but are often real helpful when looking for backend bugs.\n\nNope, I will recompile the new beta with this option, and post on the\nprogress. Thanks\n\n\t\t\t\t\tM Simms\n",
"msg_date": "Thu, 9 Sep 1999 20:42:36 +0100 (BST)",
"msg_from": "Michael Simms <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Vacuum analyze bug CAUGHT"
},
{
"msg_contents": "Hi\n\nVacuum couldn't preserve consistency without locking.\nI'm anxious about locking for system tables.\n\n> \n> Hi\n> \n> Okee, I have caught the vacuum analyse crash that was giving me a load of\n> grief\n>\n\n[snip]\n \n> \n> Then, in psql, I run and get the following result:\n> \n> -----------\n> \n> crashtest=> vacuum analyze;\n> NOTICE: Rel pg_type: TID 4/3: InsertTransactionInProgress 129915 \n> - can't shrink relation\n> NOTICE: Rel pg_attribute: TID 23/5: InsertTransactionInProgress \n> 129915 - can't shrink relation\n> NOTICE: Rel pg_attribute: TID 23/6: InsertTransactionInProgress \n> 129915 - can't shrink relation\n> NOTICE: Rel pg_attribute: TID 23/7: InsertTransactionInProgress \n> 129915 - can't shrink relation\n> NOTICE: Rel pg_attribute: TID 23/8: InsertTransactionInProgress \n> 129915 - can't shrink relation\n> NOTICE: Rel pg_attribute: TID 23/9: InsertTransactionInProgress \n> 129915 - can't shrink relation\n> NOTICE: Rel pg_attribute: TID 23/10: InsertTransactionInProgress \n> 129915 - can't shrink relation\n> NOTICE: Rel pg_attribute: TID 23/11: InsertTransactionInProgress \n> 129915 - can't shrink relation\n> NOTICE: Rel pg_attribute: TID 23/12: InsertTransactionInProgress \n> 129915 - can't shrink relation\n> NOTICE: Rel pg_class: TID 3/22: InsertTransactionInProgress \n> 129915 - can't shrink relation\n\nCREATE TABLE doesn't lock system tables till end of transaction.\nIt's a cause of these NOTICE messages.\n\nShould we lock system tables till end of transaction ?\n\nMoreover CREATE TABLE doesn't acquire any lock for pg_attribute\n\t\t\t ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nwhile tuples are inserted into pg_attribute.\nConcurrent vacuum may corrupt pg_attribute. \n\nRegards.\n\nHiroshi Inoue\[email protected]\n \n\n",
"msg_date": "Fri, 10 Sep 1999 15:42:53 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Vacuum analyze bug CAUGHT"
},
{
"msg_contents": "Hiroshi Inoue wrote:\n> \n> > crashtest=> vacuum analyze;\n> > NOTICE: Rel pg_type: TID 4/3: InsertTransactionInProgress 129915\n> > - can't shrink relation\n...\n> \n> CREATE TABLE doesn't lock system tables till end of transaction.\n> It's a cause of these NOTICE messages.\n> \n> Should we lock system tables till end of transaction ?\n\nNo, if we allow DDL statements inside BEGIN/END\n(in long transaction).\n\n> Moreover CREATE TABLE doesn't acquire any lock for pg_attribute\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> while tuples are inserted into pg_attribute.\n> Concurrent vacuum may corrupt pg_attribute.\n\nShould be fixed!\n\nVadim\n",
"msg_date": "Fri, 10 Sep 1999 14:49:58 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Vacuum analyze bug CAUGHT"
},
{
"msg_contents": ">\n> Hiroshi Inoue wrote:\n> >\n> > > crashtest=> vacuum analyze;\n> > > NOTICE: Rel pg_type: TID 4/3: InsertTransactionInProgress 129915\n> > > - can't shrink relation\n> ...\n> >\n> > CREATE TABLE doesn't lock system tables till end of transaction.\n> > It's a cause of these NOTICE messages.\n> >\n> > Should we lock system tables till end of transaction ?\n>\n> No, if we allow DDL statements inside BEGIN/END\n> (in long transaction).\n>\n> > Moreover CREATE TABLE doesn't acquire any lock for pg_attribute\n> > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> > while tuples are inserted into pg_attribute.\n> > Concurrent vacuum may corrupt pg_attribute.\n>\n> Should be fixed!\n>\n\nSeems CREATE TABLE don't acquire any lock for pg_relcheck and\npg_attrdef as well as pg_attribute. There may be other pg_.......\n\nHere is a patch.\nThis patch also removes UnlockRelation() in heap_destroy_with_catalog().\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n*** catalog/heap.c.orig\tTue Sep 7 08:52:04 1999\n--- catalog/heap.c\tFri Sep 10 16:43:18 1999\n***************\n*** 547,552 ****\n--- 547,553 ----\n \t */\n \tAssert(rel);\n \tAssert(rel->rd_rel);\n+ \tLockRelation(rel, AccessExclusiveLock);\n \thasindex = RelationGetForm(rel)->relhasindex;\n \tif (hasindex)\n \t\tCatalogOpenIndices(Num_pg_attr_indices, Name_pg_attr_indices, idescs);\n***************\n*** 607,612 ****\n--- 608,614 ----\n \t\tdpp++;\n \t}\n\n+ \tUnlockRelation(rel, AccessExclusiveLock);\n \theap_close(rel);\n\n \t/*\n***************\n*** 1330,1336 ****\n\n \trel->rd_nonameunlinked = TRUE;\n\n- \tUnlockRelation(rel, AccessExclusiveLock);\n\n \theap_close(rel);\n\n--- 1332,1337 ----\n***************\n*** 1543,1553 ****\n--- 1544,1556 ----\n \tvalues[Anum_pg_attrdef_adbin - 1] =\nPointerGetDatum(textin(attrdef->adbin));\n \tvalues[Anum_pg_attrdef_adsrc - 1] =\nPointerGetDatum(textin(attrdef->adsrc));\n \tadrel = heap_openr(AttrDefaultRelationName);\n+ \tLockRelation(adrel, AccessExclusiveLock);\n \ttuple = 
heap_formtuple(adrel->rd_att, values, nulls);\n \tCatalogOpenIndices(Num_pg_attrdef_indices, Name_pg_attrdef_indices,\nidescs);\n \theap_insert(adrel, tuple);\n \tCatalogIndexInsert(idescs, Num_pg_attrdef_indices, adrel, tuple);\n \tCatalogCloseIndices(Num_pg_attrdef_indices, idescs);\n+ \tUnlockRelation(adrel, AccessExclusiveLock);\n \theap_close(adrel);\n\n \tpfree(DatumGetPointer(values[Anum_pg_attrdef_adbin - 1]));\n***************\n*** 1606,1616 ****\n--- 1609,1621 ----\n \tvalues[Anum_pg_relcheck_rcbin - 1] =\nPointerGetDatum(textin(check->ccbin));\n \tvalues[Anum_pg_relcheck_rcsrc - 1] =\nPointerGetDatum(textin(check->ccsrc));\n \trcrel = heap_openr(RelCheckRelationName);\n+ \tLockRelation(rcrel, AccessExclusiveLock);\n \ttuple = heap_formtuple(rcrel->rd_att, values, nulls);\n \tCatalogOpenIndices(Num_pg_relcheck_indices, Name_pg_relcheck_indices,\nidescs);\n \theap_insert(rcrel, tuple);\n \tCatalogIndexInsert(idescs, Num_pg_relcheck_indices, rcrel, tuple);\n \tCatalogCloseIndices(Num_pg_relcheck_indices, idescs);\n+ \tUnlockRelation(rcrel, AccessExclusiveLock);\n \theap_close(rcrel);\n\n \tpfree(DatumGetPointer(values[Anum_pg_relcheck_rcname - 1]));\n\n\n\n",
"msg_date": "Fri, 10 Sep 1999 18:19:45 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Vacuum analyze bug CAUGHT"
},
{
"msg_contents": "Hiroshi Inoue wrote:\n> \n...\n\n> Here is a patch.\n> This patch also removes UnlockRelation() in heap_destroy_with_catalog().\n\nMarc, I would grant to Hiroshi full CVS access...\n\nVadim\n",
"msg_date": "Fri, 10 Sep 1999 17:20:21 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Vacuum analyze bug CAUGHT"
},
{
"msg_contents": "Michael Simms <[email protected]> writes:\n> Now, let me think for a moment:\n> Vacuum works on each table inside a transaction\n> The backend only reads the SI buffer when it starts a new transaction\n> What then happens if vacuum is vacuuming a BIG table (such as 300,000\n> lines) whilst another process is doing create and drop tables a lot.\n> Wouldnt the buffer fill up, as it was never starting a transaction\n> when vacuuming that big table?\n\nYup, could happen. (I think it would take several hundred create/\ndrop cycles, but that's certainly possible during a long vacuum.)\nThat's why there's code to deal with the possibility of SI buffer\noverrun.\n\nBut as I said, I'm not convinced you are dealing with an SI overrun\n--- and the lack of messages about it seems to point away from that\ntheory. I brought it up because it was a possible area for trouble.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 10 Sep 1999 10:28:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Vacuum analyze bug CAUGHT "
},
{
"msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n>>>> Moreover CREATE TABLE doesn't acquire any lock for pg_attribute\n>>>> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n>>>> while tuples are inserted into pg_attribute.\n>>>> Concurrent vacuum may corrupt pg_attribute.\n>> \n>> Should be fixed!\n>> \n\n> Seems CREATE TABLE don't acquire any lock for pg_relcheck and\n> pg_attrdef as well as pg_attribute. There may be other pg_.......\n\n> Here is a patch.\n\nHmm, do we really need to grab AccessExclusiveLock on the pg_ tables\nwhile creating or deleting tables? That will mean no concurrency at\nall for these operations. Seems to me we want AccessExclusiveLock on\nthe table being created or deleted, but something less strong on the\nsystem tables. RowExclusiveLock might be appropriate --- Vadim, what\ndo you think?\n\nAlso, rather than running around and adding locks to every single\nplace that calls heap_open or heap_close, I wonder whether we shouldn't\nhave heap_open/heap_close themselves automatically grab or release\nat least a minimal lock (AccessShareLock, I suppose).\n\nOr maybe better: change heap_open/heap_openr/heap_close to take an\nadditional parameter specifying the kind of lock to grab. 
That'd still\nmean having to visit all the call sites, but it would force people to\nthink about the issue in future rather than forgetting to lock a table\nthey're accessing.\n\nComments?\n\nBTW, while I still haven't been able to reproduce Michael Simms' crash\nreliably, I did see one coredump caused by an assert failure in\nheap_delete():\n\n#6 0x16181c in ExceptionalCondition (\n conditionName=0x283d4 \"!(( lp)->lp_flags & 0x01)\", exceptionP=0x40009a80,\n detail=0x0, fileName=0x7ae4 \"\\003\", lineNumber=1121) at assert.c:72\n#7 0x7cc18 in heap_delete (relation=0x400891c0, tid=0x402d962c, ctid=0x0)\n at heapam.c:1121\n#8 0x9c208 in DeleteAttributeTuples (rel=0x40535260) at heap.c:1118\n#9 0x9c4dc in heap_destroy_with_catalog (\n relname=0x4e4ed7 <Address 0x4e4ed7 out of bounds>) at heap.c:1310\n#10 0xa5168 in RemoveRelation (\n name=0x80db9380 <Address 0x80db9380 out of bounds>) at creatinh.c:157\n#11 0x129760 in ProcessUtility (parsetree=0x402d8d28, dest=Remote)\n at utility.c:215\n\nThis was in the process doing table creates/drops, and I surmise that\nthe problem was a tuple move executed concurrently by the process doing\nVACUUM. In other words, it looks like this problem of missing lock\noperations might be the cause, or one cause, of Michael's symptoms.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 10 Sep 1999 19:05:40 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Vacuum analyze bug CAUGHT "
},
{
"msg_contents": "I wrote:\n> This was in the process doing table creates/drops, and I surmise that\n> the problem was a tuple move executed concurrently by the process doing\n> VACUUM. In other words, it looks like this problem of missing lock\n> operations might be the cause, or one cause, of Michael's symptoms.\n\nOn looking closer, that's not so, because the particular path taken here\n*does* have a lock --- DeleteAttributeTuples() acquires\nAccessExclusiveLock on pg_attribute, which is the relation heap_delete\nis failing to find a tuple in. The tuple it's trying to delete was\nlocated by means of SearchSysCacheTupleCopy().\n\nWhat I now think is that we have a variant of the SI-too-late problem:\nvacuum has moved the underlying tuple, but the backend trying to do\nthe deletion hasn't heard about it yet, because it hasn't executed\na transaction start or CommandCounterIncrement since VACUUM processed\nthe table. This is bolstered by the postmaster log, which shows the\nsecond backend dying just as VACUUM commits pg_attribute:\n\nDEBUG: Rel pg_type: Pages: 6 --> 2; Tuple(s) moved: 1. Elapsed 0/0 sec.\nDEBUG: Index pg_type_typname_index: Pages 5; Tuples 116: Deleted 1. Elapsed 0/0 sec.\nDEBUG: Index pg_type_oid_index: Pages 2; Tuples 116: Deleted 1. Elapsed 0/0 sec.\nDEBUG: --Relation pg_attribute--\nDEBUG: Pages 33: Changed 1, Reapped 28, Empty 0, New 0; Tup 438: Vac 1976, Keep/VTL 0/0, Crash 0, UnUsed 0, MinLen 97, MaxLen 97; Re-using: Free/Avail. Space 214444/207148; EndEmpty/Avail. Pages 0/27. Elapsed 0/0 sec.\nDEBUG: Index pg_attribute_attrelid_index: Pages 15; Tuples 438: Deleted 1976. Elapsed 0/1 sec.\nDEBUG: Index pg_attribute_relid_attnum_index: Pages 15; Tuples 438: Deleted 1976. Elapsed 0/0 sec.\nDEBUG: Index pg_attribute_relid_attnam_index: Pages 48; Tuples 438: Deleted 1976. Elapsed 0/1 sec.\nDEBUG: Rel pg_attribute: Pages: 33 --> 6; Tuple(s) moved: 8. Elapsed 0/0 sec.\nDEBUG: Index pg_attribute_attrelid_index: Pages 15; Tuples 438: Deleted 8. 
Elapsed 0/0 sec.\nDEBUG: Index pg_attribute_relid_attnum_index: Pages 15; Tuples 438: Deleted 8. Elapsed 0/0 sec.\nDEBUG: Index pg_attribute_relid_attnam_index: Pages 48; Tuples 438: Deleted 8. Elapsed 0/0 sec.\nTRAP: Failed Assertion(\"!(( lp)->lp_flags & 0x01):\", File: \"heapam.c\", Line: 1121)\n\n!(( lp)->lp_flags & 0x01) (0) [Not a typewriter]\nDEBUG: --Relation pg_proc--\nDEBUG: Pages 21: Changed 0, Reapped 0, Empty 0, New 0; Tup 1021: Vac 0, Keep/VTL 0/0, Crash 0, UnUsed 0, MinLen 145, MaxLen 197; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0. Elapsed 0/0 sec.\n(vacuum manages to get through a couple more tables before hearing the\n\"thou shalt exit\" signal from the postmaster...)\n\n\nIt's looking to me like there may be no way to fix this in 6.5.*\nshort of adopting the recent 6.6 relcache/SI changes. Specifically,\nthe one we need is reading SI messages after acquiring a lock, but\nI doubt we can pull out just that one without the rest.\n\nI'm not real eager to do this given the little amount of testing\nthose changes have had, but maybe we have no choice...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 10 Sep 1999 19:48:40 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Vacuum analyze bug CAUGHT "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Also, rather than running around and adding locks to every single\n> place that calls heap_open or heap_close, I wonder whether we shouldn't\n> have heap_open/heap_close themselves automatically grab or release\n> at least a minimal lock (AccessShareLock, I suppose).\n\nThis could result in deadlocks...\n\n> Or maybe better: change heap_open/heap_openr/heap_close to take an\n> additional parameter specifying the kind of lock to grab. That'd still\n> mean having to visit all the call sites, but it would force people to\n> think about the issue in future rather than forgetting to lock a table\n> they're accessing.\n\nThis way is better.\n\nVadim\n",
"msg_date": "Tue, 14 Sep 1999 02:30:33 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Vacuum analyze bug CAUGHT"
},
{
"msg_contents": "> Tom Lane wrote:\n> > \n> > Also, rather than running around and adding locks to every single\n> > place that calls heap_open or heap_close, I wonder whether we shouldn't\n> > have heap_open/heap_close themselves automatically grab or release\n> > at least a minimal lock (AccessShareLock, I suppose).\n> \n> This could result in deadlocks...\n> \n> > Or maybe better: change heap_open/heap_openr/heap_close to take an\n> > additional parameter specifying the kind of lock to grab. That'd still\n> > mean having to visit all the call sites, but it would force people to\n> > think about the issue in future rather than forgetting to lock a table\n> > they're accessing.\n> \n> This way is better.\n\nJust a reminder. heap_getnext() already locks the _buffer_, and\nheap_fetch() requires you pass a variable to hold the buffer number, so\nyou can release the buffer lock when you are done.\n\nThis was not the case in < 6.4 releases, and there is no reason not to\nadd additional parameters to function calls like I did for heap_fetch() if\nit makes sense.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 13 Sep 1999 14:52:54 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Vacuum analyze bug CAUGHT"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Tom Lane wrote:\n>>>> Or maybe better: change heap_open/heap_openr/heap_close to take an\n>>>> additional parameter specifying the kind of lock to grab.\n\n>> This way is better.\n\n> ... there is no reason not to add additional parameters to function\n> calls like I did for heap_fetch() if it makes sense.\n\nOK. Another thing that's been on my to-do list is that a lot of places\nfail to check for a failure return from heap_open(r), which means you\nget a null pointer dereference coredump instead of a useful message if\nanything goes wrong. (But, of course, when opening a system table\nnothing can ever go wrong ... go wrogn ...)\n\nHere's what I propose:\n\nAdd another parameter to heap_open and heap_openr, which can be any of\nthe lock types currently mentioned in storage/lmgr.h, or \"NoLock\".\nWith \"NoLock\" you get the current behavior: no lock is acquired, and\nNULL is returned if the open fails; it's up to the caller to check that\nand do something appropriate. Otherwise, the routines will check for\nopen failure and raise a standard elog(ERROR) if they do not succeed;\nfurthermore, they will acquire the specified type of lock on the\nrelation before returning. (And, thanks to the already-in-place call\nin LockRelation, any pending SI-inval messages will be read.)\n\nheap_close will also take an additional parameter, which is a lock type\nto release the specified lock, or NoLock to release no lock. (Note\nthat it is often correct not to release the lock acquired at heap_open\nduring heap_close; in this case, the lock is held till end of\ntransaction. So, we don't want heap_close to automagically release\nwhatever lock was acquired by the corresponding heap_open, even if it\nwere easy to do so which it isn't...)\n\nIf I don't hear any objections, I'll get on with implementing that.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 Sep 1999 16:51:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Vacuum analyze bug CAUGHT "
},
{
"msg_contents": "Sounds like a good plan. I found it quite easy to make changes link\nthis because mkid, and your tool glimpse, lets you pull up all functions\nthat contain a certain function or call to that function, pull them into\nan editor, and away you go.\n\n\n> Here's what I propose:\n> \n> Add another parameter to heap_open and heap_openr, which can be any of\n> the lock types currently mentioned in storage/lmgr.h, or \"NoLock\".\n> With \"NoLock\" you get the current behavior: no lock is acquired, and\n> NULL is returned if the open fails; it's up to the caller to check that\n> and do something appropriate. Otherwise, the routines will check for\n> open failure and raise a standard elog(ERROR) if they do not succeed;\n> furthermore, they will acquire the specified type of lock on the\n> relation before returning. (And, thanks to the already-in-place call\n> in LockRelation, any pending SI-inval messages will be read.)\n> \n> heap_close will also take an additional parameter, which is a lock type\n> to release the specified lock, or NoLock to release no lock. (Note\n> that it is often correct not to release the lock acquired at heap_open\n> during heap_close; in this case, the lock is held till end of\n> transaction. So, we don't want heap_close to automagically release\n> whatever lock was acquired by the corresponding heap_open, even if it\n> were easy to do so which it isn't...)\n> \n> If I don't hear any objections, I'll get on with implementing that.\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 13 Sep 1999 18:33:31 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Vacuum analyze bug CAUGHT"
},
{
"msg_contents": "\nIs this patch still valid?\n\n\n[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> >\n> > Hiroshi Inoue wrote:\n> > >\n> > > > crashtest=> vacuum analyze;\n> > > > NOTICE: Rel pg_type: TID 4/3: InsertTransactionInProgress 129915\n> > > > - can't shrink relation\n> > ...\n> > >\n> > > CREATE TABLE doesn't lock system tables till end of transaction.\n> > > It's a cause of these NOTICE messages.\n> > >\n> > > Should we lock system tables till end of transaction ?\n> >\n> > No, if we allow DDL statements inside BEGIN/END\n> > (in long transaction).\n> >\n> > > Moreover CREATE TABLE doesn't acquire any lock for pg_attribute\n> > > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> > > while tuples are inserted into pg_attribute.\n> > > Concurrent vacuum may corrupt pg_attribute.\n> >\n> > Should be fixed!\n> >\n> \n> Seems CREATE TABLE don't acquire any lock for pg_relcheck and\n> pg_attrdef as well as pg_attribute. There may be other pg_.......\n> \n> Here is a patch.\n> This patch also removes UnlockRelation() in heap_destroy_with_catalog().\n> \n> Regards.\n> \n> Hiroshi Inoue\n> [email protected]\n> \n> *** catalog/heap.c.orig\tTue Sep 7 08:52:04 1999\n> --- catalog/heap.c\tFri Sep 10 16:43:18 1999\n> ***************\n> *** 547,552 ****\n> --- 547,553 ----\n> \t */\n> \tAssert(rel);\n> \tAssert(rel->rd_rel);\n> + \tLockRelation(rel, AccessExclusiveLock);\n> \thasindex = RelationGetForm(rel)->relhasindex;\n> \tif (hasindex)\n> \t\tCatalogOpenIndices(Num_pg_attr_indices, Name_pg_attr_indices, idescs);\n> ***************\n> *** 607,612 ****\n> --- 608,614 ----\n> \t\tdpp++;\n> \t}\n> \n> + \tUnlockRelation(rel, AccessExclusiveLock);\n> \theap_close(rel);\n> \n> \t/*\n> ***************\n> *** 1330,1336 ****\n> \n> \trel->rd_nonameunlinked = TRUE;\n> \n> - \tUnlockRelation(rel, AccessExclusiveLock);\n> \n> \theap_close(rel);\n> \n> --- 1332,1337 ----\n> ***************\n> *** 1543,1553 ****\n> --- 1544,1556 ----\n> \tvalues[Anum_pg_attrdef_adbin - 1] =\n> 
PointerGetDatum(textin(attrdef->adbin));\n> \tvalues[Anum_pg_attrdef_adsrc - 1] =\n> PointerGetDatum(textin(attrdef->adsrc));\n> \tadrel = heap_openr(AttrDefaultRelationName);\n> + \tLockRelation(adrel, AccessExclusiveLock);\n> \ttuple = heap_formtuple(adrel->rd_att, values, nulls);\n> \tCatalogOpenIndices(Num_pg_attrdef_indices, Name_pg_attrdef_indices,\n> idescs);\n> \theap_insert(adrel, tuple);\n> \tCatalogIndexInsert(idescs, Num_pg_attrdef_indices, adrel, tuple);\n> \tCatalogCloseIndices(Num_pg_attrdef_indices, idescs);\n> + \tUnlockRelation(adrel, AccessExclusiveLock);\n> \theap_close(adrel);\n> \n> \tpfree(DatumGetPointer(values[Anum_pg_attrdef_adbin - 1]));\n> ***************\n> *** 1606,1616 ****\n> --- 1609,1621 ----\n> \tvalues[Anum_pg_relcheck_rcbin - 1] =\n> PointerGetDatum(textin(check->ccbin));\n> \tvalues[Anum_pg_relcheck_rcsrc - 1] =\n> PointerGetDatum(textin(check->ccsrc));\n> \trcrel = heap_openr(RelCheckRelationName);\n> + \tLockRelation(rcrel, AccessExclusiveLock);\n> \ttuple = heap_formtuple(rcrel->rd_att, values, nulls);\n> \tCatalogOpenIndices(Num_pg_relcheck_indices, Name_pg_relcheck_indices,\n> idescs);\n> \theap_insert(rcrel, tuple);\n> \tCatalogIndexInsert(idescs, Num_pg_relcheck_indices, rcrel, tuple);\n> \tCatalogCloseIndices(Num_pg_relcheck_indices, idescs);\n> + \tUnlockRelation(rcrel, AccessExclusiveLock);\n> \theap_close(rcrel);\n> \n> \tpfree(DatumGetPointer(values[Anum_pg_relcheck_rcname - 1]));\n> \n> \n> \n> \n> ************\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 28 Sep 1999 00:10:57 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Vacuum analyze bug CAUGHT"
},
{
"msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n> Sent: Tuesday, September 28, 1999 1:11 PM\n> To: Hiroshi Inoue\n> Cc: Vadim Mikheev; Michael Simms; [email protected]\n> Subject: Re: [HACKERS] Vacuum analyze bug CAUGHT\n> \n> \n> \n> Is this patch still valid?\n>\n\nNo,it has been already changed by Tom together with\nmany other changes. \n\nRegards. \n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Tue, 28 Sep 1999 13:26:21 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Vacuum analyze bug CAUGHT"
}
] |
[
{
"msg_contents": " Alright, I managed to find out why PostgreSQL dies when run on my Sparc.\nSeems it's hitting a function called tas() and never exiting (that's what\ngdb postmaster <id> is showing from the backtrace anyway). I've managed to\nfind about three or so different places where tas() is defined (using\nassembler). Does anyone have a patch that I can apply to fix this behavior?\n\n Damo\n\n\n",
"msg_date": "Thu, 09 Sep 1999 02:01:00 GMT",
"msg_from": "\"Damond Walker\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Sparc stuff again."
}
] |
[
{
"msg_contents": "Hi !\n\nI would like to know how I can read the log file.\n\nIs it the file where I can detail all the transactions ?\n\nRegards,\n\nStephane FILLON",
"msg_date": "Thu, 9 Sep 1999 19:14:40 +1100",
"msg_from": "\"=?iso-8859-1?Q?St=E9phane_FILLON?=\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to use pg_log ?"
},
{
"msg_contents": "At 07:14 PM 9/9/99 +1100, Stéphane FILLON wrote:\n\n\n>I would like to know how I can read the log file.\n>Is it the file where I can detail all the transactions ?\n\nHmmm... I believe the pg_log is just for internal use of Postgres to track \nthe committed and uncommitted transactions. If you want to log your \ntransactions, the only way I can think of off hand is to put the debug \nfunction on the backend. For instance, I might use something like the \nfollowing: (Using Unix in /etc/rc.d/rc.local)\n\nsu postgres -c \"nohup /usr/local/pgsql/bin/postmaster -B 2048 -i -d 2 -D \n/datadir >> /path/to/logfile 2>&1 &\"\n\nThe -d option is the one that turns on debugging so you can see the actual \ntransactions. However, it does get quite large on a frequently accessed \ndatabase. I would really only recommend it for debugging client \ncode. Though there most likely is a better way, as I only started using \nPostgres and learning SQL, 6 hours ago.\n\nJust for comments though, excellent Database, was able to make Postgres \nfunction extensions within 30 minutes of tinkering with it. Kudos to all \nthe programmers. The code was extremely well documented and laid out.\n\n\n\nSincerely Yours,\n\n\tJacques Dimanche\n\tVP of Research and Development\n\tTridel Technologies, Inc.",
"msg_date": "Fri, 10 Sep 1999 14:25:47 +0800",
"msg_from": "\"Jacques B. Dimanche\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [ADMIN] How to use pg_log ?"
}
] |
[
{
"msg_contents": "> Cope with versions of vsnprintf() written by people who\n> don't read man pages...\n\nRETURN VALUE\n If the output was truncated, the return value is -1,\n otherwise it is the number of characters stored, not\n including the terminating null.\n\nIs this consistant with the behavior you see on Linux? It's a GNU\nlibrary thing...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 09 Sep 1999 12:21:50 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [COMMITTERS] pgsql/src/backend/lib (stringinfo.c)"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> Cope with versions of vsnprintf() written by people who\n>> don't read man pages...\n\n> RETURN VALUE\n> If the output was truncated, the return value is -1,\n> otherwise it is the number of characters stored, not\n> including the terminating null.\n\n> Is this consistant with the behavior you see on Linux? It's a GNU\n> library thing...\n\nThat is the behavior I saw on my Linux box, but the manpage installed\non the same box sez that the return value is equal to the passed buffer\nsize if there's an overrun. Maybe the manpage is out of date.\n\nAnyway, the fixed code copes with both conventions.\n\nYou'll need to re-initdb to get rid of the broken rules in your\ndatabase, but then things should be OK...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 09 Sep 1999 09:44:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql/src/backend/lib (stringinfo.c) "
},
{
"msg_contents": "> Anyway, the fixed code copes with both conventions.\n> You'll need to re-initdb to get rid of the broken rules in your\n> database, but then things should be OK...\n\nThings look great. Thanks!\n\nbtw, any problem with trimming the numeric test back to taking a few\nseconds, perhaps up to 30 seconds? It takes *way* too long at the\nmoment...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 09 Sep 1999 14:44:25 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [COMMITTERS] pgsql/src/backend/lib (stringinfo.c)"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> btw, any problem with trimming the numeric test back to taking a few\n> seconds, perhaps up to 30 seconds? It takes *way* too long at the\n> moment...\n\nI've been griping about the slowness of the numeric test since it was\nput in. Even a 2x reduction in the time taken would be really helpful.\nJan?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 10 Sep 1999 10:21:04 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [COMMITTERS] pgsql/src/backend/lib (stringinfo.c) "
}
] |
[
{
"msg_contents": "Hi All, \n\nI just tried to get pgindent to work and I ran into a few snags\n\n1)\n\nI tried the src/tools/pgindent/indent.bsd.patch on two recent\nversions of bsd indent. One from the current version of openbsd, and\none from freebsd RELENG_3. In neither case the patch applied cleanly.\n\n\nThe code in indent around the area of the second patch segment \n*** 186,192 ****\n *e_token++ = *buf_ptr++;\n }\n }\n! if (*buf_ptr == 'L' || *buf_ptr == 'l')\n *e_token++ = *buf_ptr++;\n }\n else\n\nnow looks like this\n\n while (1) {\n if (!(seensfx & 1) &&\n (*buf_ptr == 'U' || *buf_ptr == 'u')) {\n CHECK_SIZE_TOKEN;\n *e_token++ = *buf_ptr++;\n seensfx |= 1;\n continue;\n }\n if (!(seensfx & 2) &&\n (*buf_ptr == 'L' || *buf_ptr == 'l')) {\n CHECK_SIZE_TOKEN;\n if (buf_ptr[1] == buf_ptr[0])\n *e_token++ = *buf_ptr++;\n *e_token++ = *buf_ptr++;\n seensfx |= 2;\n continue;\n }\n break;\n\nWithout understanding what the code is meant to do, I am guessing that\nthe second patch is no longer necessary.\n\nAlso, in the openbsd source the specials buffer is automatically\nresized, so it seems that neither part of the patch is necessary for\nrecent openbsd sources.\n\n2) \n\nI compiled and tried both bsd distributions. And ran into the\nfollowing problem with pgindent.\n\nThe test in pgindent for the gnu vesion doesn't work.\n\nindent -version -npro </dev/null >/dev/null 2>&1\nif [ \"$?\" -eq 0 ]\nthen echo \"You appear to have GNU indent rather than BSD indent.\" >&2\n echo \"See the pgindent/README file for a description of its\nproblems.\" >\n&2\n EXTRA_OPTS=\"-ncdb -bli0 -npcs -cli4\"\nelse echo \"Hope you installed /src/tools/pgindent/indent.bsd.patch.\"\n>&2\n EXTRA_OPTS=\"-bbb -cli1\"\nfi\n\n\nI think that you need to use\nindent --version -npro </dev/null >/dev/null 2>&1\n\nOn my system (Redhat Linux 5.?) 
I get\n\n aims2-bernie:$ indent --version\n GNU indent 1.9.1\n aims2-bernie:$ echo $?\n 0\n aims2-bernie:$ bsdindent --version\n bsdindent: Command line: unknown parameter \"--version\"\n aims2-bernie:$ echo $?\n 1\n\n( That is with 'bsdindent' as the patched freebsd indent )\n\n\n3) \n\nFinally, the result of running \n\n find . -name '*.[ch]' -type f -print | egrep -v '\\+\\+|/odbc/|s_lock.h'\n| xargs -n100 pgindent\n\non a fresh copy of the 6.5 sources with either the openbsd or patched\nbsd indent is the following\n\nHope you installed /src/tools/pgindent/indent.bsd.patch.\nHope you installed /src/tools/pgindent/indent.bsd.patch.\n./backend/parser/gram.c\nError@5251: #if stack overflow\nError@5252: #if stack overflow\nError@5263: Unmatched #endif\nError@5264: Unmatched #endif\nHope you installed /src/tools/pgindent/indent.bsd.patch.\nHope you installed /src/tools/pgindent/indent.bsd.patch.\nHope you installed /src/tools/pgindent/indent.bsd.patch.\nHope you installed /src/tools/pgindent/indent.bsd.patch.\nHope you installed /src/tools/pgindent/indent.bsd.patch.\nHope you installed /src/tools/pgindent/indent.bsd.patch.\n./interfaces/ecpg/test/header_test.h\nError@19: Stuff missing from end of file.\n\nAre the errors normal or do I still not have a correctly working\nversion?\n\nBernie Frankpitt\n",
"msg_date": "Thu, 09 Sep 1999 19:21:07 +0000",
"msg_from": "Bernard Frankpitt <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgindent"
},
{
"msg_contents": "> Hi All, \n> \n> I just tried to get pgindent to work and I ran into a few snags\n> }\n> }\n> ! if (*buf_ptr == 'L' || *buf_ptr == 'l')\n> *e_token++ = *buf_ptr++;\n> }\n\n> \n> while (1) {\n> if (!(seensfx & 1) &&\n> (*buf_ptr == 'U' || *buf_ptr == 'u')) {\n> CHECK_SIZE_TOKEN;\n> *e_token++ = *buf_ptr++;\n> seensfx |= 1;\n\n> Also, in the openbsd source the specials buffer is automatically\n> resized, so it seems that neither part of the patch is necessary for\n> recent openbsd sources.\n\nGreat. Your version looks nice. BSDI also has fixed the buffer size\nproblem, but it was easier to just send people a patch to apply, rather\nthan illegally sending out their changes.\n\n> \n> I think that you need to use\n> indent --version -npro </dev/null >/dev/null 2>&1\n> \n> On my system (Redhat Linux 5.?) I get\n> \n> aims2-bernie:$ indent --version\n> GNU indent 1.9.1\n> aims2-bernie:$ echo $?\n> 0\n> aims2-bernie:$ bsdindent --version\n> bsdindent: Command line: unknown parameter \"--version\"\n> aims2-bernie:$ echo $?\n> 1\n> \n> ( That is with 'bsdindent' as the patched freebsd indent )\n\n\nGood. OK, new test is:\n\n\tindent --version </dev/null >/dev/null 2>&1\n\tif [ \"$?\" -eq 0 ]\n\tthen echo \"You do not appear to have 'indent' installed on your\n\tsystem.\" >&2\n\t exit 1\n\tfi\n\n> \n> \n> 3) \n> \n> Finally, the result of running \n> \n> find . -name '*.[ch]' -type f -print | egrep -v '\\+\\+|/odbc/|s_lock.h'\n> | xargs -n100 pgindent\n> \n> on a fresh copy of the 6.5 sources with either the openbsd or patched\n> bsd indent is the following\n> \n> Hope you installed /src/tools/pgindent/indent.bsd.patch.\n> Hope you installed /src/tools/pgindent/indent.bsd.patch.\n> ./backend/parser/gram.c\n> Error@5251: #if stack overflow\n> Error@5252: #if stack overflow\n> Error@5263: Unmatched #endif\n> Error@5264: Unmatched #endif\n\nThis is expected. 
Gram.c is generated from gram.y, so there is no real\nneed to indent it.\n\n> Hope you installed /src/tools/pgindent/indent.bsd.patch.\n> Hope you installed /src/tools/pgindent/indent.bsd.patch.\n> Hope you installed /src/tools/pgindent/indent.bsd.patch.\n> Hope you installed /src/tools/pgindent/indent.bsd.patch.\n> Hope you installed /src/tools/pgindent/indent.bsd.patch.\n> Hope you installed /src/tools/pgindent/indent.bsd.patch.\n> ./interfaces/ecpg/test/header_test.h\n> Error@19: Stuff missing from end of file.\n\nI haven't seen the egcs problem. In this case, it is getting confused\nby the inline SQL commands. No cause for concern.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 9 Sep 1999 15:44:44 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pgindent"
}
] |
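The exit-status probe discussed in the thread above (GNU indent 1.9.1 exits 0 on `--version`, while the patched BSD indent rejects the flag and exits 1) can be split into a probe and a pure classification rule. A hypothetical Python sketch — illustrative only, not pgindent's actual shell code — of that rule:

```python
# Hypothetical sketch of the indent --version probe from the thread.
# Per the reported outputs above: GNU indent understands --version and
# exits 0; the BSD indent that pgindent needs rejects the flag and
# exits nonzero.  The classification is kept separate from the probe
# so the rule itself can be exercised without an indent binary.
import subprocess

def classify_indent(exit_status):
    """Map the exit status of `indent --version` to a vendor guess."""
    return "gnu" if exit_status == 0 else "bsd-or-missing"

def probe_indent(cmd="indent"):
    """Run `cmd --version` quietly and classify the result."""
    try:
        proc = subprocess.run(
            [cmd, "--version"],
            stdin=subprocess.DEVNULL,
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        return classify_indent(proc.returncode)
    except FileNotFoundError:
        return "bsd-or-missing"
```

Keeping `classify_indent` pure makes the rule testable even on a machine with no `indent` on the PATH, which is exactly the failure mode the shell test in the thread is trying to report.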
[
{
"msg_contents": "> I am physician and I am very interested by possibilities postgresql\n> could offer to medical information management, specially in undeveloped\n> countries.\n> In the userguide you speake about a medical database and I would want to\n> contact people responsable of this project.\n\nI believe that this example was from the days when Postgres was\ndeveloping at Berkeley. I know that there are more recent projects\n(one of our contributors is an administrator at a hospital) and the\nbest way to find out about current projects is to post a question to a\nmailing list.\n\nDoes anyone have something going in the medical info area?\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Fri, 10 Sep 1999 13:47:34 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query about postgres medical database"
},
{
"msg_contents": "Hello!\n\n I work for Russian National Research Surgery Centre (www.med.ru). I have\na database of patient data on Novel server, and I am writing and debugging\nprograms to store the data in Postgres. The dedicated host is oper.med.ru.\n But I do not see anything special about the data - for me it is just\ndata that can be stored, searched and retrieved.\n\nOn Fri, 10 Sep 1999, Thomas Lockhart wrote:\n\n> > I am physician and I am very interested by possibilities postgresql\n> > could offer to medical information management, specially in undeveloped\n> > countries.\n> > In the userguide you speake about a medical database and I would want to\n> > contact people responsable of this project.\n> \n> I believe that this example was from the days when Postgres was\n> developing at Berkeley. I know that there are more recent projects\n> (one of our contributors is an administrator at a hospital) and the\n> best way to find out about current projects is to post a question to a\n> mailing list.\n> \n> Does anyone have something going in the medical info area?\n> \n> - Thomas\n> \n> -- \n> Thomas Lockhart\t\t\t\[email protected]\n> South Pasadena, California\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n",
"msg_date": "Fri, 10 Sep 1999 18:24:09 +0400 (MSD)",
"msg_from": "Oleg Broytmann <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Query about postgres medical database"
},
{
"msg_contents": "\nOn 10-Sep-99 Thomas Lockhart wrote:\n>> I am physician and I am very interested by possibilities postgresql\n>> could offer to medical information management, specially in undeveloped\n>> countries.\n>> In the userguide you speake about a medical database and I would want to\n>> contact people responsable of this project.\n> \n> I believe that this example was from the days when Postgres was\n> developing at Berkeley. I know that there are more recent projects\n> (one of our contributors is an administrator at a hospital) and the\n> best way to find out about current projects is to post a question to a\n> mailing list.\n> \n> Does anyone have something going in the medical info area?\n\nI was the maintainer/developer of postgresql based project for\nCity's Health department of St. Petersburg (RUSSIA),\nThis project includes financial part (medical insurance reports) and\nnumerouse statistical reports \n(sex/age/diagnozis corellation, loads of different clinics and so on)\n\nThis project starts 1995 and it's succesfuly working now but without me ... \n\n\n---\nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n* There will come soft rains ...\n",
"msg_date": "Fri, 10 Sep 1999 19:21:55 +0400 (MSD)",
"msg_from": "Dmitry Samersoff <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] Re: Query about postgres medical database"
},
{
"msg_contents": "Dmitry Samersoff wrote:\n\n> On 10-Sep-99 Thomas Lockhart wrote:\n> >> I am physician and I am very interested by possibilities postgresql\n> >> could offer to medical information management, specially in undeveloped\n> >> countries.\n> >> In the userguide you speake about a medical database and I would want to\n> >> contact people responsable of this project.\n> >\n> > I believe that this example was from the days when Postgres was\n> > developing at Berkeley. I know that there are more recent projects\n> > (one of our contributors is an administrator at a hospital) and the\n> > best way to find out about current projects is to post a question to a\n> > mailing list.\n> >\n> > Does anyone have something going in the medical info area?\n>\n> I was the maintainer/developer of postgresql based project for\n> City's Health department of St. Petersburg (RUSSIA),\n> This project includes financial part (medical insurance reports) and\n> numerouse statistical reports\n> (sex/age/diagnozis corellation, loads of different clinics and so on)\n>\n> This project starts 1995 and it's succesfuly working now but without me ...\n>\n> ---\n> Dmitry Samersoff, [email protected], ICQ:3161705\n> http://devnull.wplus.net\n> * There will come soft rains ...\n\nand >Oleg Broytman <[email protected]>\n>Hello!\n\n> I work for Russian National Research Surgery Centre (www.med.ru). I have\n>a database of patient data on Novel server, and I am writing and debugging\n>programs to store the data in Postgres. The dedicated host is oper.med.ru.\n> But I do not see anything special about the data - for me it is just\n>data that can be stored, searched and retrieved.\n\nHello!\nThank you for your messages, I am trying to organize a group in medical info\narea.\nIt would be a very important database sub-group with postgresql ad open code\nsource.\nI think data is specially interesting in this area. It is not only about\ndecisions to use integers or floats in the results of tests. 
It could be the\ncase to go and work with the definition of this types (Actually only simple\nmathematical relations like grams by liter for tests, and so on ). I will not\nwrite nothing about other areas where storing objects would be of interest (\nmedical imagery or treatement of biological signals... )\n\nBut finishing with science fiction, a medical database could be proposed like\nan exemple of code source for the postgresql_doc mantainers, like in the case\nof the V C++ GUI framework documentation by Bruce Wampler of the\nftp://www.objectcentral.com/wvrefman.121.tar.gz. and be of greate utility for\nend users.\nIt would have a conceptualisation model ( I think like in the v case with\nCoad-Yourdon notation ) and all the code of the database.\nIf it would be possible to start with the database, it would be easier to add\nextensions adapted for users.\n...and...and...\nI am not...not... a hacker but I have a lot...lot... of information about the\nmedical subject.\nThanks,\nRenato",
"msg_date": "Sun, 12 Sep 1999 10:56:09 +0000",
"msg_from": "renato barrios <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Query about postgres medical database"
}
] |
[
{
"msg_contents": "Ummmm\n\nOkee, this may be my setup being weird. I have been working on that\nvacuum analyze bug and that may have killed something in the pg tables.\n\nHowever, I am slightly concerned about this.\n\nI am using 6.5.2 beta (the tarball on the ftp site)\n\n-----------------------------\n\n$ createdb games\n% psql games\nWelcome to the POSTGRESQL interactive sql monitor:\n Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n[PostgreSQL 6.5.1 on i586-pc-linux-gnu, compiled by gcc -ggdb ]\n\n type \\? for help on slash commands\n type \\q to quit\n type \\g or terminate with semicolon to execute query\n You are currently connected to the database: games\n\ngames=> create table game (\ngames-> refnum serial\ngames-> );\nNOTICE: CREATE TABLE will create implicit sequence 'game_refnum_seq' for SERIAL column 'game.refnum'\nNOTICE: CREATE TABLE/UNIQUE will create implicit index 'game_refnum_key' for table 'game'\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nWe have lost the connection to the backend, so further processing is impossible. Terminating.\n\n--------------------------------------------\n\nHaving top running when I do this, the backend eats all of my\navailable memory in about 5 seconds, before crashing or just aborting\nwhen it has no memory left to alloacte.\n\nNow, I have realised a mistake I made. I set my postmaster to run at:\n\n/usr/bin/postmaster -d 30 -S -N 128 -B 256 -D/var/lib/pgsql/data > /tmp/postmasterout 2> /tmp/postmastererr\n\nI set the debug to be -30 instead of its maximum of -3\n\nOops, it works now, but surely setting the debug level too high by\naccident shouldnt cause a loop that eats everything in sight?\n\nI have tested this switching the debug value to be 30 or 3 and in all\ninstances the 30 crashes it and the 3 does not. 
I am just concerned\nthat this may be an indication of a real problem that may not just be\nsomething that happens when the command-line args are set wrongly.\n\nJust something that might bear looking into\n\n\t\t\t\t\t~Michael\n",
"msg_date": "Sat, 11 Sep 1999 01:35:13 +0100 (BST)",
"msg_from": "Michael Simms <[email protected]>",
"msg_from_op": true,
"msg_subject": "serial type"
},
{
"msg_contents": "Michael Simms <[email protected]> writes:\n> games=> create table game (\n> games-> refnum serial\n> games-> );\n> NOTICE: CREATE TABLE will create implicit sequence 'game_refnum_seq' for SERIAL column 'game.refnum'\n> NOTICE: CREATE TABLE/UNIQUE will create implicit index 'game_refnum_key' for table 'game'\n> pqReadData() -- backend closed the channel unexpectedly.\n\n> I set the debug to be -30 instead of its maximum of -3\n\nActually, 3 is not the maximum: 4 and 5 turn on dumping of parse and\nplan trees.\n\nWhat I find is that the parsetree dump attempt recurses infinitely,\nbecause the parser is producing a parsetree with circular references.\nThe ColumnDef node for refnum has a list of constraints, and one of the\nconstraints is a CONSTR_UNIQUE node that has a keys list that points\nright back at that same ColumnDef node. Try to dump it, and presto:\ninfinite recursion in the node print functions.\n\nI am not sure if this is a mistake in the construction of the parsetree\n(Thomas, what do you think?) or if the node print functions need to be\nmodified. I think it'd be easiest to alter the parsetree, though.\nPerhaps the UNIQUE constraint ought to be attached somewhere else.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 11 Sep 1999 11:56:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] serial type "
},
{
"msg_contents": "> I am not sure if this is a mistake in the construction of the parsetree\n> (Thomas, what do you think?) or if the node print functions need to be\n> modified. I think it'd be easiest to alter the parsetree, though.\n> Perhaps the UNIQUE constraint ought to be attached somewhere else.\n\nIf I understand the problem correctly, the \"column name\" field in the\nconstraint clause attached to the column node is being used to look up\nthe column node, resulting in a recursive infinite loop. Or is\nsomething else happening with direct pointers back to a parent node?\n\nThe CONSTR_UNIQUE node travels from gram.y to analyze.c attached to\nthe column node (it can also be specified as a table constraint, and\nis attached elsewhere for that case). Within transformCreateStmt(), I\nscan through these constraint nodes, filling in missing info, and\ncollecting them in a \"deferred list\" to look at later in the same\nroutine. I don't detach the constraint nodes from the column nodes at\nthat time, though it might be possible to do so.\n\nCan you clarify the problem for me? I'm afraid that I didn't pay\nenough attention to the thread :(\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 14 Sep 1999 14:16:46 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] serial type"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> I am not sure if this is a mistake in the construction of the parsetree\n>> (Thomas, what do you think?) or if the node print functions need to be\n>> modified. I think it'd be easiest to alter the parsetree, though.\n>> Perhaps the UNIQUE constraint ought to be attached somewhere else.\n\n> If I understand the problem correctly, the \"column name\" field in the\n> constraint clause attached to the column node is being used to look up\n> the column node, resulting in a recursive infinite loop. Or is\n> something else happening with direct pointers back to a parent node?\n\nThe problem is with direct pointers in the parse tree: the column\nnode has a list of constraint nodes attached to it, and the UNIQUE\nnode in that list has a keys field that points at the column node.\nThe node print routines try to recurse through this structure, and\nof course it's a never-ending recursion.\n\nBTW, it's not only SERIAL that causes the problem; plain old\n\tcreate table z2 (f1 int4 unique);\nwill crash the backend if you start psql with PGOPTIONS=\"-d5\".\n\nAs I said, I'm not sure if the answer is to change the parsetree\nrepresentation, or to try to make node print/read smarter about\nthis looping structure. But I'd incline to the first --- the\nlooped structure puts all sorts of tree-traversal algorithms at\nrisk.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 14 Sep 1999 10:39:44 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] serial type "
},
{
"msg_contents": "> The problem is with direct pointers in the parse tree: the column\n> node has a list of constraint nodes attached to it, and the UNIQUE\n> node in that list has a keys field that points at the column node.\n> The node print routines try to recurse through this structure, and\n> of course it's a never-ending recursion.\n> BTW, it's not only SERIAL that causes the problem; plain old\n> create table z2 (f1 int4 unique);\n> will crash the backend if you start psql with PGOPTIONS=\"-d5\".\n\nSure. The same structure is used to represent column *and* table\nconstraints; in the table case there is a need to refer to a column\nfrom within the structure since there is not explicit column context\nfrom a parent to use.\n\nbtw, the following *does* work wrt printing the parse tree:\n\npostgres=> create table tc (i int, unique(i));\nNOTICE: CREATE TABLE/UNIQUE will create implicit index\n 'tc_i_key' for table 'tc'\nCREATE\n\nI suppose I could add code to explicitly unlink UNIQUE constraints\nfrom the column-specific constraints, *or* we could change the\nstructure used for column constraints. I'd prefer mucking with the\nparse tree as shipped from gram.y as little as possible, of course,\nbut your point about trouble lurking with recursive structures is well\ntaken.\n\nSuggestions?\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 14 Sep 1999 14:58:30 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] serial type"
}
] |
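The circular reference Tom describes in the thread above (a ColumnDef whose constraint list holds a UNIQUE node whose keys list points back at that same ColumnDef) is a general hazard for any recursive tree-dump routine. A minimal, hypothetical Python sketch — the `ColumnDef`/`Constraint` names only mirror the discussion, this is not PostgreSQL's actual outfuncs code — of guarding such a printer by tracking the recursion path:

```python
# Hypothetical sketch of a cycle-safe node printer for a parse tree
# with back-pointers, as in the ColumnDef <-> UNIQUE constraint loop
# described in the thread.  Nodes are plain dicts for illustration.

def dump_node(node, seen=None, depth=0):
    """Recursively render a node tree, breaking out of reference cycles."""
    if seen is None:
        seen = set()
    indent = "  " * depth
    if id(node) in seen:
        # A back-pointer to a node already on the recursion path:
        # emit a marker instead of recursing forever.
        return indent + "<cycle: %s>" % node["tag"]
    seen.add(id(node))
    lines = [indent + node["tag"]]
    for child in node.get("children", []):
        lines.append(dump_node(child, seen, depth + 1))
    seen.remove(id(node))  # track the path, not all visited nodes
    return "\n".join(lines)

# Rebuild the looping structure from the thread: a column whose
# UNIQUE constraint points back at the column itself.
column = {"tag": "ColumnDef", "children": []}
unique = {"tag": "Constraint(UNIQUE)", "children": [column]}
column["children"].append(unique)

print(dump_node(column))
```

Discarding each node id on the way back up means shared-but-acyclic subtrees still print normally; only true back-pointers on the current path trigger the marker. That corresponds to the second option weighed in the thread (making the print functions loop-aware) rather than the first (flattening the parse tree itself).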
[
{
"msg_contents": "Following case statement is legal but fails in 6.5.1.\n\ndrop table t1;\nDROP\ncreate table t1(i int);\nCREATE\ninsert into t1 values(-1);\nINSERT 4047465 1\ninsert into t1 values(0);\nINSERT 4047466 1\ninsert into t1 values(1);\nINSERT 4047467 1\n\nselect i,\n case\n when i < 0 then 'minus'\n when i = 0 then 'zero'\n when i > 0 then 'plus'\n else null\n end\nfrom t1;\nERROR: Unable to locate type oid 0 in catalog\n\nnote that:\n\nselect i,\n case\n when i < 0 then 'minus'\n when i = 0 then 'zero'\n when i > 0 then 'plus'\n end\nfrom t1;\n\nalso causes the same error.\n---\nTatsuo Ishii\n",
"msg_date": "Sat, 11 Sep 1999 17:01:27 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "case bug?"
},
{
"msg_contents": "> Following case statement is legal but fails in 6.5.1.\n> select i,\n> case\n> when i < 0 then 'minus'\n> when i = 0 then 'zero'\n> when i > 0 then 'plus'\n> else null\n> end\n> from t1;\n> ERROR: Unable to locate type oid 0 in catalog\n\nHmm. Works OK when *any* of the result values have a type associated\nwith them, and has trouble when they are all of unspecified type,\nwhich afaik can only happen with strings. Patch enclosed; I haven't\ntested much but it *should* be very safe; I had protected against this\ncase elsewhere in the same routine.\n\n(different test values, but same schema)\n\nOriginal code:\n\n select i,\n case\n when i < 0 then 'minus'\n when i = 0 then 'zero'\n when i > 0 then 'plus'::text\n else null\n end\n from t1;\ni|case\n-+----\n1|plus\n2|plus\n3|plus\n(3 rows)\n\nAfter patching:\n\n select i,\n case\n when i < 0 then 'minus'\n when i = 0 then 'zero'\n when i > 0 then 'plus'\n else null\n end\n from t1;\ni|case\n-+----\n1|plus\n2|plus\n3|plus\n(3 rows)\n\nCan you please exercise it and let me know if you are happy? After\nthat I'll commit to CURRENT and RELEASE trees...\n\nOh, I've found another case which has trouble, and have not yet fixed\nit:\n\n insert into t2(i)\n select case when i > 0 then '0' else null end from t1;\nINSERT 0 3\npostgres=> select * from t2;\n i| x\n---------+---\n137173488| \n137173488| \n137173488| \n\nIt's never doing a conversion at all, and is putting (probably) the\npointer to the character string into the int4 result :(\n\nWorks OK when the string type is coerced:\n\n insert into t2(i)\n select case when i > 0 then '0'::int4 else null end from t1;\npostgres=> select * from t2;\n i| x\n---------+---\n 0| \n 0| \n 0| \n\n - Tom\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California",
"msg_date": "Sat, 11 Sep 1999 13:41:59 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] case bug?"
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> Following case statement is legal but fails in 6.5.1.\n> select i,\n> case\n> when i < 0 then 'minus'\n> when i = 0 then 'zero'\n> when i > 0 then 'plus'\n> else null\n> end\n> from t1;\n> ERROR: Unable to locate type oid 0 in catalog\n\nStill there in current sources, too. Looks like it's the \"else null\"\nthat triggers the problem --- probably the code that is resolving the\nfinal output type of the CASE expression isn't coping with a null.\n\nI think this is Lockhart's turf, but I can have a go at it if he hasn't\ngot time to work on it...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 11 Sep 1999 11:34:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] case bug? "
},
{
"msg_contents": "Thomas,\n\n> Can you please exercise it and let me know if you are happy? After\n> that I'll commit to CURRENT and RELEASE trees...\n\nLooks ok for me. Thanks.\n---\nTatsuo Ishii\n",
"msg_date": "Sun, 12 Sep 1999 09:57:04 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] case bug? "
}
] |
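Thomas's diagnosis above turns on how a common result type is chosen for a CASE expression: when every THEN/ELSE arm is an unknown-type value (a bare string literal or NULL), there is no concrete type to look up, and the pre-patch code fell through with "type oid 0". A hypothetical Python sketch of that resolution rule — illustrative only, not the backend's actual transform code — where "unknown" stands in for untyped literals:

```python
# Hypothetical sketch of CASE result-type resolution as discussed in
# the thread.  "unknown" models untyped string literals and bare
# NULLs; defaulting to "text" mirrors the post-patch behavior, where
# an all-unknown CASE acts like one with an explicit ::text coercion.

def resolve_case_type(arm_types):
    """Pick a result type from the types of all THEN/ELSE arms."""
    known = [t for t in arm_types if t != "unknown"]
    if not known:
        # Pre-patch this path produced type OID 0; fall back to text.
        return "text"
    first = known[0]
    if any(t != first for t in known):
        raise ValueError("CASE arms have incompatible types: %s" % known)
    return first

# All arms are untyped string literals plus a bare NULL -> text.
print(resolve_case_type(["unknown", "unknown", "unknown"]))   # text
# One arm carries an explicit cast, e.g. 'plus'::text -> text wins.
print(resolve_case_type(["unknown", "text", "unknown"]))      # text
print(resolve_case_type(["int4", "int4"]))                    # int4
```

The second failure in the thread (a string literal silently landing in an int4 column) is the dual problem: the arm type was resolved, but no conversion from the unknown literal to that type was ever inserted, so a coercion step would still be needed after resolution.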
[
{
"msg_contents": "Michael Simms was kind enough to give me login privileges on his system\nto poke at his problems with vacuum running concurrently with table\ncreate/drop operations. I am not sure why his setup seems to display\nthe problem easier than mine does, but it's certainly true that crashes\noccur very easily there, whereas it often takes many tries for me.\n\nAnyway, I am now convinced that his symptoms are indeed explained by the\nlocking and cache-invalidation problems we have been discussing. I saw\na number of different failures, but they all seemed to trace back to one\nof two common themes:\n\n(1) The non-vacuuming backend crashes because of accessing a\nsystem-relation tuple that isn't in the same place anymore: the tuple\nis found in the local syscache, but the item location recorded there is\nstale because vacuum has moved the tuple, and the non-vacuum process\nhasn't noticed the SI update message for it yet.\n\n(2) The vacuuming backend can fail because of trying to vacuum a\nrelation that's already been deleted. This can be blamed on the known\nbug that DROP TABLE releases its exclusive lock on the target table\nbefore end of transaction.\n\nI expect there are also failures due to the lack-of-lock problems that\nHiroshi recently identified, but I didn't happen to see any of those in\nthe limited number of cases that I watched with the debugger.\n\nSo, it looks like a solution involves two components: first, being more\ncareful to lock system relations appropriately, and second, being sure\nthat SI messages are seen soon enough. I think the read-SI-messages-\nat-lock-time code that's already in place for 6.6 will be sufficient for\nthe second point, if we are religious about acquiring appropriate locks.\n(BTW, I think that in most cases an appropriate lock on a system table\nwill be less strong than AccessExclusiveLock --- Vadim, do you agree?)\n\nOnce we have the changes, the next question is do we want to risk\nback-patching them into 6.5.2? 
I can see several ways that we could\nproceed:\n1. Back-patch into REL6_5, and postpone 6.5.2 release for a while\n for beta-testing.\n2. Put out 6.5.2 now (since it already has several other useful fixes),\n then back-patch, and release 6.5.3 after a beta-testing interval.\n3. Leave these changes out of 6.5.*, and try to get 6.6 out the door\n soon instead.\n\nI am not eager to hurry 6.6 along --- I have a lot of half-done work\nin the planner/optimizer that I'd like to finish for 6.6. Perhaps\nchoice #2 is the way to go. Comments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 11 Sep 1999 13:57:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fixing Simms' vacuum problems"
},
{
"msg_contents": "> Once we have the changes, the next question is do we want to risk\n> back-patching them into 6.5.2? I can see several ways that we could\n> proceed:\n> 1. Back-patch into REL6_5, and postpone 6.5.2 release for a while\n> for beta-testing.\n> 2. Put out 6.5.2 now (since it already has several other useful fixes),\n> then back-patch, and release 6.5.3 after a beta-testing interval.\n> 3. Leave these changes out of 6.5.*, and try to get 6.6 out the door\n> soon instead.\n> \n> I am not eager to hurry 6.6 along --- I have a lot of half-done work\n> in the planner/optimizer that I'd like to finish for 6.6. Perhaps\n> choice #2 is the way to go. Comments?\n> \n> \t\t\tregards, tom lane\n\nI woudl also suggest number 2 would be best for all. It means teh bugfix for\nmy (and potentially other peoples) problems gets fixed before 6.6 but there\nis no delay to the 6.5.2 bugfixes being released.\n\nI am curious, is there a reason that there is not a regular release of the\ndevelopment tree also? I am aware we can get it through CVS to hammer\non it, but releases would be easier in many ways, certainly easier to develop\npatches against.\n\nJust a thought, as it seems that the linux kernel benefits greatly from\nthis approach.\n\nAs a final word, I would like to thank tom for his looking into\nthe problem. I have been really impressed with the responses\nof the postgresql developers, they seem to be a lot more approachable and\nwilling to fix problems than in most other open source systems I have\nseen.\nHopefully when I get a bit more time and get more familiar with the\npostgresql code, I'll be able to actually provide some solutions\ninstead of just breaking it and telling you lot {:-)\n\nThanks!\n\n\t\t\t\t\t~Michael\n",
"msg_date": "Sat, 11 Sep 1999 19:39:38 +0100 (BST)",
"msg_from": "Michael Simms <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fixing Simms' vacuum problems"
},
{
"msg_contents": "> Once we have the changes, the next question is do we want to risk\n> back-patching them into 6.5.2? I can see several ways that we could\n> proceed:\n> 1. Back-patch into REL6_5, and postpone 6.5.2 release for a while\n> for beta-testing.\n> 2. Put out 6.5.2 now (since it already has several other useful fixes),\n> then back-patch, and release 6.5.3 after a beta-testing interval.\n> 3. Leave these changes out of 6.5.*, and try to get 6.6 out the door\n> soon instead.\n> \n> I am not eager to hurry 6.6 along --- I have a lot of half-done work\n> in the planner/optimizer that I'd like to finish for 6.6. Perhaps\n> choice #2 is the way to go. Comments?\n\nSeems #2 is good choice for me too.\n---\nTatsuo Ishii\n",
"msg_date": "Sun, 12 Sep 1999 09:56:58 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Fixing Simms' vacuum problems "
},
{
"msg_contents": "On Sat, 11 Sep 1999, Tom Lane wrote:\n\n> Once we have the changes, the next question is do we want to risk\n> back-patching them into 6.5.2? I can see several ways that we could\n> proceed:\n> 1. Back-patch into REL6_5, and postpone 6.5.2 release for a while\n> for beta-testing.\n> 2. Put out 6.5.2 now (since it already has several other useful fixes),\n> then back-patch, and release 6.5.3 after a beta-testing interval.\n> 3. Leave these changes out of 6.5.*, and try to get 6.6 out the door\n> soon instead.\n> \n> I am not eager to hurry 6.6 along --- I have a lot of half-done work\n> in the planner/optimizer that I'd like to finish for 6.6. Perhaps\n> choice #2 is the way to go. Comments?\n\nOption 2 makes *me* feel the most comfortable...we were holding off on\n6.5.2 due to some things ppl were working on...are those complete? I can\nroll out a 6.5.2 tonight if everyone feel comfortable with it, or wait for\na few days (Wednesday?) to make sure all is iron'd out?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sun, 12 Sep 1999 17:46:54 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Fixing Simms' vacuum problems"
},
{
"msg_contents": "On Sat, 11 Sep 1999, Michael Simms wrote:\n\n> I am curious, is there a reason that there is not a regular release of the\n> development tree also? I am aware we can get it through CVS to hammer\n> on it, but releases would be easier in many ways, certainly easier to develop\n> patches against.\n\nftp://ftp.postgresql.org/pub/postgresql-snapshot.tar.gz\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sun, 12 Sep 1999 17:47:50 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Fixing Simms' vacuum problems"
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> Option 2 makes *me* feel the most comfortable...we were holding off on\n> 6.5.2 due to some things ppl were working on...are those complete? I can\n> roll out a 6.5.2 tonight if everyone feel comfortable with it, or wait for\n> a few days (Wednesday?) to make sure all is iron'd out?\n\nI don't have any more code changes that I want to try to squeeze into\n6.5.2, but I thought Bruce still needed to update the change log etc\netc. Dunno about the rest of the crew; anyone have more to do?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 12 Sep 1999 19:32:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Fixing Simms' vacuum problems "
},
{
"msg_contents": "> The Hermit Hacker <[email protected]> writes:\n> > Option 2 makes *me* feel the most comfortable...we were holding off on\n> > 6.5.2 due to some things ppl were working on...are those complete? I can\n> > roll out a 6.5.2 tonight if everyone feel comfortable with it, or wait for\n> > a few days (Wednesday?) to make sure all is iron'd out?\n> \n> I don't have any more code changes that I want to try to squeeze into\n> 6.5.2, but I thought Bruce still needed to update the change log etc\n> etc. Dunno about the rest of the crew; anyone have more to do?\n\nYes, I have to do that.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 12 Sep 1999 20:06:41 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Fixing Simms' vacuum problems"
},
{
"msg_contents": "> The Hermit Hacker <[email protected]> writes:\n> > Option 2 makes *me* feel the most comfortable...we were holding off on\n> > 6.5.2 due to some things ppl were working on...are those complete? I can\n> > roll out a 6.5.2 tonight if everyone feel comfortable with it, or wait for\n> > a few days (Wednesday?) to make sure all is iron'd out?\n> \n> I don't have any more code changes that I want to try to squeeze into\n> 6.5.2, but I thought Bruce still needed to update the change log etc\n> etc. Dunno about the rest of the crew; anyone have more to do?\n> \n\nI have updated everything needed for 6.5.2. Thomas, can you update the\nHISTORY file for 6.5.2. Thanks.\n\nThis is good timing. I just finished a 4-month project yesterday.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 12 Sep 1999 22:47:47 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Fixing Simms' vacuum problems]"
},
{
"msg_contents": "On Sun, 12 Sep 1999, Bruce Momjian wrote:\n\n> > The Hermit Hacker <[email protected]> writes:\n> > > Option 2 makes *me* feel the most comfortable...we were holding off on\n> > > 6.5.2 due to some things ppl were working on...are those complete? I can\n> > > roll out a 6.5.2 tonight if everyone feel comfortable with it, or wait for\n> > > a few days (Wednesday?) to make sure all is iron'd out?\n> > \n> > I don't have any more code changes that I want to try to squeeze into\n> > 6.5.2, but I thought Bruce still needed to update the change log etc\n> > etc. Dunno about the rest of the crew; anyone have more to do?\n> > \n> \n> I have updated everything needed for 6.5.2. Thomas, can you update the\n> HISTORY file for 6.5.2. Thanks.\n\nOkay, will wrap 6.5.2 on Tuesday evening then...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sun, 12 Sep 1999 23:54:41 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Fixing Simms' vacuum problems]"
},
{
"msg_contents": "> I don't have any more code changes that I want to try to squeeze into\n> 6.5.2, but I thought Bruce still needed to update the change log etc\n> etc. Dunno about the rest of the crew; anyone have more to do?\n\nI should put in my recent fix for Tatsuo regarding unspecified string\ntypes in case statements. Should get to it this evening (Monday\nmorning, GMT)...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Mon, 13 Sep 1999 04:05:28 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Fixing Simms' vacuum problems"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> So, it looks like a solution involves two components: first, being more\n> careful to lock system relations appropriately, and second, being sure\n> that SI messages are seen soon enough. I think the read-SI-messages-\n> at-lock-time code that's already in place for 6.6 will be sufficient for\n> the second point, if we are religious about acquiring appropriate locks.\n> (BTW, I think that in most cases an appropriate lock on a system table\n> will be less strong than AccessExclusiveLock --- Vadim, do you agree?)\n\nExclusiveLock should be ok.\n\nVadim\n",
"msg_date": "Tue, 14 Sep 1999 02:22:40 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Fixing Simms' vacuum problems"
}
]
[
{
"msg_contents": "I have finished applying Mike Ansley's changes for long queries, along\nwith a bunch of my own. The current status is:\n\n* You can send a query string of indefinite length to the backend.\n (This is poorly tested for MULTIBYTE, though; would someone who\n uses MULTIBYTE more than I do try it out?)\n\n* You can get back an EXPLAIN or error message string of indefinite\n length.\n\n* Single lexical tokens within a query are currently limited to 64k\n because of the lexer's use of YY_REJECT. I have not committed any\n of Leon's proposed lexer changes, since that issue still seems\n controversial. I would like to see us agree on a solution.\n (ecpg's lexer has the same problem, of course.)\n\nAlthough I think the backend is in fairly good shape, there are still\na few minor trouble spots. (The rule deparser will blow up at 8K for\nexample --- I have some work to do in there and will fix it when\nI get a chance.)\n\nIn the frontend libraries and clients, both libpq and psql are length-\nlimit-free. I have not looked much at any of the other frontend\ninterface libraries. I suspect that at least odbc and the python\ninterface need work, because quick glimpse searches show suspicious-\nlooking constants:\n\tMAX_QUERY_SIZE\n\tERROR_MSG_LENGTH\n\tSQL_PACKET_SIZE\n\tMAX_MESSAGE_LEN\n\tTEXT_FIELD_SIZE\n\tMAX_VARCHAR_SIZE\n\tDRV_VARCHAR_SIZE\n\tDRV_LONGVARCHAR_SIZE\n\tMAX_BUFFER_SIZE\n\tMAX_FIELDS\n\nThe real problem in the clients is that pg_dump blithely assumes it\nwill never need to deal with strings over MAX_QUERY_SIZE. This is\na bad idea --- it ought to be rewritten to use the expansible-string-\nbuffer facility that now exists in libpq. There may be restrictions\nin the other programs in bin/ as well, though glimpse didn't turn up\nany red flags.\n\nI would like to encourage the odbc and python folks to get rid of the\nlength limitations in their modules; I don't use either and have no\nintention of touching either. 
I'd like to find a volunteer other than\nmyself to fix pg_dump, too.\n\nNow, all we need is someone to implement multiple-disk-block tuples ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 11 Sep 1999 18:50:16 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Status report: long-query-string changes"
},
{
"msg_contents": "Tom Lane wrote:\n\n> \n> * Single lexical tokens within a query are currently limited to 64k\n> because of the lexer's use of YY_REJECT. I have not committed any\n> of Leon's proposed lexer changes, since that issue still seems\n> controversial. I would like to see us agree on a solution.\n\nThomas Lockhart should speak up - he seems the only person who\nhas objections yet. If the proposed thing is to be declined, something\nhas to be applied instead in respect to lexer reject feature and\naccompanying size limits, as well as grammar inconsistency. Seems there\nare only awkward solutions as alternatives. As you probably remember,\nthe proposed change only breaks constructs like 1+-2, which anyone\nin a sane condition should avoid when programming :)\n\nThere are more size restrictions there. I noticed (by simply eyeing the\nlexer source, without testing) that in case of flex lexer \n(FLEX_LEXER being defined in scan.c) lexer can't\nswallow big queries. You (Tom and Michael) aren't using flex,\nare you?\n\n-- \nLeon.\n-------\nHe knows he'll never have to answer for any of his theories actually \nbeing put to test. If they were, they would be contaminated by reality.\n\n",
"msg_date": "Sun, 12 Sep 1999 04:22:01 +0500",
"msg_from": "Leon <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Status report: long-query-string changes"
},
{
"msg_contents": "Leon <[email protected]> writes:\n> There are more size restrictions there. I noticed (by simply eyeing the\n> lexer source, without testing) that in case of flex lexer \n> (FLEX_LEXER being defined in scan.c) lexer can't\n> swallow big queries. You (Tom and Michael) aren't using flex,\n> are you?\n\nHuh? flex is the only lexer that works with the Postgres .l files,\nas far as I know. Certainly it's what I'm using.\n\nIf you're looking at the \"literal\" buffer, that would need to be made\nexpansible, but there's not much point until flex's internal stuff is\nfixed.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 11 Sep 1999 20:38:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Status report: long-query-string changes "
},
{
"msg_contents": "> I have finished applying Mike Ansley's changes for long queries, along\n> with a bunch of my own. The current status is:\n> \n> * You can send a query string of indefinite length to the backend.\n> (This is poorly tested for MULTIBYTE, though; would someone who\n> uses MULTIBYTE more than I do try it out?)\n\nI'll take care of this.\n---\nTatsuo Ishii\n",
"msg_date": "Sun, 12 Sep 1999 10:00:46 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Status report: long-query-string changes "
},
{
"msg_contents": "Tom Lane wrote:\n\n> \n> If you're looking at the \"literal\" buffer, that would need to be made\n> expansible, but there's not much point until flex's internal stuff is\n> fixed.\n> \n> \n\nLook at this piece of code. It seems that when myinput() had been called\nonce, for the second time it will return 0 even if string isn't\nover yet. Parameter 'max' is 8192 bytes on my system. So the query is \nsimply truncated to that size.\n\n#ifdef FLEX_SCANNER\n/* input routine for flex to read input from a string instead of a file */\nstatic int\nmyinput(char* buf, int max)\n{\n\tint len, copylen;\n\n\tif (parseCh == NULL)\n\t{\n\t\tlen = strlen(parseString);\n\t\tif (len >= max)\n\t\t\tcopylen = max - 1;\n\t\telse\n\t\t\tcopylen = len;\n\t\tif (copylen > 0)\n\t\t\tmemcpy(buf, parseString, copylen);\n\t\tbuf[copylen] = '\\0';\n\t\tparseCh = parseString;\n\t\treturn copylen;\n\t}\n\telse\n\t\treturn 0; /* end of string */\n}\n#endif /* FLEX_SCANNER */\n \n\n-- \nLeon.\n-------\nHe knows he'll never have to answer for any of his theories actually \nbeing put to test. If they were, they would be contaminated by reality.\n\n",
"msg_date": "Sun, 12 Sep 1999 17:12:02 +0500",
"msg_from": "Leon <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Status report: long-query-string changes"
},
{
"msg_contents": "Leon <[email protected]> writes:\n> Look at this piece of code. It seems that when myinput() had been called\n> once, for the second time it will return 0 even if string isn't\n> over yet.\n\nIt's always a good idea to pull a fresh copy of the sources\nbefore opinionating about what works or doesn't work in someone's\njust-committed changes ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 12 Sep 1999 11:34:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Status report: long-query-string changes "
},
{
"msg_contents": "> Thomas Lockhart should speak up - he seems the only person who\n> has objections yet. If the proposed thing is to be declined, something\n> has to be applied instead in respect to lexer reject feature and\n> accompanying size limits, as well as grammar inconsistency.\n\nHmm. I'd suggest that we go with the \"greedy lexer\" solution, which\ncontinues to gobble characters which *could* be an operator until\nother characters or whitespace are encountered.\n\nI don't recall any compelling cases for which this would be an\ninadequate solution, and we have plenty of time until v6.6 is released\nto discover problems and work out alternatives.\n\nSorry for slowing things up; but fwiw I *did* think about it some more\n;)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Mon, 13 Sep 1999 03:33:08 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Status report: long-query-string changes"
},
{
"msg_contents": "> Thomas Lockhart should speak up...\n> He knows he'll never have to answer for any of his theories actually\n> being put to test. If they were, they would be contaminated by reality.\n\nYou talkin' to me?? ;)\n\nSo, while you are on the lexer warpath, I'd be really happy if someone\nwould fix the following behavior:\n\n(I'm doing this from memory, but afaik it is close to correct)\n\nFor non-psql applications, such as tcl or ecpg, which do not do any\npre-processing on input tokens, a trailing un-terminated string will\nbe lost, and no error will be detected. For example,\n\nselect * from t1 'abc\n\nsent directly to the server will not fail as it should with that\ngarbage at the end. The lexer is in a non-standard mode after all\ntokens are processed, and the accumulated string \"abc\" is left in a\nbuffer and not sent to yacc/bison. I think you can see this behavior\njust by looking at the lexer code.\n\nA simple fix would be to check the current size after lexing of that\naccumulated string buffer, and if it is non-zero then elog(ERROR) a\ncomplaint. Perhaps a more general fix would be to ensure that you are\nnever in an exclusive state after all tokens are processed, but I'm\nnot sure how to do that.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Mon, 13 Sep 1999 03:45:28 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Status report: long-query-string changes"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> \n> > Thomas Lockhart should speak up - he seems the only person who\n> > has objections yet. If the proposed thing is to be declined, something\n> > has to be applied instead in respect to lexer reject feature and\n> > accompanying size limits, as well as grammar inconsistency.\n> \n> Hmm. I'd suggest that we go with the \"greedy lexer\" solution, which\n> continues to gobble characters which *could* be an operator until\n> other characters or whitespace are encountered.\n\n'Xcuse my dumbness ;) , but is it in any way different from \nwhat is proposed (by me and some others?)\n\n-- \nLeon.\n-------\nHe knows he'll never have to answer for any of his theories actually \nbeing put to test. If they were, they would be contaminated by reality.\n\n\n",
"msg_date": "Mon, 13 Sep 1999 18:34:56 +0500",
"msg_from": "Leon <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Status report: long-query-string changes"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> \n> > Thomas Lockhart should speak up...\n> > He knows he'll never have to answer for any of his theories actually\n> > being put to test. If they were, they would be contaminated by reality.\n> \n> You talkin' to me?? ;)\n\nNein, nein! Sei still bitte! :) This is my signature which is a week \nold already :)\n\n> A simple fix would be to check the current size after lexing of that\n> accumulated string buffer, and if it is non-zero then elog(ERROR) a\n> complaint. Perhaps a more general fix would be to ensure that you are\n> never in an exclusive state after all tokens are processed, but I'm\n> not sure how to do that.\n\nThe solution is obvious - to eliminate exclusive states entirely!\nBanzai!!!\n\n-- \nLeon.\n-------\nHe knows he'll never have to answer for any of his theories actually \nbeing put to test. If they were, they would be contaminated by reality.\n\n",
"msg_date": "Mon, 13 Sep 1999 18:35:12 +0500",
"msg_from": "Leon <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Status report: long-query-string changes"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> \n> > The solution is obvious - to eliminate exclusive states entirely!\n> > Banzai!!!\n> \n> That will complicate the lexer, and make it more brittle and difficult\n> to read, since you will have to, essentially, implement the exclusive\n> states using flags within each element.\n> \n> If you want to try it as an exercise, we *might* find it isn't as ugly\n> as I am afraid it will be, but...\n> \n\nGimme the latest lexer source. (I pay for my Internet on a per \nminute basis, so I can't connect to CVS) You will see what I mean.\n\n-- \nLeon.\n-------\nHe knows he'll never have to answer for any of his theories actually \nbeing put to test. If they were, they would be contaminated by reality.\n\n",
"msg_date": "Mon, 13 Sep 1999 19:33:27 +0500",
"msg_from": "Leon <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Status report: long-query-string changes"
},
{
"msg_contents": "Leon <[email protected]> writes:\n>> A simple fix would be to check the current size after lexing of that\n>> accumulated string buffer, and if it is non-zero then elog(ERROR) a\n>> complaint. Perhaps a more general fix would be to ensure that you are\n>> never in an exclusive state after all tokens are processed, but I'm\n>> not sure how to do that.\n\n> The solution is obvious - to eliminate exclusive states entirely!\n> Banzai!!!\n\nCan we do that? Seems like a more likely approach is to ensure that\nall of the lexer states have rules that ensure they terminate (or\nraise an error, as for unterminated quoted string) at end of input.\nI do think checking the token buffer is a hack, and changing the rules\na cleaner solution...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 Sep 1999 10:53:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Status report: long-query-string changes "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Leon <[email protected]> writes:\n> >> A simple fix would be to check the current size after lexing of that\n> >> accumulated string buffer, and if it is non-zero then elog(ERROR) a\n> >> complaint. Perhaps a more general fix would be to ensure that you are\n> >> never in an exclusive state after all tokens are processed, but I'm\n> >> not sure how to do that.\n> \n> > The solution is obvious - to eliminate exclusive states entirely!\n> > Banzai!!!\n> \n> Can we do that? Seems like a more likely approach is to ensure that\n> all of the lexer states have rules that ensure they terminate (or\n> raise an error, as for unterminated quoted string) at end of input.\n> I do think checking the token buffer is a hack, and changing the rules\n> a cleaner solution...\n\nHmm, yea, you are right. It is much simpler solution. We can check \ncondition in myinput() and input() when we are going to return \nend-of-input (YYSTATE == INITIAL), and raise an error if that's not so.\nWell, I give up my idea of total extermination of start conditions :)\n\nBTW, while eyeing the scan.l again, I noticed that C - style comments\ncan also contain bugs, but I am not completely sure.\n-- \nLeon.\n-------\nHe knows he'll never have to answer for any of his theories actually \nbeing put to test. If they were, they would be contaminated by reality.\n",
"msg_date": "Mon, 13 Sep 1999 23:19:50 +0500",
"msg_from": "Leon <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Status report: long-query-string changes"
}
]
[
{
"msg_contents": "Hi,\n\nSorry for the intrusion, but the www-based bug tracking seems to be down\n(URL not found by the www server).\nMy problem: I'm testing 6.5.1 on a Linux (old RedHat 4.2,libc5) box.\nI did the regression tests, and int2 ant int4 failed, but int8 was ok.\nBut this is the minor problem, maybe my Linux is outdated...\n\nThe other problem, this seems to be a real one:\n1. I create a table with a primary key\n2. With ALTER TABLE RENAME I change the name of the table...\n3. The name of the primary key index does not follow the table...\n4. When I try to remove the index, no success, even renaming the table\n back does not help (is not possible)\n\nBest regards,\n\n --\n GA'BRIEL, A'kos ([email protected]) \n Forte system administrator of Lufthansa Systems Hungary \n Forte and UNIX consultant\n Phone: (+36-1) 4312-979 FAX: (+36-1) 4312-977\n\nPS: I'll try to install Postgres on a Sun Ultra Enterprise 2*300MHz,\n 1GB RAM machine... If anyone interested, I may supply test results :)\n\n\n",
"msg_date": "Sun, 12 Sep 1999 10:09:25 +0200 (MET DST)",
"msg_from": "Gabriel Akos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Possible bug..."
},
{
"msg_contents": "Gabriel Akos <[email protected]> writes:\n> I did the regression tests, and int2 ant int4 failed, but int8 was ok.\n\nThey're probably OK, just platform-specific variations in error message\nwording. Did you examine regression.diffs?\n\n> 1. I create a table with a primary key\n> 2. With ALTER TABLE RENAME I change the name of the table...\n> 3. The name of the primary key index does not follow the table...\n\nIt wouldn't, and doesn't need to.\n\n> 4. When I try to remove the index, no success, even renaming the table\n> back does not help (is not possible)\n\nALTER TABLE RENAME is pretty broken, I think --- in current sources it\nfails even worse than above. (Looks like it needs to flush dirty\nbuffers for the rel before changing the name of the underlying Unix\nfiles --- else mdblindwrt fails later on.) You might find that killing\nand restarting the postmaster will bring things back to a consistent\nstate.\n\nIn general, Postgres' support for ALTER TABLE is very weak; there are\na lot of cases that aren't handled correctly. Perhaps someone will\nstep up to the plate and improve it someday.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 12 Sep 1999 12:08:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Possible bug... "
}
]
[
{
"msg_contents": "\nBefore I dive into this, has anyone else noticed that the SRCH_INC and\nSRCH_LIB options in the templates no longer work?\n\nTo test, I modified template/freebsd as follows:\n\nAROPT:cq\nSHARED_LIB:-fpic -DPIC\nCFLAGS:-O2 -m486 -pipe\nSRCH_INC:/usr/local/include\nSRCH_LIB:\nUSE_LOCALE:no\nDLSUFFIX:.so\nYFLAGS:-d\nYACC:bison -y\n\nAdding /usr/local/include to SRCH_INC, but when I run configure, it\nreports:\n\n- setting CPPFLAGS=\n- setting LDFLAGS=\n\nI'm suspecting this code in configure.in:\n\n[\nrm -f conftest.sh\nsed 's/^\\([A-Za-z_]*\\):\\(.*\\)$/\\1=\"\\2\"/' \"template/$TEMPLATE\" >conftest.sh\n. ./conftest.sh\nrm -f conftest.sh\n]\n\nIsn't working as expected...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sun, 12 Sep 1999 19:39:41 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "configure & template files ..."
},
{
"msg_contents": "\nFound and fixed...we checked for INCLUDE_DIR being defined, but not\nSRCH_INC...same for libs...\n\nOn Sun, 12 Sep 1999, The Hermit Hacker wrote:\n\n> \n> Before I dive into this, has anyone else noticed that the SRCH_INC and\n> SRCH_LIB options in the templates no longer work?\n> \n> To test, I modified template/freebsd as follows:\n> \n> AROPT:cq\n> SHARED_LIB:-fpic -DPIC\n> CFLAGS:-O2 -m486 -pipe\n> SRCH_INC:/usr/local/include\n> SRCH_LIB:\n> USE_LOCALE:no\n> DLSUFFIX:.so\n> YFLAGS:-d\n> YACC:bison -y\n> \n> Adding /usr/local/include to SRCH_INC, but when I run configure, it\n> reports:\n> \n> - setting CPPFLAGS=\n> - setting LDFLAGS=\n> \n> I'm suspecting this code in configure.in:\n> \n> [\n> rm -f conftest.sh\n> sed 's/^\\([A-Za-z_]*\\):\\(.*\\)$/\\1=\"\\2\"/' \"template/$TEMPLATE\" >conftest.sh\n> . ./conftest.sh\n> rm -f conftest.sh\n> ]\n> \n> Isn't working as expected...\n> \n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n> \n> ************\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sun, 12 Sep 1999 19:50:57 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] configure & template files ..."
}
]
[
{
"msg_contents": "Do people want pgaccess updated to 0.98 for the 6.5.2 release?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 12 Sep 1999 20:28:51 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgaccess update for 6.5.2?"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Do people want pgaccess updated to 0.98 for the 6.5.2 release?\n\nIt would be just fine.\nI'm waiting for a german translation this week and everything seems to\nbe ok. No bugs have been discovered.\nWhen should be the package prepared to be inserted into 6.5.2 ?\n\n-- \nConstantin Teodorescu\nFLEX Consulting Braila, ROMANIA\n",
"msg_date": "Mon, 13 Sep 1999 06:16:40 +0000",
"msg_from": "Constantin Teodorescu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgaccess update for 6.5.2?"
},
{
"msg_contents": "On Mon, 13 Sep 1999, Constantin Teodorescu wrote:\n\n> Bruce Momjian wrote:\n> > \n> > Do people want pgaccess updated to 0.98 for the 6.5.2 release?\n> \n> It would be just fine.\n> I'm waiting for a german translation this week and everything seems to\n> be ok. No bugs have been discovered.\n> When should be the package prepared to be inserted into 6.5.2 ?\n\nAm going to do a roll-out on Tuesday evenin...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 13 Sep 1999 11:05:25 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: pgaccess update for 6.5.2?"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> > \n> > Do people want pgaccess updated to 0.98 for the 6.5.2 release?\n> \n> It would be just fine.\n> I'm waiting for a german translation this week and everything seems to\n> be ok. No bugs have been discovered.\n> When should be the package prepared to be inserted into 6.5.2 ?\n\nThe real issue was not if it was stable, but whether a non-bugfix\nrelease of pgaccess was proper for a 6.5.2 release. I haven't heard any\ncomments on that yet. Not sure how I feel either.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 13 Sep 1999 10:36:28 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: pgaccess update for 6.5.2?"
},
{
"msg_contents": "> On Mon, 13 Sep 1999, Constantin Teodorescu wrote:\n> \n> > Bruce Momjian wrote:\n> > > \n> > > Do people want pgaccess updated to 0.98 for the 6.5.2 release?\n> > \n> > It would be just fine.\n> > I'm waiting for a german translation this week and everything seems to\n> > be ok. No bugs have been discovered.\n> > When should be the package prepared to be inserted into 6.5.2 ?\n> \n> Am going to do a roll-out on Tuesday evenin...\n\nI will take this as a \"Yes\", you want the new pgaccess. Adding it now.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 13 Sep 1999 10:38:02 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: pgaccess update for 6.5.2?"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> The real issue was not if it was stable, but whether a non-bugfix\n> release of pgaccess was proper for a 6.5.2 release. I haven't heard any\n> comments on that yet. Not sure how I feel either.\n\nIt's rock solid stable.\n\nI am using it for 2 weeks and just minor bugs have been reported (some\ncolor problems on Solaris that have been fixed, some messages missing\nfrom translations).\n\nIt will be ready for tomorow to be picked up and included in 6.5.2\ndistribution.\nJust I will need a final confirmation with 10 minutes before someone\ndownloads the .tar.gz to be sure that it will be the right one.\nI'll be (I hope) reachable by e-mail checking my mailbox every 1 minute.\n\nPlease, there is available a bug fix list for 6.5.2 ?\n\nTeo\n",
"msg_date": "Mon, 13 Sep 1999 14:51:16 +0000",
"msg_from": "Constantin Teodorescu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: pgaccess update for 6.5.2?"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> > \n> > The real issue was not if it was stable, but whether a non-bugfix\n> > release of pgaccess was proper for a 6.5.2 release. I haven't heard any\n> > comments on that yet. Not sure how I feel either.\n> \n> It's rock solid stable.\n> \n> I am using it for 2 weeks and just minor bugs have been reported (some\n> color problems on Solaris that have been fixed, some messages missing\n> from translations).\n> \n> It will be ready for tomorow to be picked up and included in 6.5.2\n> distribution.\n> Just I will need a final confirmation with 10 minutes before someone\n> downloads the .tar.gz to be sure that it will be the right one.\n> I'll be (I hope) reachable by e-mail checking my mailbox every 1 minute.\n> \n> Please, there is available a bug fix list for 6.5.2 ?\n\nCan I download it now?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 13 Sep 1999 11:05:40 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: pgaccess update for 6.5.2?"
},
{
"msg_contents": "Bruce,\n\nin HISTORY for 6.5.2 I see:\n\nThis is to re-use space on index pages freed by vacuum(Tom)\n\nisn't this re-use indices after vacuum hack by Vadim ?\n(prevent indices to grow infinitely)\n\n\tRegards,\n\n\t\tOleg\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Mon, 13 Sep 1999 20:38:30 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "HISTORY for 6.5.2"
},
{
"msg_contents": "> Bruce,\n> \n> in HISTORY for 6.5.2 I see:\n> \n> This is to re-use space on index pages freed by vacuum(Tom)\n> \n> isn't this re-use indices after vacuum hack by Vadim ?\n> (prevent indices to grow infinitely)\n> \n\nThanks. I have updated the release.sgml and the HISTORY file.\n\nI should have posted the list of changes. Here it is. Gee, it is quite\na lot. Thomas, any way to get the dates from the release.sgml into the\nHISTORY file and the html output?\n\n---------------------------------------------------------------------------\n\n\n\nDetailed Change List\n\n subselect+CASE fixes(Tom)\n Add SHLIB_LINK setting for solaris_i386 and solaris_sparc ports(Daren\n Sefcik)\n Fixes for CASE in WHERE join clauses(Tom)\n Fix BTScan abort(Tom)\n Repair the check for redundant UNIQUE and PRIMARY KEY indices(Tom)\n Improve it so that it checks for multi-column constraints(Tom)\n Fix for Win32 making problem with MB enabled(Hiroki Kataoka)\n Allow BSD yacc and bison to compile pl code(Bruce)\n Fix SET NAMES\n int8 fixes(Thomas)\n Fix vacuum's memory consumption(Tom)\n Reduce the total memory consumption of vacuum(Tom)\n Fix for timestamp(datetime)\n Rule deparsing bugfixes(Tom)\n Fix quoting problems in mkMakefile.tcldefs.sh.in and\n mkMakefile.tkdefs.sh.in(Tom)\n Update frontend libpq to remove limits on query lengths,\n error/notice message lengths, and number of fields per tuple(Tom)\n This is to re-use space on index pages freed by vacuum(Vadim)\n document -x for pg_dump(Bruce)\n Fix for unary operators in rule deparser(Tom)\n Comment out FileUnlink of excess segments during mdtruncate()(Tom)\n Irix linking fix from Yu Cao <[email protected]>\n Repair logic error in LIKE: should not return LIKE_ABORT\n when reach end of pattern before end of text(Tom)\n Repair incorrect cleanup of heap memory allocation during transaction\n abort(Tom)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard 
drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 13 Sep 1999 13:11:11 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: HISTORY for 6.5.2"
},
{
"msg_contents": "Oleg Bartunov <[email protected]> writes:\n> in HISTORY for 6.5.2 I see:\n> This is to re-use space on index pages freed by vacuum(Tom)\n> isn't this re-use indices after vacuum hack by Vadim ?\n\nNot mine, for sure.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 Sep 1999 13:36:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] HISTORY for 6.5.2 "
},
{
"msg_contents": "Bruce Momjian wrote:\n\n---------------------------------------------------------------------------\n> \n> Detailed Change List\n> \n[snip]\n\nHey, Bruce -- did the patches for Alpha get in?? If not, I'll need to\nmung Ryan K's/Uncle G's patchset against 6.5.1 to work with 6.5.2.\n\nYour friendly RPM packager...\n\nLamar Owen\nWGCR Internet Radio\n",
"msg_date": "Mon, 13 Sep 1999 13:42:16 -0400",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> \n> ---------------------------------------------------------------------------\n> > \n> > Detailed Change List\n> > \n> [snip]\n> \n> Hey, Bruce -- did the patches for Alpha get in?? If not, I'll need to\n> mung Ryan K's/Uncle G's patchset against 6.5.1 to work with 6.5.2.\n> \n> Your friendly RPM packager...\n\nThe bad news is that those patches are too large/significant for a minor\nrelease. They could affect other platforms adversely, so we can't apply\nthem until 6.6.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 13 Sep 1999 13:47:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": "Bruce Momjian wrote:\n \n> The bad news is that those patches are too large/significant for a minor\n> release. They could affect other platforms adversely, so we can't apply\n> them until 6.6.\n\nOk, mung time, then. That means RedHat will be releasing 6.5.1 with\nRedHat 6.1, unless those patches can be munged relatively easily. Alpha\nis a big platform for RedHat.\n\nLamar Owen\nWGCR Internet Radio\n",
"msg_date": "Mon, 13 Sep 1999 13:50:33 -0400",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> \n> > The bad news is that those patches are too large/significant for a minor\n> > release. They could affect other platforms adversely, so we can't apply\n> > them until 6.6.\n> \n> Ok, mung time, then. That means RedHat will be releasing 6.5.1 with\n> RedHat 6.1, unless those patches can be munged relatively easily. Alpha\n> is a big platform for RedHat.\n> \n\nIt would be very valid to apply Uncle George's patches to a 6.5.2\nalpha-only release. If we had an alpha-only release, we would apply\nthem too.\n\nOur problem is that minor releases don't get the same cross-platform\ntesting as major releases.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 13 Sep 1999 13:55:09 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": "On Mon, 13 Sep 1999, Bruce Momjian wrote:\n\n> > Ok, mung time, then. That means RedHat will be releasing 6.5.1 with\n> > RedHat 6.1, unless those patches can be munged relatively easily. Alpha\n> > is a big platform for RedHat.\n> > \n> \n> It would be very valid to apply Uncle George's patches to a 6.5.2\n> alpha-only release. If we had an alpha-only release, we would apply\n> them too.\n\nOh, I understand. Somehow or another I thought 6.5.2 had those patches\nintegrated. My error.\n\n> Our problem is that minor releases don't get the same cross-platform\n> testing as major releases.\n\nI understand -- I am patch-munging as I write this, after booting into RedHat 6\non my notebook. If Ryan K is still around, he may find it easier going, as he\ndid the backpatch to 6.5.1. There are a couple of failed hunks, and some\nreverses. We'll see how it goes.\n\nLamar Owen\nWGCR Internet Radio\n",
"msg_date": "Mon, 13 Sep 1999 14:00:07 -0400",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Repair the check for redundant UNIQUE and PRIMARY KEY indices(Tom)\n> Improve it so that it checks for multi-column constraints(Tom)\n\nThose were Thomas, I believe.\n\n> Fix vacuum's memory consumption(Tom)\n> Reduce the total memory consumption of vacuum(Tom)\n\nAnd I can't take credit for that either --- Hiroshi and/or Tatsuo get\nthe credit, IIRC. (BTW, did we patch those into 6.5.*, or only 6.6?)\n\n> Update frontend libpq to remove limits on query lengths,\n> error/notice message lengths, and number of fields per tuple(Tom)\n\nThis is *not* in 6.5.*, only 6.6.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 Sep 1999 16:56:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2 "
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Hey, Bruce -- did the patches for Alpha get in?? If not, I'll need to\n>> mung Ryan K's/Uncle G's patchset against 6.5.1 to work with 6.5.2.\n\n> The bad news is that those patches are too large/significant for a minor\n> release. They could affect other platforms adversely, so we can't apply\n> them until 6.6.\n\nAlso, most of them were patches for the portability problems in our\nfunction call interface. I still have hopes of redesigning the fmgr\ninterface completely before 6.6 --- see my message in the hackers\narchives for 6/14/99. If we get that done then there'll be no need for\nthe associated patches. (If we don't, we'd better apply the patches,\nsince Alpha is not the only platform where there are problems...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 Sep 1999 17:01:40 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2 "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Repair the check for redundant UNIQUE and PRIMARY KEY indices(Tom)\n> > Improve it so that it checks for multi-column constraints(Tom)\n> \n> Those were Thomas, I believe.\n\nDone.\n\n> \n> > Fix vacuum's memory consumption(Tom)\n> > Reduce the total memory consumption of vacuum(Tom)\n> \n> And I can't take credit for that either --- Hiroshi and/or Tatsuo get\n> the credit, IIRC. (BTW, did we patch those into 6.5.*, or only 6.6?)\n\nAh, let's give them both credit.\n\n> \n> > Update frontend libpq to remove limits on query lengths,\n> > error/notice message lengths, and number of fields per tuple(Tom)\n> \n> This is *not* in 6.5.*, only 6.6.\n\nSorry. Shows up in the that tag log, so something must be in there. I\ndon't make this stuff up. :-)\n\nChecking...\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 13 Sep 1999 18:38:18 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": "> And I can't take credit for that either --- Hiroshi and/or Tatsuo get\n> the credit, IIRC. (BTW, did we patch those into 6.5.*, or only 6.6?)\n> \n> > Update frontend libpq to remove limits on query lengths,\n> > error/notice message lengths, and number of fields per tuple(Tom)\n> \n> This is *not* in 6.5.*, only 6.6.\n\nI have:\n\t\n\tRCS file: /usr/local/cvsroot/pgsql/src/interfaces/libpq/pqexpbuffer.c,v\n\tWorking file: src/interfaces/libpq/pqexpbuffer.c\n\thead: 1.1\n\tbranch:\n\tlocks: strict\n\taccess list:\n\tsymbolic names:\n\tkeyword substitution: kv\n\ttotal revisions: 1; selected revisions: 1\n\tdescription:\n\t----------------------------\n\trevision 1.1\n\tdate: 1999/08/31 01:37:37; author: tgl; state: Exp;\n\tUpdate frontend libpq to remove limits on query lengths,\n\terror/notice message lengths, and number of fields per tuple. Add\n\tpqexpbuffer.c/.h, a frontend version of backend's stringinfo module.\n\tThis is first step in applying Mike Ansley's long-query patches,\n\teven though he didn't do any of these particular changes...\n\nSeems addition of text files shows up in all branches, which I guess\nmakes sense. No cause for alarm.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 13 Sep 1999 21:03:04 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> This is *not* in 6.5.*, only 6.6.\n\n> I have:\n> \tRCS file: /usr/local/cvsroot/pgsql/src/interfaces/libpq/pqexpbuffer.c,v\n> \tWorking file: src/interfaces/libpq/pqexpbuffer.c\n> \thead: 1.1\n\n> Seems addition of text files shows up in all branches, which I guess\n> makes sense. No cause for alarm.\n\nI've noticed that cvs log seems a little bogus in its handling of\nbranches, even though cvs status and cvs update know perfectly well\nwhat belongs to the local branch and what doesn't. If you say\n\"cvs log\" you get the \"current\" (tip revision's) log entries, even\nif you are in a directory that's been checked out with a sticky\nbranch tag. If you say \"cvs log -rBRANCH\" you get the branch's\nlog entries, but *only* those applied since the branch was split;\nthere doesn't seem to be any way to get the full history as seen\nin this branch. And, as above, it's quite confusing about files\nthat are not in the local branch at all.\n\nDunno if the cvs folk consider these behaviors bugs or features,\nbut they're definitely something to be wary of when working in\na branch...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 14 Sep 1999 09:53:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2 "
},
{
"msg_contents": "On Mon, 13 Sep 1999, Lamar Owen wrote:\n\n> > Our problem is that minor releases don't get the same cross-platform\n> > testing as major releases.\n> \n> I understand -- I am patch-munging as I write this, after booting into RedHat 6\n> on my notebook. If Ryan K is still around, he may find it easier going, as he\n> did the backpatch to 6.5.1. There are a couple of failed hunks, and some\n> reverses. We'll see how it goes.\n\n\tYea, I am still around, though a bit busy with school at the\nmoment. I should be able to get 6.5.2beta downloaded and the alpha patches\nupdated for it by Monday if you want me to try. Or, if you get an updated\nalpha patch before then, email it to me, and I will try it out. TTYL.\n\n---------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n---------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | http://www.rkirkpat.net/ |\n---------------------------------------------------------------------------\n\n",
"msg_date": "Tue, 14 Sep 1999 16:44:30 -0500 (CDT)",
"msg_from": "Ryan Kirkpatrick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": "On Tue, 14 Sep 1999, Ryan Kirkpatrick wrote:\n> \tYea, I am still around, though a bit busy with school at the\n> moment. I should be able to get 6.5.2beta downloaded and the alpha patches\n> updated for it by Monday if you want me to try. Or, if you get an updated\n> alpha patch before then, email it to me, and I will try it out. TTYL.\n\nYou know that code far better than I; if you have time, applying those patches\nto the final 6.5.2 would be a nice thing. I'm applying my time to getting RPM\nupgrading working -- many thanks to Oliver Elphick for the Debian upgrade\nscripts, some of which are very useful even in an RPM context.\n\nThose 6.5.1 patches made alot of people very happy -- the guys at RedHat in\nparticular. Bravo to Uncle George for them originally, and bravo to you for\nthe backpatch and packaging to 6.5.1! Those patches, incidentally, will ship\nwith RedHat 6.1, if nothing happens between now and release time.\n\nTIA!\n\nLamar Owen\nWGCR Internet Radio\n",
"msg_date": "Tue, 14 Sep 1999 18:11:58 -0400",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": "On Tue, 14 Sep 1999, Lamar Owen wrote:\n\n> On Tue, 14 Sep 1999, Ryan Kirkpatrick wrote:\n> > \tYea, I am still around, though a bit busy with school at the\n> > moment. I should be able to get 6.5.2beta downloaded and the alpha patches\n> > updated for it by Monday if you want me to try. Or, if you get an updated\n> > alpha patch before then, email it to me, and I will try it out. TTYL.\n> \n> You know that code far better than I; if you have time, applying those patches\n> to the final 6.5.2 would be a nice thing. I'm applying my time to getting RPM\n> upgrading working -- many thanks to Oliver Elphick for the Debian upgrade\n> scripts, some of which are very useful even in an RPM context.\n\n\tOk, I will do so this weekend, providing there are not too many\nthings broken. I will let you know on Monday what the status is. :)\nThough, is the 6.5.2beta tar ball on the ftp site the one I want to work\nwith, or do I want to go from something from CVS? And if so, what is the\nrelevant tag?\n\n> Those 6.5.1 patches made alot of people very happy -- the guys at RedHat in\n> particular. Bravo to Uncle George for them originally, and bravo to you for\n> the backpatch and packaging to 6.5.1! Those patches, incidentally, will ship\n> with RedHat 6.1, if nothing happens between now and release time.\n\n\tGood to hear, and you are welcome! Hopefully by 6.6 the patches\nwill not be needed and Linux/Alpha will finally be a fully supported pgsql\nplatform!\n\n\tPS. Now that I made people at RedHat happy, can I get some of\nthier stock? :) **Just kidding**\n\n---------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n---------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | http://www.rkirkpat.net/ |\n---------------------------------------------------------------------------\n\n",
"msg_date": "Wed, 15 Sep 1999 18:49:16 -0500 (CDT)",
"msg_from": "Ryan Kirkpatrick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": "Ryan Kirkpatrick wrote:\n> Ok, I will do so this weekend, providing there are not too many\n> things broken. I will let you know on Monday what the status is. :)\n\nThe current tarball is 6.5.2RC (for Release Candidate). AFAIK, this\nwill become the release of 6.5.2, unless there are problems.\n\nThanks much!\n\n> Good to hear, and you are welcome! Hopefully by 6.6 the patches\n> will not be needed and Linux/Alpha will finally be a fully supported pgsql\n> platform!\n\nYes! Now if I just HAD one.... ;-)\n\n> PS. Now that I made people at RedHat happy, can I get some of\n> thier stock? :) **Just kidding**\n\nROTFL.... I _wish_...\n\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Thu, 16 Sep 1999 15:37:37 -0400",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": "Lamar Owen wrote:\n> \n> Ryan Kirkpatrick wrote:\n> > Ok, I will do so this weekend, providing there are not too many\n> > things broken. I will let you know on Monday what the status is. :)\n> \n> The current tarball is 6.5.2RC (for Release Candidate). AFAIK, this\n> will become the release of 6.5.2, unless there are problems.\n> \n> Thanks much!\n> \n> > Good to hear, and you are welcome! Hopefully by 6.6 the patches\n> > will not be needed and Linux/Alpha will finally be a fully supported pgsql\n> > platform!\n> \n> Yes! Now if I just HAD one.... ;-)\n\nDont know if it's been raised before, but the postgres utilities are installed\ninto /usr/bin from the rpm. Problem with this is the naming of some of the \nutilities eg.createuser, destroyuser. These could be confused with the \n'standard' user utilities such as useradd, userdel etc. How about pre-pending \na 'pg' to all postgres utilities so that these become pgcreateuser, \npgdestroyuser etc.?\n\n--------\nRegards\nTheo\n",
"msg_date": "Fri, 17 Sep 1999 08:53:52 +0200",
"msg_from": "Theo Kramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": "On Thu, 16 Sep 1999, Lamar Owen wrote:\n\n> Ryan Kirkpatrick wrote:\n> > Ok, I will do so this weekend, providing there are not too many\n> > things broken. I will let you know on Monday what the status is. :)\n> \n> The current tarball is 6.5.2RC (for Release Candidate). AFAIK, this\n> will become the release of 6.5.2, unless there are problems.\n\n\tOk, I grabbed 6.5.2, and after only minor trouble, have an\nalpha-patched version. The biggest changes to the patch was that a few\nof the \"safe\" changes made by the 6.5.1 alpha patches have made thier way\ninto the source tree (i.e. CPU defintions in the configure and makefiles).\nOnly one instance where changes in actual source code broke and patch, and\nthat instance was trival.\n\tI am running regression tests on the 6.5.2 alpha patched binaries\nnow. Once they pass, I will post the patch to the list.\n\n> > Good to hear, and you are welcome! Hopefully by 6.6 the patches\n> > will not be needed and Linux/Alpha will finally be a fully supported pgsql\n> > platform!\n> \n> Yes! Now if I just HAD one.... ;-)\n\n\tThey are nice machines... And there is a nice range of price/perf\non them, everything from low end inexpensive machine to top of the end\nbank-busting machines. :) If you are really interested in playing around\nwith an Alpha, I would recommend something like an AS200 at the low end,\nor a PC164LX at the mid end. Don't waste your time on a UDB though, they\nare more trouble then they are worth (overheat and die after a few years\n:( ).\n\n---------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n---------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | http://www.rkirkpat.net/ |\n---------------------------------------------------------------------------\n\n",
"msg_date": "Sat, 18 Sep 1999 09:58:48 -0500 (CDT)",
"msg_from": "Ryan Kirkpatrick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": "On Fri, 17 Sep 1999, Theo Kramer wrote:\n> Dont know if it's been raised before, but the postgres utilities are installed\n> into /usr/bin from the rpm. Problem with this is the naming of some of the \n> utilities eg.createuser, destroyuser. These could be confused with the \n> 'standard' user utilities such as useradd, userdel etc. How about pre-pending \n> a 'pg' to all postgres utilities so that these become pgcreateuser, \n> pgdestroyuser etc.?\n\nThis is an interesting idea.\n\nWhat is also interesting is that if you have a traditional postgresql\ninstallation (/usr/local/pgsql), you can get even wierder results if /usr/bin\ncontains one createuser and /usr/local/pgsql/bin contains another. Depending\nupon your PATH, you could get unwanted results in a hurry.\n\nSo, it IS an interesting thought -- while it would initially create a good deal\nof confusion, what is the consensus of the hackers on this issue?? Prepending\n\"pg_\" to all postgresql commands seems to me to be a good idea (after all, we\nalready hav pg_dump, pg_dumpall, pg_upgrade, etc.).\n\nThoughts??\n\nLamar Owen\nWGCR Internet Radio\n",
"msg_date": "Sat, 18 Sep 1999 16:27:08 -0400",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": "Lamar Owen <[email protected]> writes:\n> So, it IS an interesting thought -- while it would initially create a\n> good deal of confusion, what is the consensus of the hackers on this\n> issue?? Prepending \"pg_\" to all postgresql commands seems to me to be\n> a good idea (after all, we already hav pg_dump, pg_dumpall,\n> pg_upgrade, etc.).\n\nI don't see a need to change the names of psql or ecpg, which just\nhappen to be the things most commonly invoked by users. I'd be in favor\nof prepending pg_ to all the \"admin-type\" commands like createuser.\nEspecially the createXXX/destroyXXX/initXXX ones, which seem the most\nlikely to cause naming conflicts.\n\nWhile we are thinking about this, I wonder if it wouldn't be a good idea\nto separate out the executables that aren't really intended to be\nexecuted willy-nilly, and put them in a different directory.\npostmaster, postgres, and initdb have no business being in users' PATH\nat all, ever. You could make a case that some of the other executables\nare admin tools not intended for ordinary mortals, as well, and should\nnot live in a directory that might be put in users' PATH.\n\nOf course, the other way an admin can handle that issue is not to put\n/usr/local/pgsql/bin into PATH, but to make symlinks from a more popular\ndirectory (say, /usr/local/bin) for the programs that users are expected\nto execute. I suppose such an admin could stick pg_ on the front of the\nsymlinks anyway. But then the program names don't match the\ndocumentation we supply, which would be confusing.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 18 Sep 1999 17:00:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2 "
},
{
"msg_contents": "On Sat, 18 Sep 1999, Tom Lane wrote:\n> While we are thinking about this, I wonder if it wouldn't be a good idea\n> to separate out the executables that aren't really intended to be\n> executed willy-nilly, and put them in a different directory.\n> postmaster, postgres, and initdb have no business being in users' PATH\n> at all, ever. \n\nSuch as /usr/sbin on a Linux FSSTND-compliant system (such as RedHat). In\nfact, I may just do that with the RPM distribution (after consulting with RedHat\non the issue). Thomas?? The same goes for the admin commands' man pages --\nthey should be in section 8 on the typical Linux box.\n\n> to execute. I suppose such an admin could stick pg_ on the front of the\n> symlinks anyway. But then the program names don't match the\n> documentation we supply, which would be confusing.\n\nWell, as things stand, the documentation and the rpm distribution don't match\nin other areas -- I personally would have absolutely no problem whatsoever in\ndoing such a renaming -- hey, I can do such inside the RPM, for that matter,\nbut I don't want to. Of course, I would follow whatever the core group decides\n-- that is the standard. I'm just tossing ideas.\n\nLamar Owen\nWGCR Internet Radio\n",
"msg_date": "Sat, 18 Sep 1999 17:15:07 -0400",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": "> Lamar Owen <[email protected]> writes:\n> > So, it IS an interesting thought -- while it would initially create a\n> > good deal of confusion, what is the consensus of the hackers on this\n> > issue?? Prepending \"pg_\" to all postgresql commands seems to me to be\n> > a good idea (after all, we already hav pg_dump, pg_dumpall,\n> > pg_upgrade, etc.).\n> \n> I don't see a need to change the names of psql or ecpg, which just\n> happen to be the things most commonly invoked by users. I'd be in favor\n> of prepending pg_ to all the \"admin-type\" commands like createuser.\n> Especially the createXXX/destroyXXX/initXXX ones, which seem the most\n> likely to cause naming conflicts.\n\nI have been thinking, the destroy should be drop, in keeping with SQL. \ndestroy was a QUEL'ism.\n\n\n> While we are thinking about this, I wonder if it wouldn't be a good idea\n> to separate out the executables that aren't really intended to be\n> executed willy-nilly, and put them in a different directory.\n> postmaster, postgres, and initdb have no business being in users' PATH\n> at all, ever. You could make a case that some of the other executables\n> are admin tools not intended for ordinary mortals, as well, and should\n> not live in a directory that might be put in users' PATH.\n\nSeems like it could make it harder for newbies.\n\n> Of course, the other way an admin can handle that issue is not to put\n> /usr/local/pgsql/bin into PATH, but to make symlinks from a more popular\n> directory (say, /usr/local/bin) for the programs that users are expected\n> to execute. I suppose such an admin could stick pg_ on the front of the\n> symlinks anyway. But then the program names don't match the\n> documentation we supply, which would be confusing.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 18 Sep 1999 17:38:16 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": "Lamar Owen <[email protected]> writes:\n> Such as /usr/sbin on a Linux FSSTND-compliant system (such as RedHat). In\n> fact, I may just do that with the RPM distribution (after consulting with RedHat\n> on the issue).\n\nActually, I would even advocate what GNU configure calls the \"libexec\"\ndirectory---a directory like /usr/lib/emacs/i386-linux, which has\nmovemail and a couple of other things that aren't meant to be run by\nusers, but invoked by other programs.\n\nMike.\n",
"msg_date": "18 Sep 1999 18:38:55 -0400",
"msg_from": "Michael Alan Dorman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": "> > While we are thinking about this, I wonder if it wouldn't be a good idea\n> > to separate out the executables that aren't really intended to be\n> > executed willy-nilly, and put them in a different directory.\n> > postmaster, postgres, and initdb have no business being in users' PATH\n> > at all, ever.\n> Such as /usr/sbin on a Linux FSSTND-compliant system (such as RedHat). In\n> fact, I may just do that with the RPM distribution (after consulting with RedHat\n> on the issue). Thomas?? The same goes for the admin commands' man pages --\n> they should be in section 8 on the typical Linux box.\n\nMan page sections can be reassigned for the next release. afaik\n/usr/sbin tends to contain programs executed by root, which is not\nusually the case for Postgres. Is there a precedent for other programs\nof this type in that directory?\n\n> > I suppose such an admin could stick pg_ on the front of the\n> > symlinks anyway. But then the program names don't match the\n> > documentation we supply, which would be confusing.\n\nUnderscores in program names suck. To paraphrase Ali, \"no opinion,\njust fact\" ;) \n\nIf we are going to rename programs wholesale, let's do it for release\n7.0, and if we must have \"pg\" in front of everything, then do it as,\ne.g. \"pgcreateuser\". We could rename \"pg_dump\" as \"pgdump\" at the same\ntime.\n\nbtw, is it only me or do other people refer to this as \"pig dump\"?\n\n> Well, as things stand, the documentation and the rpm distribution don't match\n> in other areas -- I personally would have absolutely no problem whatsoever in\n> doing such a renaming -- hey, I can do such inside the RPM, for that matter,\n> but I don't want to. Of course, I would follow whatever the core group decides\n> -- that is the standard. I'm just tossing ideas.\n\nThe docs don't claim to match the rpm (or any other real system; as\nthe intro claims it is just used as an example). 
The docs *do* claim\nto know about what program you should run, so those names should never\nchange unless done in the official distro.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Sun, 19 Sep 1999 05:51:22 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": "> > on the issue). Thomas?? The same goes for the admin commands' man pages --\n> > they should be in section 8 on the typical Linux box.\n> \n> Man page sections can be reassigned for the next release. afaik\n> /usr/sbin tends to contain programs executed by root, which is not\n> usually the case for Postgres. Is there a precedent for other programs\n> of this type in that directory?\n\nIIRC majordomo puts the whole slew of commands in the same directory, usually\n/usr/local/bin when you install it. Most of these are not really user commands\n\n> btw, is it only me or do other people refer to this as \"pig dump\"?\n\nI try and steer clear of pig dump in all its forms {;-)\n\n\t\t\t\t\t\t~Michael\n",
"msg_date": "Sun, 19 Sep 1999 07:14:57 +0100 (BST)",
"msg_from": "Michael Simms <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": "On Sun, 19 Sep 1999, Thomas Lockhart wrote:\n> > Such as /usr/sbin on a Linux FSSTND-compliant system (such as RedHat). In\n> > fact, I may just do that with the RPM distribution (after consulting with RedHat\n> > on the issue). Thomas?? The same goes for the admin commands' man pages --\n> > they should be in section 8 on the typical Linux box.\n> \n> Man page sections can be reassigned for the next release. afaik\n> /usr/sbin tends to contain programs executed by root, which is not\n> usually the case for Postgres. Is there a precedent for other programs\n> of this type in that directory?\n\nThe uucp programs uuxqt and uucico live in /usr/sbin (on RedHat 6). They are\nowned by and executed as user uucp. See other message for FHS quote re:\n/usr/sbin.\n\n> Underscores in program names suck. To paraphrase Ali, \"no opinion,\n> just fact\" ;) \n\nI thought VACUUM sucked.... ;-P In all seriousness, I totally agree -- either\nreplace the _ with -, or drop it altogether.\n\n> If we are going to rename programs wholesale, let's do it for release\n> 7.0, and if we must have \"pg\" in front of everything, then do it as,\n> e.g. \"pgcreateuser\". We could rename \"pg_dump\" as \"pgdump\" at the same\n> time.\n\nSounds good to me.\n\n> btw, is it only me or do other people refer to this as \"pig dump\"?\n\nWorse -- I see '/usr/lib/pgsql' and say \"user-lib-pigsqueal.\"\n\nSo, with have a var-lib-pigsqueal, user-lib-pigsqueal, and a\nuser-local-pigsqueal. Yuck.\n \n> The docs don't claim to match the rpm (or any other real system; as\n> the intro claims it is just used as an example). The docs *do* claim\n> to know about what program you should run, so those names should never\n> change unless done in the official distro.\n\nAgreed. Like I said, I'm just tossing some ideas -- if they make it in, Ok, if\nnot, Ok. 
As far as I am concerned, it really doesn't matter -- RedHat has\nnever had a namespace conflict with the PostgreSQL executables residing in\n/usr/bin. The only advantage I see is removing certain admin commands from the\nstandard PATH. Then, for user postgres, add to PATH the admin commands'\nresidence. Make it part of the .profile for user postgres, give postgres a\ndifferent home (under RedHat, ~postgres is currently /var/lib/pgsql), and things\nshould work fine.\n\nLamar Owen\nWGCR Internet Radio\n",
"msg_date": "Sun, 19 Sep 1999 15:33:21 -0400",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": "Lamar Owen wrote:\n\n\n> > btw, is it only me or do other people refer to this as \"pig dump\"?\n> \n> Worse -- I see '/usr/lib/pgsql' and say \"user-lib-pigsqueal.\"\n> \n\nThe first time my wife, saw the title of this mail-list she pronounced\npgsql `pig squeal', and was rather upset at the thought of\npgsql-hackers. \n\nBernie Frankpitt\n",
"msg_date": "Sun, 19 Sep 1999 16:13:28 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": "\nOn 19-Sep-99 [email protected] wrote:\n> Lamar Owen wrote:\n> \n> \n>> > btw, is it only me or do other people refer to this as \"pig dump\"?\n>> \n>> Worse -- I see '/usr/lib/pgsql' and say \"user-lib-pigsqueal.\"\n>> \n> \n> The first time my wife, saw the title of this mail-list she pronounced\n> pgsql `pig squeal', and was rather upset at the thought of\n ^^^^^^^^^^^\n It's good reason to change postgres's logo - I'can remade site in\npig's colors ;-))))\n\n\n---\nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n* There will come soft rains ...\n",
"msg_date": "Mon, 20 Sep 1999 09:53:29 +0400 (MSD)",
"msg_from": "Dmitry Samersoff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> Underscores in program names suck. To paraphrase Ali, \"no opinion,\n> just fact\" ;)\n> \n> If we are going to rename programs wholesale, let's do it for release\n> 7.0, and if we must have \"pg\" in front of everything, then do it as,\n> e.g. \"pgcreateuser\". We could rename \"pg_dump\" as \"pgdump\" at the same\n> time.\n\nI agree regarding the underscore. I do think that changing the names\nsooner would create less hassle in the long run. Especially now when\nmore and more folk are starting to use postgres.\n\n> btw, is it only me or do other people refer to this as \"pig dump\"?\n\ngrunt :-)\n \n--------\nRegards\nTheo\n",
"msg_date": "Mon, 20 Sep 1999 09:28:40 +0200",
"msg_from": "Theo Kramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": " IIRC majordomo puts the whole slew of commands in the same directory, usually\n /usr/local/bin when you install it. Most of these are not really user commands\n\nMajordomo isn't really the best standard for installation\ndirectories. Please do not follow it as a general guideline.\n\nCheers,\nBrook\n",
"msg_date": "Mon, 20 Sep 1999 10:19:49 -0600 (MDT)",
"msg_from": "Brook Milligan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": "On Sat, 18 Sep 1999, Bruce Momjian wrote:\n\n> I have been thinking, the destroy should be drop, in keeping with SQL. \n> destroy was a QUEL'ism.\n\n{create,destroy}{user,db} should be drop'd, personally...admins should use\nthe SQL commands directly...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 20 Sep 1999 18:36:32 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": "\nOn 20-Sep-99 The Hermit Hacker wrote:\n> On Sat, 18 Sep 1999, Bruce Momjian wrote:\n> \n>> I have been thinking, the destroy should be drop, in keeping with SQL. \n>> destroy was a QUEL'ism.\n> \n> {create,destroy}{user,db} should be drop'd, personally...admins should use\n> the SQL commands directly...\n\nI think it'd be better if they were kept. They're really convenient for\nthe newbie (I just introduced someone to PostgreSQL and all the way thru\nwere references to MySQL, including the create user, db, etc. scripts).\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n",
"msg_date": "Mon, 20 Sep 1999 18:18:15 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": "On Mon, 20 Sep 1999, Vince Vielhaber wrote:\n\n> \n> On 20-Sep-99 The Hermit Hacker wrote:\n> > On Sat, 18 Sep 1999, Bruce Momjian wrote:\n> > \n> >> I have been thinking, the destroy should be drop, in keeping with SQL. \n> >> destroy was a QUEL'ism.\n> > \n> > {create,destroy}{user,db} should be drop'd, personally...admins should use\n> > the SQL commands directly...\n> \n> I think it'd be better if they were kept. They're really convenient for\n> the newbie (I just introduced someone to PostgreSQL and all the way thru\n> were references to MySQL, including the create user, db, etc. scripts).\n\nMy personal dislike for them is that they are incomplete...CREATE USER and\nCREATE DATABASE have a helluva lot of options available to it...using\ncreateuser, you don't know/learn abotu them...\n\nForce the admin to learn what they are doing...if they want to create\nshort cut scripts, let *them* do it...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 20 Sep 1999 19:26:29 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": "> My personal dislike for them is that they are incomplete...CREATE USER and\n> CREATE DATABASE have a helluva lot of options available to it...using\n> createuser, you don't know/learn abotu them...\n> \n> Force the admin to learn what they are doing...if they want to create\n> short cut scripts, let *them* do it...\n\nBut newbies can't do shortcuts. I think we need to keep it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 20 Sep 1999 18:46:33 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": "\nI think I can safely speak for a newbie and I happen to dislike\ncreatedb etc as well. I started out with postgreSQL with the\nintention of writing an application-specific CORBA front-end\nto it, so I cared most about the C++ interface. The existence of\nthe createdb command confused me for a while, leaving me thinking\nI could do INSERT and SELECT etc from libpq++, but would have\nto resort to UNIX calls to do createdb. \n\n--Yu Cao\n\nOn Mon, 20 Sep 1999, The Hermit Hacker wrote:\n\n> On Mon, 20 Sep 1999, Vince Vielhaber wrote:\n> \n> > \n> > On 20-Sep-99 The Hermit Hacker wrote:\n> > > On Sat, 18 Sep 1999, Bruce Momjian wrote:\n> > > \n> > >> I have been thinking, the destroy should be drop, in keeping with SQL. \n> > >> destroy was a QUEL'ism.\n> > > \n> > > {create,destroy}{user,db} should be drop'd, personally...admins should use\n> > > the SQL commands directly...\n> > \n> > I think it'd be better if they were kept. They're really convenient for\n> > the newbie (I just introduced someone to PostgreSQL and all the way thru\n> > were references to MySQL, including the create user, db, etc. scripts).\n> \n> My personal dislike for them is that they are incomplete...CREATE USER and\n> CREATE DATABASE have a helluva lot of options available to it...using\n> createuser, you don't know/learn abotu them...\n> \n> Force the admin to learn what they are doing...if they want to create\n> short cut scripts, let *them* do it...\n\n",
"msg_date": "Mon, 20 Sep 1999 15:47:16 -0700 (PDT)",
"msg_from": "Yu Cao <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": "> I think I can safely speak for a newbie and I happen to dislike\n> createdb etc as well. I started out with postgreSQL with the\n> intention of writing an application-specific CORBA front-end\n> to it, so I cared most about the C++ interface. The existence of\n> the createdb command confused me for a while, leaving me thinking\n> I could do INSERT and SELECT etc from libpq++, but would have\n> to resort to UNIX calls to do createdb. \n> \n> --Yu Cao\n> \n\nI would have to say that if you 'started out with theintention of writing\na corba front-and' then I dont think you can really speak for newbies.\n\nWhen I started using postgresql I had vaguely heard of odbc and I had\na couple of example queries of SQL.\n\nIf I had had to go to template1 and create database whatever; and THEN\ngo use it, I would have been fairly confused.\n\nThe way I look at it, it is functionality that is THERE already. If you\nremove it, you remove from the overall functionality of postgres. It doesnt\nactually gain anything to remove it. Sure it looks a bit neater, but the end\nuser cares about being able to use it easilly, not if the scripts are\ntechnically pleasing.\n\nI think the problem described above comes from a lack in the documentation,\nor a failure to read the relavent documentation. Having more functionality\nis good. Removing it is counterproductive.\n\nThe arguement that was put forwards of 'if they want scripts they can write\nthem, let the admins learn and do it themselves' is a bad one IMHO. Is\nit really desirable for a dozen people to be forced to write what is\neffectively the same script? When the script is already there anyway?\nWe should be moving towards a lower learning curve to getting a basic\ndatabase up and running, not a higher one. Not all the users WANT to\nhave to write theirown scripts for everything. I know I certainly\ndont.\n\nJust my 2p worth\n\n\t\t\t\t\t\t~Michael\n",
"msg_date": "Tue, 21 Sep 1999 00:20:21 +0100 (BST)",
"msg_from": "Michael Simms <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": "On Mon, 20 Sep 1999, Bruce Momjian wrote:\n\n> > My personal dislike for them is that they are incomplete...CREATE USER and\n> > CREATE DATABASE have a helluva lot of options available to it...using\n> > createuser, you don't know/learn abotu them...\n> > \n> > Force the admin to learn what they are doing...if they want to create\n> > short cut scripts, let *them* do it...\n> \n> But newbies can't do shortcuts. I think we need to keep it.\n\nNewbies can't read man pages to type 'CREATE USER <userid>' or 'CREATE\nDATABASE <database>'?\n\nWe're not asking anyone to learn rocket science here...we leave that to\nThomas :)\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 20 Sep 1999 22:26:42 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": "On Tue, 21 Sep 1999, Michael Simms wrote:\n\n> I would have to say that if you 'started out with theintention of writing\n> a corba front-and' then I dont think you can really speak for newbies.\n> \n> When I started using postgresql I had vaguely heard of odbc and I had\n> a couple of example queries of SQL.\n> \n> If I had had to go to template1 and create database whatever; and THEN\n> go use it, I would have been fairly confused.\n\nWhy? How did you learn about the createdb or createuser commands in the\nfirst place? \n\nSection 20 of the INSTALL file could read:\n\n 20. Briefly test that the backend will start and \n run by running it from the command line.\n a. Start the postmaster daemon running in the \n background by typing \n $ cd\n $ nohup postmaster -i > pgserver.log 2>&1 &\n\t b. Connect to the database by typing\n\t\t$ psql template1\n b. Create a database by typing \n $ create database testdb;\n c. Connect to the new database by typing:\n template1=> \\connect testdb\n d. And run a sample query: \n testdb=> SELECT datetime 'now';\n e. Exit psql: \n testdb=> \\q\n f. Remove the test database (unless you will \n want to use it later for other tests): \n testdb=> drop database testdb;\n\nnow the end user knows how to create and drop a database properly...\n\nhell, add in a few extra steps for creating a new user and deleting\nhim...once ppl know the commands exist, they will use them and learn how\nto better use them...\n\nFor 'newbies', they learn about createdb/createuser from the INSTALL\nfile...it doesn't take anything to teach them to do 'CREATE DATABASE' vs\n'createdb', and it gives them the *proper* way to do it...\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 20 Sep 1999 22:44:04 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": "\nOn 21-Sep-99 The Hermit Hacker wrote:\n> On Mon, 20 Sep 1999, Bruce Momjian wrote:\n> \n>> > My personal dislike for them is that they are incomplete...CREATE USER and\n>> > CREATE DATABASE have a helluva lot of options available to it...using\n>> > createuser, you don't know/learn abotu them...\n>> > \n>> > Force the admin to learn what they are doing...if they want to create\n>> > short cut scripts, let *them* do it...\n>> \n>> But newbies can't do shortcuts. I think we need to keep it.\n> \n> Newbies can't read man pages to type 'CREATE USER <userid>' or 'CREATE\n> DATABASE <database>'?\n\nYou're missing one minor point. It's highly probable you never experienced\nit. The first few days (maybe even couple of weeks) PostgreSQL can be \nintimidating. Most packages install the same way: \n\n./configure\nmake \nmake install\n\nand you can do it from whatever directory you want. Right from the \nbeginning, the postgres installation has you working from a directory\nthat you may not normally keep your sources in (I keep mine in /usr/local/src\nas do many others), working as a user you just created so you're in an\nunfamiliar environment. Then the redirection of the make process (or the\ngmake process) monitoring it with tail.... For the first time installer\nit can be intimidating. Hell, Innd 1.4 was easier to install the first \ntime. After doing it more than once (and using Tom's tip with makefile.custom)\nall of that can be gotten around. Then the regression tests. Lets face\nit, it's a big package - well worth the effort to learn it, but it's still\nbig. So after putting the poor newbie thru all of this trauma you want to \nfurther traumatize him/her with man pages? 
:)\n\n> We're not asking anyone to learn rocket science here...we leave that to\n> Thomas :)\n\nGood candidate too :)\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n",
"msg_date": "Mon, 20 Sep 1999 21:52:29 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": "\nOn 21-Sep-99 The Hermit Hacker wrote:\n> On Tue, 21 Sep 1999, Michael Simms wrote:\n> \n>> I would have to say that if you 'started out with theintention of writing\n>> a corba front-and' then I dont think you can really speak for newbies.\n>> \n>> When I started using postgresql I had vaguely heard of odbc and I had\n>> a couple of example queries of SQL.\n>> \n>> If I had had to go to template1 and create database whatever; and THEN\n>> go use it, I would have been fairly confused.\n> \n> Why? How did you learn about the createdb or createuser commands in the\n> first place? \n> \n> Section 20 of the INSTALL file could read:\n> \n> 20. Briefly test that the backend will start and \n> run by running it from the command line.\n> a. Start the postmaster daemon running in the \n> background by typing \n> $ cd\n> $ nohup postmaster -i > pgserver.log 2>&1 &\n> b. Connect to the database by typing\n> $ psql template1\n> b. Create a database by typing \n> $ create database testdb;\n> c. Connect to the new database by typing:\n> template1=> \\connect testdb\n> d. And run a sample query: \n> testdb=> SELECT datetime 'now';\n> e. Exit psql: \n> testdb=> \\q\n> f. Remove the test database (unless you will \n> want to use it later for other tests): \n> testdb=> drop database testdb;\n\ne and f mixed up?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n",
"msg_date": "Mon, 20 Sep 1999 21:55:16 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": "> > But newbies can't do shortcuts. I think we need to keep it.\n> \n> Newbies can't read man pages to type 'CREATE USER <userid>' or 'CREATE\n> DATABASE <database>'?\n> \n> We're not asking anyone to learn rocket science here...we leave that to\n> Thomas :)\n\nBut we have to get the newbie started before they are going to dive in\nand learn manuals.\n\nI don't read the manuals until I decide I want to use some new piece of\nsoftware. I am reading the LyX manuals now only after using it for a\nfew weeks and deciding I want to move from troff to lyx.\n\nBecause it is part of getting started, it has to be easy.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 20 Sep 1999 21:55:17 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": "On Mon, 20 Sep 1999, Vince Vielhaber wrote:\n\n> \n> On 21-Sep-99 The Hermit Hacker wrote:\n> > On Tue, 21 Sep 1999, Michael Simms wrote:\n> > \n> >> I would have to say that if you 'started out with theintention of writing\n> >> a corba front-and' then I dont think you can really speak for newbies.\n> >> \n> >> When I started using postgresql I had vaguely heard of odbc and I had\n> >> a couple of example queries of SQL.\n> >> \n> >> If I had had to go to template1 and create database whatever; and THEN\n> >> go use it, I would have been fairly confused.\n> > \n> > Why? How did you learn about the createdb or createuser commands in the\n> > first place? \n> > \n> > Section 20 of the INSTALL file could read:\n> > \n> > 20. Briefly test that the backend will start and \n> > run by running it from the command line.\n> > a. Start the postmaster daemon running in the \n> > background by typing \n> > $ cd\n> > $ nohup postmaster -i > pgserver.log 2>&1 &\n> > b. Connect to the database by typing\n> > $ psql template1\n> > b. Create a database by typing \n> > $ create database testdb;\n> > c. Connect to the new database by typing:\n> > template1=> \\connect testdb\n> > d. And run a sample query: \n> > testdb=> SELECT datetime 'now';\n> > e. Exit psql: \n> > testdb=> \\q\n> > f. Remove the test database (unless you will \n> > want to use it later for other tests): \n> > testdb=> drop database testdb;\n> \n> e and f mixed up?\n\n*glare* it was a sample...:P\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 20 Sep 1999 23:10:58 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": "On Mon, 20 Sep 1999, Bruce Momjian wrote:\n\n> > > But newbies can't do shortcuts. I think we need to keep it.\n> > \n> > Newbies can't read man pages to type 'CREATE USER <userid>' or 'CREATE\n> > DATABASE <database>'?\n> > \n> > We're not asking anyone to learn rocket science here...we leave that to\n> > Thomas :)\n> \n> But we have to get the newbie started before they are going to dive in\n> and learn manuals.\n\nSection 20 of the INSTALL file *does that*...but get them started the\nright way, using the proper commands is all I'm saying...\n\nHow else is someone going to know about {create,destroy}{user,db} in the\nfirst place, but by reading through the INSTALL file...so change that so\nthat they learn to connect to the database and use proper SQL...\n\nThat is all my point is...we are already telling them how to get started,\nlet's change that to \"how to get started the proper way\"...\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 20 Sep 1999 23:13:07 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": "> You're missing one minor point. It's highly probable you never experienced\n> it. The first few days (maybe even couple of weeks) PostgreSQL can be \n> intimidating. Most packages install the same way: \n> \n> ./configure\n> make \n> make install\n> \n> and you can do it from whatever directory you want. Right from the \n> beginning, the postgres installation has you working from a directory\n> that you may not normally keep your sources in (I keep mine in /usr/local/src\n> as do many others), working as a user you just created so you're in an\n> unfamiliar environment. Then the redirection of the make process (or the\n> gmake process) monitoring it with tail.... For the first time installer\n> it can be intimidating. Hell, Innd 1.4 was easier to install the first \n> time. After doing it more than once (and using Tom's tip with makefile.custom)\n> all of that can be gotten around. Then the regression tests. Lets face\n> it, it's a big package - well worth the effort to learn it, but it's still\n> big. So after putting the poor newbie thru all of this trauma you want to \n> further traumatize him/her with man pages? :)\n> \n\nI agree our INSTALL is very large. Is there some way we can simplify\nthe install process?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 20 Sep 1999 22:47:38 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": "> Force the admin to learn what they are doing...if they want to create\n> short cut scripts, let *them* do it...\n\nDamn. You're going to make me read the docs?\n\n - Thomas, who *always* uses the scripts and would miss them\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 21 Sep 1999 05:59:54 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": "On Tue, 21 Sep 1999, Thomas Lockhart wrote:\n\n> > Force the admin to learn what they are doing...if they want to create\n> > short cut scripts, let *them* do it...\n> \n> Damn. You're going to make me read the docs?\n\nIMHO...yes. It would sure eliminate the \"how do I change the password\nfor a user\" if the person wanting to change that password had had to read\nthe docs in the first place, and witih know about the 'with password' part\nof 'create user'...\n\n...and, ummm...don't you have the docs memorized yet? :)\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 21 Sep 1999 03:14:40 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": "The Hermit Hacker wrote:\n> \n> On Tue, 21 Sep 1999, Thomas Lockhart wrote:\n> \n> > > Force the admin to learn what they are doing...if they want to create\n> > > short cut scripts, let *them* do it...\n> >\n> > Damn. You're going to make me read the docs?\n> \n> IMHO...yes. It would sure eliminate the \"how do I change the password\n> for a user\" if the person wanting to change that password had had to read\n> the docs in the first place, and witih know about the 'with password' part\n> of 'create user'...\n\nTo achieve that, you can't just instruct a newbie in INSTALL.TXT to do\n\n$ psql template1\n$> create user alex\n\nbut instead\n\n$ psql template1\n$>\\h create user\n\nor even better\n\nRTFM \n\n;)\n\n------------\nHannu\n",
"msg_date": "Tue, 21 Sep 1999 09:50:59 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": "On Mon, 20 Sep 1999, Bruce Momjian wrote:\n\n> > You're missing one minor point. It's highly probable you never experienced\n> > it. The first few days (maybe even couple of weeks) PostgreSQL can be \n> > intimidating. Most packages install the same way: \n> > \n> > ./configure\n> > make \n> > make install\n> > \n> > and you can do it from whatever directory you want. Right from the \n> > beginning, the postgres installation has you working from a directory\n> > that you may not normally keep your sources in (I keep mine in /usr/local/src\n> > as do many others), working as a user you just created so you're in an\n> > unfamiliar environment. Then the redirection of the make process (or the\n> > gmake process) monitoring it with tail.... For the first time installer\n> > it can be intimidating. Hell, Innd 1.4 was easier to install the first \n> > time. After doing it more than once (and using Tom's tip with makefile.custom)\n> > all of that can be gotten around. Then the regression tests. Lets face\n> > it, it's a big package - well worth the effort to learn it, but it's still\n> > big. So after putting the poor newbie thru all of this trauma you want to \n> > further traumatize him/her with man pages? :)\n> > \n> \n> I agree our INSTALL is very large. Is there some way we can simplify\n> the install process?\n\nYeah, but let me test it first. I have two to install this week so I \nmake sure.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 21 Sep 1999 05:27:25 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": "> > Force the admin to learn what they are doing...if they want to create\n> > short cut scripts, let *them* do it...\n> \n> Damn. You're going to make me read the docs?\n> \n> - Thomas, who *always* uses the scripts and would miss them\n> \n\nI created a script for testing called newdb which:\n\n\t:\n\tdestroydb \"$@\"\n\tcreatedb \"$@\"\n\nYes, I could do this in psql from template1, but why bother.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 21 Sep 1999 10:01:40 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": "> On Tue, 21 Sep 1999, Thomas Lockhart wrote:\n> \n> > > Force the admin to learn what they are doing...if they want to create\n> > > short cut scripts, let *them* do it...\n> > \n> > Damn. You're going to make me read the docs?\n> \n> IMHO...yes. It would sure eliminate the \"how do I change the password\n> for a user\" if the person wanting to change that password had had to read\n> the docs in the first place, and witih know about the 'with password' part\n> of 'create user'...\n> \n> ...and, ummm...don't you have the docs memorized yet? :)\n\n\nCan we add some output to the createdb command to remind people it can\nbe done inside psql?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 21 Sep 1999 10:02:27 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": "On Tue, 21 Sep 1999, Bruce Momjian wrote:\n\n> > On Tue, 21 Sep 1999, Thomas Lockhart wrote:\n> > \n> > > > Force the admin to learn what they are doing...if they want to create\n> > > > short cut scripts, let *them* do it...\n> > > \n> > > Damn. You're going to make me read the docs?\n> > \n> > IMHO...yes. It would sure eliminate the \"how do I change the password\n> > for a user\" if the person wanting to change that password had had to read\n> > the docs in the first place, and witih know about the 'with password' part\n> > of 'create user'...\n> > \n> > ...and, ummm...don't you have the docs memorized yet? :)\n> \n> \n> Can we add some output to the createdb command to remind people it can\n> be done inside psql?\n\nHow about something that outputs what its executing? \n\nrunning 'psql -e \"create database <databasename\" template1' or something\nlike that?\n\nOr have the createdb command run 'man createdb' *grin*\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 21 Sep 1999 11:45:49 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": "> > Can we add some output to the createdb command to remind people it can\n> > be done inside psql?\n> \n> How about something that outputs what its executing? \n> \n> running 'psql -e \"create database <databasename\" template1' or something\n> like that?\n> \n> Or have the createdb command run 'man createdb' *grin*\n\nI was thinking:\n\n\tSee the CREATE DATABASE command for more options.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 21 Sep 1999 10:52:16 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2]"
},
{
"msg_contents": "> > Or have the createdb command run 'man createdb' *grin*\n> I was thinking:\n> See the CREATE DATABASE command for more options.\n\nI can't help thinking that this thread is trying to solve a problem\nthat isn't a problem. createdb works fine. You can do more from within\nsql. So what? Someone could, if they thought it was a problem, add\nmore capabilities to createdb, and an admin could choose to use it or\nnot.\n\nSeems to me that it works just fine for most cases right now; sure\ndoes for me...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 21 Sep 1999 15:21:31 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2]"
},
{
"msg_contents": "Vince Vielhaber wrote:\n> it, it's a big package - well worth the effort to learn it, but it's still\n> big. So after putting the poor newbie thru all of this trauma you want to\n> further traumatize him/her with man pages? :)\n\nYou know, in two years I've gotten quite cozy with PostgreSQL -- but at the\nbeginning it was not so. I remember how it felt to FIRST install -- it\nWAS intimidating.\n\nLet's just take a look at HOW big postgresql has become: the tarball is\nover six megabytes. It decompresses to around 23 megabytes. That is\nhalf the size of the Linux Kernel sources, twice the size of a minimal\nWindows 95 installation, and three times the size of a complete Windows\n3.1 installation.\n\nMy first hard drive on my ancient TRS-80 model 4 was 10 megabytes, and\nit seemed huge. The PostgreSQL source tree is two times larger than the\nmaximum volume size for that OS! In fact, a source tree that has\ncompleted a make is bigger than the largest volume size for MS-DOS\nversions prior to 4.0!\n\nIt has a ways to go to beat the 260+MB Oracle 8i installation package,\nbut it's still a big package.\n\nIt takes my Pentium 133 laptop running RedHat 6.0 a full 45 minutes to\nbuild an RPM set -- that's a ./configure; make; make install sequence\nwith several other operations added on. A machine that can do over 100\nMIPS takes 45 minutes. Think about it.\n\nI have to say that I agree with Marc on this one, and, Vince, you are\nthe one who convinced me.\n\nIf a newbie (to PostgreSQL) can successfully install from source, then\nsaid newbie won't have a single problem reading a man page. Even with\nthe RPM packaging -- if a newbie can find out that you need to su to\npostgres and run psql to get at the data (or set up some other client),\nthen said newbie really doesn't care whether the create db is a script\nexecuted from the unix shell or an SQL command executed from psql. 
\nWhether it's a shell script or a psql command is irrelevant -- the\nnewbie is having to learn a new command either way.\n\nIMHO\n\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 21 Sep 1999 12:05:51 -0400",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": "Bruce Momjian wrote:\n> > big. So after putting the poor newbie thru all of this trauma you want to\n> > further traumatize him/her with man pages? :)\n> >\n> \n> I agree our INSTALL is very large. Is there some way we can simplify\n> the install process?\n\nrpm --install postgresql*.rpm (or dpkg postgresql*.deb)\n\n;-P\n\n(I just HAD to do that.....) And, RPM will successfully install on more\nthan just RedHat Linux. See www.rpm.org\n\nFrankly, the installation (unless you are munging it into an\nFHS-compliant package) is a piece of cake as it is (of course, I say\nthat with two years of PostgreSQL experience under my belt). It\ncertainly is one of the easiest to install amongst packages of equal\nsize.\n\nMost people who come to know PostgreSQL either:\n1.)\tGet it on their Linux CD;\n2.)\tHeard about it from their sysadmin friends;\n3.)\tHeard about it from someone who installed from a Linux CD;\n4.)\tHeard about it from the documentation of whatever application/web\nserver they're using.\n\nIn my case, it was number 4, as RedHat hadn't yet shipped it when I read\nabout it in the AOLserver documentation.\n\nThe only true \"newbies\" are in groups 1 and 3, as the others have some\nexperience with system administration. And for those groups, it is\ngoing in as an RPM or other package (such as Oliver's .deb).\n\nAre there any OS's other than Linux where PostgreSQL is being shipped as\na stock part of the OS?? Given its BSD license, I would think the\n*BSD's would be shipping it.\n\nAll we can really do is provide better and better documentation, which\nhas been getting MUCH better than the halcyon days of 6.1.1 (which was\nmy first version to install).\n\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 21 Sep 1999 12:18:40 -0400",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": " Are there any OS's other than Linux where PostgreSQL is being shipped as\n a stock part of the OS?? Given its BSD license, I would think the\n *BSD's would be shipping it.\n\nNetBSD includes postgresql as a component of the package system.\n\nCheers,\nBrook\n",
"msg_date": "Tue, 21 Sep 1999 10:35:33 -0600 (MDT)",
"msg_from": "Brook Milligan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": "On Tue, 21 Sep 1999, Brook Milligan wrote:\n\n> Are there any OS's other than Linux where PostgreSQL is being shipped as\n> a stock part of the OS?? Given its BSD license, I would think the\n> *BSD's would be shipping it.\n> \n> NetBSD includes postgresql as a component of the package system.\n\nDitto FreeBSD.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 21 Sep 1999 12:44:58 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": "Brook Milligan wrote:\n> \n> Are there any OS's other than Linux where PostgreSQL is being shipped as\n> a stock part of the OS?? Given its BSD license, I would think the\n> *BSD's would be shipping it.\n> \n> NetBSD includes postgresql as a component of the package system.\n\nAh! Good.\n\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 21 Sep 1999 13:14:50 -0400",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": "On Tue, 21 Sep 1999, Bruce Momjian wrote:\n\n> > > Can we add some output to the createdb command to remind people it can\n> > > be done inside psql?\n> > \n> > How about something that outputs what it's executing? \n> > \n> > running 'psql -e \"create database <databasename>\" template1' or something\n> > like that?\n> > \n> > Or have the createdb command run 'man createdb' *grin*\n> \n> I was thinking:\n> \n> \tSee the CREATE DATABASE command for more options.\n\nIn bold, flashing letters? :)\n\nYa, that would be perfect...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 21 Sep 1999 16:02:55 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2]"
},
{
"msg_contents": "On Tue, 21 Sep 1999, Brook Milligan wrote:\n\n> Are there any OS's other than Linux where PostgreSQL is being shipped as\n> a stock part of the OS?? Given its BSD license, I would think the\n> *BSD's would be shipping it.\n> \n> NetBSD includes postgresql as a component of the package system.\n\nFreeBSD as both ports/package...but not part of standard install...we\ndon't really use it for anything as part of the OS...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 21 Sep 1999 16:19:44 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": "> Are there any OS's other than Linux where PostgreSQL is being shipped as\n> a stock part of the OS?? Given its BSD license, I would think the\n> *BSD's would be shipping it.\n> \n> NetBSD includes postgresql as a component of the package system.\n\nBSDI is considering it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 21 Sep 1999 15:26:08 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: HISTORY for 6.5.2"
},
{
"msg_contents": "On Sep 20, Bruce Momjian mentioned:\n\n> I agree our INSTALL is very large. Is there some way we can simplify\n> the install process?\n\nOn the one hand a database server is probably not your everyday\n./configure && make && make install program you get from freshmeat and you\ndo want to put some time into a proper installation. On the other hand the\nINSTALL file is really way too long and makes for unpleasant reading.\n\nHere are a couple of ideas.\n\n* Chapter 2 \"Ports\" should be moved to the README file (has nothing to do\nwith the actual installation).\n\n* Move the gory details of item 5 (flex) to a separate file (README.flex).\n\n* Move the locale stuff into a separate file.\n\n* Same with Kerberos\n\n* Move the release notes at the end to CHANGELOG.\n\nThat should make the file somewhat smaller; then also it is really too\nverbose at times and is formatted really oddly, at least on my system.\n\nOkay, now I really went out of my way and redid the whole thing. You'll\nfind the result attached. This is merely an idea of what I would consider\nsimpler. I removed some inconsistencies, things that were unnecessary, too\ncomplicated, etc. Okay, I removed a lot of stuff, but most of the stuff\npeople can really figure out themselves if they need it in the first\nplace. And I shrunk the thing to 25%.\n\nPerhaps there should be a separate Install FAQ of the sort \"My shell said\n'export: Command not found.'\" or \"What's a shared library?\" to move the\nreally annoying stuff out of people's ways.\n\nComments?\n\n-- \nPeter Eisentraut - [email protected]\nhttp://yi.org/peter-e",
"msg_date": "Tue, 21 Sep 1999 23:46:19 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "INSTALL file (was Re: [HACKERS] Re: HISTORY for 6.5.2)"
},
{
"msg_contents": "\nOn 21-Sep-99 Peter Eisentraut wrote:\n> On Sep 20, Bruce Momjian mentioned:\n> \n>> I agree our INSTALL is very large. Is there some way we can simplify\n>> the install process?\n> \n> On the one hand a database server is probably not your everyday\n> ./configure && make && make install program you get from freshmeat and you\n> do want to put some time into a proper installation. \n\nI disagree. When you remove the 'change to this user, install from this\ndirectory' stuff and get down to the actual install, it is just as simple\nas ./configure && make && make install. That's how I've done it the last\nfew times. And each time I've enabled different features.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n",
"msg_date": "Tue, 21 Sep 1999 17:59:07 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: INSTALL file (was Re: [HACKERS] Re: HISTORY for 6.5.2)"
},
{
"msg_contents": "> > I agree our INSTALL is very large. Is there some way we can simplify\n> > the install process?\n> On the one hand a database server is probably not your everyday\n> ./configure && make && make install program you get from freshmeat and you\n> do want to put some time into a proper installation. On the other hand the\n> INSTALL file is really way too long and makes for unpleasant reading.\n> Here are a couple of ideas.\n> That should make the file somewhat smaller, then also it is really to\n> verbose at times and is formatted really oddly, at least on my system.\n\nFormatted oddly, eh? ;)\n\nWe actually generate this file from files in doc/src/sgml/. We\ngenerate RTF from the sgml sources, and then format to ascii text\nusing ApplixWare or <insert your favorite word processor here>. The\nformatting options at that point are fairly limited afaik.\n\n> Okay, now I really went out of my way and redid the whole thing. You'll\n> find the result attached. This is merely an idea of what I would consider\n> simpler. I removed some inconsistencies, things that were unnecessary, too\n> complicated, etc. Okay, I removed a lot of stuff, but most of the stuff\n> people can really figure out themselves if they need them in the first\n> place. And I shrunk the thing to 25%.\n\nSounds great, except for the \"people can figure it out for themselves\"\npart. But as long as the info is available somewhere in the docs, and\nas long as people can get to them somehow if they need, then there is\nprobably no need for them to be in the INSTALL file.\n\n> Perhaps there should be a separate Install FAQ of the sort \"My shell said\n> 'export: Command not found.'\" or \"What's a shared library?\" to move the\n> really annoying stuff out of people's ways.\n\nAnd afaik once people have gotten to that point, the *real* docs will\nhave already been installed, so we can have that info there. 
I haven't\nhad a chance to look at your specific suggestions/changes, but I'm\nsure they are a step forward.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 22 Sep 1999 15:48:29 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [DOCS] INSTALL file (was Re: [HACKERS] Re: HISTORY for 6.5.2)"
},
{
"msg_contents": "> > > I agree our INSTALL is very large. Is there some way we can simplify\n> > > the install process?\n> > On the one hand a database server is probably not your everyday\n> > ./configure && make && make install program you get from freshmeat and you\n> > do want to put some time into a proper installation. On the other hand the\n> > INSTALL file is really way too long and makes for unpleasant reading.\n> > Here are a couple of ideas.\n> > That should make the file somewhat smaller, then also it is really to\n> > verbose at times and is formatted really oddly, at least on my system.\n> \n> Formatted oddly, eh? ;)\n> \n> We actually generate this file from files in doc/src/sgml/. We\n> generate RTF from the sgml sources, and then format to ascii text\n> using ApplixWare or <insert your favorite word processor here>. The\n> formatting options at that point are fairly limited afaik.\n\nMy recommendation is to output HTML, and use lynx to output ASCII. I use:\n\n\tlynx -force_html -dump -hiddenlinks=ignore -nolist \"$@\"\n\nYou would be surprised how much better it looks, assuming the HTML is\ngood.\n\n> > Okay, now I really went out of my way and redid the whole thing. You'll\n> > find the result attached. This is merely an idea of what I would consider\n> > simpler. I removed some inconsistencies, things that were unnecessary, too\n> > complicated, etc. Okay, I removed a lot of stuff, but most of the stuff\n> > people can really figure out themselves if they need them in the first\n> > place. And I shrunk the thing to 25%.\n> \n> Sounds great, except for the \"people can figure it out for themselves\"\n> part. But as long as the info is available somewhere in the docs, and\n> as long as people can get to them somehow if they need, then there is\n> probably no need for them to be in the INSTALL file.\n> \n\nOur big problem with the ASCII file is that we can't nest text, like we\ncan in HTML. 
In HTML, we can say \"Do this, and if it doesn't work,\nclick HERE\", and have a page to describe known problems. This allows\nus to be brief, but give additional detail. Footnotes in a book have a\nsimilar purpose. Maybe we can implement footnotes in the ASCII file. \nThat is how I would do it in LyX.\n\n> > Perhaps there should be a separate Install FAQ of the sort \"My shell said\n> > 'export: Command not found.'\" or \"What's a shared library?\" to move the\n> > really annoying stuff out of people's ways.\n> \n> And afaik once people have gotten to that point, the *real* docs will\n> have already been installed, so we can have that info there. I haven't\n> had a chance to look at your specific suggestions/changes, but I'm\n> sure they are a step forward.\n\nInteresting idea, but you would have to point them to the exact spot. \nThat may be tough.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 22 Sep 1999 12:45:49 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [DOCS] INSTALL file (was Re: [HACKERS] Re: HISTORY for 6.5.2)"
},
{
"msg_contents": "\nThomas, can you comment on this. I think he has a good point. Can we\nmove some of the stuff to footnotes at the end of the file? Can you try\nconverting to HTML first, then to text to see if it formats better:\n\n\tlynx -force_html -dump -hiddenlinks=ignore -nolist \"$@\"\n\n\n> On Sep 20, Bruce Momjian mentioned:\n> \n> > I agree our INSTALL is very large. Is there some way we can simplify\n> > the install process?\n> \n> On the one hand a database server is probably not your everyday\n> ./configure && make && make install program you get from freshmeat and you\n> do want to put some time into a proper installation. On the other hand the\n> INSTALL file is really way too long and makes for unpleasant reading.\n> \n> Here are a couple of ideas.\n> \n> * Chapter 2 \"Ports\" should be moved to the README file (has nothing to do\n> with the actual installation).\n> \n> * Move the gory details of item 5 (flex) to a separate file (README.flex).\n> \n> * Move the locale stuff into a separate file.\n> \n> * Same with Kerberos\n> \n> * Move the release notes at the end to CHANGELOG.\n> \n> That should make the file somewhat smaller, then also it is really to\n> verbose at times and is formatted really oddly, at least on my system.\n> \n> Okay, now I really went out of my way and redid the whole thing. You'll\n> find the result attached. This is merely an idea of what I would consider\n> simpler. I removed some inconsistencies, things that were unnecessary, too\n> complicated, etc. Okay, I removed a lot of stuff, but most of the stuff\n> people can really figure out themselves if they need them in the first\n> place. 
And I shrunk the thing to 25%.\n> \n> Perhaps there should be a separate Install FAQ of the sort \"My shell said\n> 'export: Command not found.'\" or \"What's a shared library?\" to move the\n> really annoying stuff out of people's ways.\n> \n> Comments?\n> \n> -- \n> Peter Eisentraut - [email protected]\n> http://yi.org/peter-e\nContent-Description: New INSTALL file?\n\n[Attachment, skipping...]\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 28 Sep 1999 00:44:07 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: INSTALL file (was Re: [HACKERS] Re: HISTORY for 6.5.2)"
},
{
"msg_contents": "\nI hope we can get everyone with ideas together before 7.0 is released.\n\n\n> On Sep 20, Bruce Momjian mentioned:\n> \n> > I agree our INSTALL is very large. Is there some way we can simplify\n> > the install process?\n> \n> On the one hand a database server is probably not your everyday\n> ./configure && make && make install program you get from freshmeat and you\n> do want to put some time into a proper installation. On the other hand the\n> INSTALL file is really way too long and makes for unpleasant reading.\n> \n> Here are a couple of ideas.\n> \n> * Chapter 2 \"Ports\" should be moved to the README file (has nothing to do\n> with the actual installation).\n> \n> * Move the gory details of item 5 (flex) to a separate file (README.flex).\n> \n> * Move the locale stuff into a separate file.\n> \n> * Same with Kerberos\n> \n> * Move the release notes at the end to CHANGELOG.\n> \n> That should make the file somewhat smaller, then also it is really to\n> verbose at times and is formatted really oddly, at least on my system.\n> \n> Okay, now I really went out of my way and redid the whole thing. You'll\n> find the result attached. This is merely an idea of what I would consider\n> simpler. I removed some inconsistencies, things that were unnecessary, too\n> complicated, etc. Okay, I removed a lot of stuff, but most of the stuff\n> people can really figure out themselves if they need them in the first\n> place. 
And I shrunk the thing to 25%.\n> \n> Perhaps there should be a separate Install FAQ of the sort \"My shell said\n> 'export: Command not found.'\" or \"What's a shared library?\" to move the\n> really annoying stuff out of people's ways.\n> \n> Comments?\n> \n> -- \n> Peter Eisentraut - [email protected]\n> http://yi.org/peter-e\nContent-Description: New INSTALL file?\n\n[Attachment, skipping...]\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 29 Nov 1999 18:16:33 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: INSTALL file (was Re: [HACKERS] Re: HISTORY for 6.5.2)"
},
{
"msg_contents": "\nOn 29-Nov-99 Bruce Momjian wrote:\n> \n> I hope we can get everyone with ideas together before 7.0 is released.\n\nIt can be greatly simplified but will require changes to configure which\nI don't think I can do. I haven't really gotten to know autoconf yet and\nI think Peter said he's tied up till after the first of the year but he\nstill wants to dig into it. Is after the first too late?\n\nVince.\n\n\n> \n> \n>> On Sep 20, Bruce Momjian mentioned:\n>> \n>> > I agree our INSTALL is very large. Is there some way we can simplify\n>> > the install process?\n>> \n>> On the one hand a database server is probably not your everyday\n>> ./configure && make && make install program you get from freshmeat and you\n>> do want to put some time into a proper installation. On the other hand the\n>> INSTALL file is really way too long and makes for unpleasant reading.\n>> \n>> Here are a couple of ideas.\n>> \n>> * Chapter 2 \"Ports\" should be moved to the README file (has nothing to do\n>> with the actual installation).\n>> \n>> * Move the gory details of item 5 (flex) to a separate file (README.flex).\n>> \n>> * Move the locale stuff into a separate file.\n>> \n>> * Same with Kerberos\n>> \n>> * Move the release notes at the end to CHANGELOG.\n>> \n>> That should make the file somewhat smaller, then also it is really to\n>> verbose at times and is formatted really oddly, at least on my system.\n>> \n>> Okay, now I really went out of my way and redid the whole thing. You'll\n>> find the result attached. This is merely an idea of what I would consider\n>> simpler. I removed some inconsistencies, things that were unnecessary, too\n>> complicated, etc. Okay, I removed a lot of stuff, but most of the stuff\n>> people can really figure out themselves if they need them in the first\n>> place. 
And I shrunk the thing to 25%.\n>> \n>> Perhaps there should be a separate Install FAQ of the sort \"My shell said\n>> 'export: Command not found.'\" or \"What's a shared library?\" to move the\n>> really annoying stuff out of people's ways.\n>> \n>> Comments?\n>> \n>> -- \n>> Peter Eisentraut - [email protected]\n>> http://yi.org/peter-e\n> Content-Description: New INSTALL file?\n> \n> [Attachment, skipping...]\n> \n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> Have you seen http://www.pop4.net?\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n",
"msg_date": "Mon, 29 Nov 1999 18:28:46 -0500 (EST)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: INSTALL file (was Re: [HACKERS] Re: HISTORY for 6.5.2)"
},
{
"msg_contents": "> \n> On 29-Nov-99 Bruce Momjian wrote:\n> > \n> > I hope we can get everyone with ideas together before 7.0 is released.\n> \n> It can be greatly simplified but will require changes to configure which\n> I don't think I can do. I haven't really gotten to know autoconf yet and\n> I think Peter said he's tied up till after the first of the year but he\n> still wants to dig into it. Is after the first too late?\n\nNo, I don't think so.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 29 Nov 1999 18:38:41 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: INSTALL file (was Re: [HACKERS] Re: HISTORY for 6.5.2)"
}
] |
[
{
"msg_contents": "Hey hackers:\nI've been using pg_dump in a typical three db setup: development,\nstaging, and live. The output of pg_dump is ordered by oid, so as the\ndb's histories diverge, the output does as well. That is, if identical\ntables get created in the development and staging dbs, for example, but\nin a different order, I can't use diff to test this. I was wondering if\nthere is any reason why the order couldn't be by tablename, instead of\noid, since the ordering of creation of sequences and types and such is\ntaken care of. \n\nAh, I think I just figured it out: it's that pesky object\nsupport, isn't it? In order to use a table (class) as a member (field)\nof another table, it has to exist first, and the only thing in the\nsystem table that ensures that is oid. Bummer. Hmm, it'd still be useful\nfor comparison purposes, but it wouldn't guarantee correct SQL scripts.\nPerhaps I'll just hack my local copy with an extra switch for \"class\nname order output\". Anyone else want it?\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Sun, 12 Sep 1999 20:18:56 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_dump table order"
},
{
"msg_contents": "On Sun, Sep 12, 1999 at 08:18:56PM -0500, Ross J. Reedstrom wrote:\n> \n> Ah I think I just figured it out: it's that pesky object\n> support, isn't it? In order to use a table (class) as a member (field)\n> of another table, it has to exist first, and the only thing in the\n> system table that ensures that is oid. Bummer. Hmm, it'd still be useful\n\nTalking to myself: \"Gee Ross, why don't you read the friendly comments\nin the source you just found the order by oid in, explaining exactly\nthis point?\"\n\nRoss \"the blind\"\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Sun, 12 Sep 1999 20:35:27 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] pg_dump table order"
},
{
"msg_contents": "> Hey hackers:\n> I've been using pg_dump in a typical three db setup: development,\n> staging, and live. The output of pg_dump is ordered by oid, so as the\n> db's histories diverge, the output does as well. That is, if identical\n> tables get created in the development and staging dbs, for example, but\n> in a different order, I can't use diff to test this. I was wondering if\n> there is any reason why the order couldn't be by tablename, instead of\n> oid, since the ordering of creation of sequences and types and such is\n> taken care of. \n> \n> Ah I think I just figured it out: it's that pesky object\n> support, isn't it? In order to use a table (class) as a member (field)\n> of another table, it has to exist first, and the only thing in the\n> system table that ensures that is oid. Bummer. Hmm, it'd still be useful\n> for comparison purposes, but it wouldn't guarantee correct SQL scripts.\n> Perhaps I'll just hack my local copy with an extra switch for \"class\n> name order output\". Anyone else want it?\n> \n\nI thought someone already did that. It may be in 6.5.1.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 12 Sep 1999 22:05:35 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump table order"
},
{
"msg_contents": "\"Ross J. Reedstrom\" <[email protected]> writes:\n> Ah I think I just figured it out: it's that pesky object\n> support, isn't it? In order to use a table (class) as a member (field)\n> of another table, it has to exist first, and the only thing in the\n> system table that ensures that is oid. Bummer. Hmm, it'd still be useful\n> for comparision purposes, but it wouldn't gaurantee correct SQL scripts.\n> Perhaps I'll just hack my local copy with an extra switch for \"class\n> name order output\". Anyone else want it?\n\nBetter idea: make pg_dump smarter, so that it sorts the tables by name\nas far as possible without breaking inheritance and membership\ndependencies. It already retrieves the inheritance graph, and it could\ncertainly figure column-type dependencies too. I don't think anyone\nwould object to producing the output in a more meaningful order, so\nI see no need for a switch if you can make this work.\n\nI used to know enough about topological sorts to sketch how this ought\nto work, but that was years ago :-(. I do see that the simplest\napproach to a sort comparison function, \"if a depends on b then say a>b,\nelse say result of comparing name(a) and name(b)\", will not work because\nit's not transitive.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 Sep 1999 10:41:04 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump table order "
},
{
"msg_contents": "> Better idea: make pg_dump smarter, so that it sorts the tables by name\n> as far as possible without breaking inheritance and membership\n> dependencies. It already retrieves the inheritance graph, and it could\n> certainly figure column-type dependencies too. I don't think anyone\n> would object to producing the output in a more meaningful order, so\n> I see no need for a switch if you can make this work.\n> \n> I used to know enough about topological sorts to sketch how this ought\n> to work, but that was years ago :-(. I do see that the simplest\n> approach to a sort comparison function, \"if a depends on b then say a>b,\n> else say result of comparing name(a) and name(b)\", will not work because\n> it's not transitive.\n\nI know someone fixed some of that recently, and I thought it was in 6.5.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 13 Sep 1999 11:35:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump table order"
},
{
"msg_contents": "\nOn 13-Sep-99 Bruce Momjian wrote:\n>> Better idea: make pg_dump smarter, so that it sorts the tables by name\n>> as far as possible without breaking inheritance and membership\n>> dependencies. It already retrieves the inheritance graph, and it could\n>> certainly figure column-type dependencies too. I don't think anyone\n>> would object to producing the output in a more meaningful order, so\n>> I see no need for a switch if you can make this work.\n>> \n>> I used to know enough about topological sorts to sketch how this ought\n>> to work, but that was years ago :-(. I do see that the simplest\n>> approach to a sort comparison function, \"if a depends on b then say a>b,\n>> else say result of comparing name(a) and name(b)\", will not work because\n>> it's not transitive.\n> \n> I know someone fixed some of that recently, and I thought it was in 6.5.\n\nUnfortunately not: if I use some functions in the CONSTRAINT clause of\nCREATE TABLE, I can't restore from a backup made by pg_dump.\nIt would be nice to always dump functions first.\n\n---\nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n* There will come soft rains ...\n",
"msg_date": "Mon, 13 Sep 1999 21:31:35 +0400 (MSD)",
"msg_from": "Dmitry Samersoff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump table order"
}
] |
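Tom Lane's suggestion in the thread above can be sketched as a topological sort whose ready set is kept in alphabetical order. The naive pairwise comparison he mentions ("if a depends on b then say a>b, else compare names") is not transitive, so a comparison-based sort cannot work; Kahn's algorithm with a heap over the ready tables does. This is an illustration only (pg_dump itself is written in C, and the table names and the `depends_on` shape here are invented):

```python
# Sketch: order tables by name as far as the dependency graph allows.
# Kahn's algorithm; the "ready" set (tables whose dependencies are all
# already emitted) is a heap, so we always emit the alphabetically
# first table that is safe to create next.
import heapq

def dump_order(tables, depends_on):
    """tables: iterable of table names.
    depends_on: dict mapping a table to the set of tables that must be
    created first (inheritance parents, column row-types, ...).
    Assumes every name in depends_on values also appears in tables."""
    deps = {t: set(depends_on.get(t, ())) for t in tables}
    dependents = {t: [] for t in tables}
    for t, ds in deps.items():
        for d in ds:
            dependents[d].append(t)
    ready = [t for t, ds in deps.items() if not ds]
    heapq.heapify(ready)              # alphabetical among ready tables
    order = []
    while ready:
        t = heapq.heappop(ready)
        order.append(t)
        for u in dependents[t]:       # t is emitted; release dependents
            deps[u].discard(t)
            if not deps[u]:
                heapq.heappush(ready, u)
    if len(order) != len(deps):
        raise ValueError("circular dependency among tables")
    return order
```

With no dependencies the output is plain name order, so dumps of two databases with the same schema diff cleanly regardless of creation order; a dependency only forces a table later than the tables it needs.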
[
{
"msg_contents": "\n\nCalling setBytes or any attempt to do LargeObject.write(byte[])\nreturns\n\n[on the java side]\nOid gotten from lom.create() is 641377\nFastPath call returned ERROR: lo_write: invalid large obj descriptor (0)\n\n at postgresql.fastpath.Fastpath.fastpath(Fastpath.java:141)\n at postgresql.fastpath.Fastpath.fastpath(Fastpath.java:188)\n at postgresql.largeobject.LargeObject.write(LargeObject.java:173)\n at RestoreBlobs.main(RestoreBlobs.java:298)\n\n\nThe -d3 log of the postmaster has\n\nnitPostgres\nStartTransactionCommand\nquery: set datestyle to 'ISO'\nProcessUtility: set datestyle to 'ISO'\nCommitTransactionCommand\nStartTransactionCommand\nquery: select proname, oid from pg_proc where proname = 'lo_open' or proname = 'lo_close' or proname = 'lo_creat' or proname = 'lo_unl\\\nink' or proname = 'lo_lseek' or proname = 'lo_tell' or proname = 'loread' or proname = 'lowrite'\nProcessQuery\nCommitTransactionCommand\nStartTransactionCommand\nCommitTransactionCommand\nStartTransactionCommand\nCommitTransactionCommand\nStartTransactionCommand\nERROR: lo_write: invalid large obj descriptor (0)\nAbortCurrentTransaction\npq_recvbuf: unexpected EOF on client connection\nproc_exit(0) [#0]\nshmem_exit(0) [#0]\nexit(0)\npq_recvbuf: unexpected EOF on client connection\nproc_exit(0) [#0]\nshmem_exit(0) [#0]\nexit(0)\npq_recvbuf: unexpected EOF on client connection\nproc_exit(0) [#0]\nshmem_exit(0) [#0]\nexit(0)\n",
"msg_date": "Sun, 12 Sep 1999 21:30:49 -0700",
"msg_from": "Jason Venner <[email protected]>",
"msg_from_op": true,
"msg_subject": "jdbc1 large objects and 651 -- does it work for anyone"
}
] |
[
{
"msg_contents": "This looks like you have either not got AutoCommit set to false, or are\ncalling commit() between calls. As of 6.5.x LargeObjects need to be\nwrapped within a transaction, but if I'm right, all open LO's are closed\non commit().\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n\n-----Original Message-----\nFrom: Jason Venner [mailto:[email protected]]\nSent: 13 September 1999 05:31\nTo: [email protected]\nSubject: [HACKERS] jdbc1 large objects and 651 -- does it work for\nanyone\n\n\n\n\nCalling setBytes or any attempt to do LargeObject.write(byte[])\nreturns\n\n[on the java side]\nOid gotten from lom.create() is 641377\nFastPath call returned ERROR: lo_write: invalid large obj descriptor\n(0)\n\n at postgresql.fastpath.Fastpath.fastpath(Fastpath.java:141)\n at postgresql.fastpath.Fastpath.fastpath(Fastpath.java:188)\n at\npostgresql.largeobject.LargeObject.write(LargeObject.java:173)\n at RestoreBlobs.main(RestoreBlobs.java:298)\n\n\nThe -d3 log of the postmaster has\n\nnitPostgres\nStartTransactionCommand\nquery: set datestyle to 'ISO'\nProcessUtility: set datestyle to 'ISO'\nCommitTransactionCommand\nStartTransactionCommand\nquery: select proname, oid from pg_proc where proname = 'lo_open' or\nproname = 'lo_close' or proname = 'lo_creat' or proname = 'lo_unl\\\nink' or proname = 'lo_lseek' or proname = 'lo_tell' or proname =\n'loread' or proname = 'lowrite'\nProcessQuery\nCommitTransactionCommand\nStartTransactionCommand\nCommitTransactionCommand\nStartTransactionCommand\nCommitTransactionCommand\nStartTransactionCommand\nERROR: lo_write: invalid large obj descriptor (0)\nAbortCurrentTransaction\npq_recvbuf: unexpected EOF on client connection\nproc_exit(0) [#0]\nshmem_exit(0) [#0]\nexit(0)\npq_recvbuf: unexpected EOF on client connection\nproc_exit(0) [#0]\nshmem_exit(0) [#0]\nexit(0)\npq_recvbuf: unexpected EOF on client 
connection\nproc_exit(0) [#0]\nshmem_exit(0) [#0]\nexit(0)\n\n************\n",
"msg_date": "Mon, 13 Sep 1999 07:39:01 +0100",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] jdbc1 large objects and 651 -- does it work for any one"
},
{
"msg_contents": "\nOkay, I turned autocommit off (used to have to be autocommit on in\n6.3.2) Note: I am running --enable-cassert At the current time there\nare 2 open connections to the database, one essentially idle, the\nother doing the image work.\n\nI store about 3200 LO's, and on commit get the following\n\nfrom the java\n\nabout to update on 650697 backup/19990911-175649/6654 4\nabout to update on 650698 backup/19990911-175649/6655 3\nThe backend has broken the connection. Possibly the action you have attempted has caused it to close.\n\n at postgresql.PG_Stream.ReceiveChar(PG_Stream.java:173)[1] Done bin/postmaster -N 128 -d 3 -i >& /tmp/pgl\\\nog\n\n at postgresql.Connection.ExecSQL(Connection.java:309)\n at postgresql.jdbc1.Connection.commit(Connection.java:173)\n at RestoreBlobs.restoreWithLocks(RestoreBlobs.java:72)\n at RestoreBlobs.main(RestoreBlobs.java:301)\n./db_restore_script: unable to restore all of the images, failed on 1\n\n\n[note the postmaster bailed]\n\nStartTransactionCommand\nquery: update invoice set invoice = 712005 where seq = 98\nProcessQuery\nCommitTransactionCommand\nStartTransactionCommand\nCommitTransactionCommand\nStartTransactionCommand\nCommitTransactionCommand\nStartTransactionCommand\nCommitTransactionCommand\nStartTransactionCommand\nCommitTransactionCommand\nStartTransactionCommand\nquery: update invoice set invoice = 712020 where seq = 99\nProcessQuery\nCommitTransactionCommand\nStartTransactionCommand\nCommitTransactionCommand\nStartTransactionCommand\nCommitTransactionCommand\nStartTransactionCommand\nCommitTransactionCommand\nStartTransactionCommand\nCommitTransactionCommand\nStartTransactionCommand\nquery: update invoice set invoice = 712035 where seq = 103\nProcessQuery\nCommitTransactionCommand\nStartTransactionCommand\nquery: commit\nProcessUtility: commit\nCommitTransactionCommand\nNOTICE: LockReleaseAll: xid loop detected, giving up\nStartTransactionCommand\nquery: begin\nProcessUtility: 
begin\nCommitTransactionCommand\nStartTransactionCommand\nquery: commit\nProcessUtility: commit\nNOTICE: SIReadEntryData: cache state reset\nTRAP: Failed Assertion(\"!(RelationNameCache->hctl->nkeys == 10):\", File: \"relcache.c\", Line: 1458)\n\n!(RelationNameCache->hctl->nkeys == 10) (0) [No such file or directory]\nbin/postmaster: reaping dead processes...\nbin/postmaster: CleanupProc: pid 9768 exited with status 134\nbin/postmaster: CleanupProc: sending SIGUSR1 to process 9759\nNOTICE: Message from PostgreSQL backend:\n The Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\n I have rolled back the current transaction and am going to terminate your database system connection and exit.\n Please reconnect to the database system and repeat your query.\nbin/postmaster: CleanupProc: sending SIGUSR1 to process 9756\nNOTICE: Message from PostgreSQL backend:\n The Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\n I have rolled back the current transaction and am going to terminate your database system connection and exit.\n Please reconnect to the database system and repeat your query.\nbin/postmaster: CleanupProc: reinitializing shared memory and semaphores\nshmem_exit(0) [#0]\nbinding ShmemCreate(key=52df3d, size=10292224)\nIpcMemoryCreate: shmget failed (Cannot allocate memory) key=5431101, size=10292224, permission=600\nFATAL 1: ShmemCreate: cannot create region\nproc_exit(0) [#0]\nshmem_exit(0) [#0]\nexit(0)\n\n",
"msg_date": "Mon, 13 Sep 1999 13:03:05 -0700",
"msg_from": "Jason Venner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] jdbc1 large objects and 651 -- does it work for any one"
}
] |
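Peter's diagnosis (large objects must be wrapped in a transaction as of 6.5.x, and commit closes all open descriptors) can be modeled with a small simulation. This is not the real backend or JDBC driver; `SimulatedBackend` and its behavior are a toy illustration of the rule, written to reproduce the `lo_write: invalid large obj descriptor (0)` symptom from the logs above:

```python
# Toy model of the 6.5.x large-object rule: a descriptor is only valid
# inside the transaction that opened it, and every commit (including
# the implicit one JDBC autocommit issues around each call) closes all
# open descriptors.  Simulation only -- not the real server.
class SimulatedBackend:
    def __init__(self):
        self.in_transaction = False
        self.open_descriptors = set()
        self.next_fd = 1

    def begin(self):
        self.in_transaction = True

    def commit(self):
        self.in_transaction = False
        self.open_descriptors.clear()   # commit closes every open LO

    def lo_open(self, oid):
        if not self.in_transaction:
            # Autocommit: the implicit transaction around lo_open has
            # already ended by the time the client uses the result, so
            # the descriptor it holds is dead -- hence the "(0)".
            return 0
        fd = self.next_fd
        self.next_fd += 1
        self.open_descriptors.add(fd)
        return fd

    def lo_write(self, fd, data):
        if fd not in self.open_descriptors:
            raise RuntimeError(
                "lo_write: invalid large obj descriptor (%d)" % fd)
        return len(data)
```

Calling `lo_open` and then `lo_write` outside a transaction raises the same error as in the log; bracketing the calls with `begin()`/`commit()` makes the write succeed. The JDBC equivalent is `conn.setAutoCommit(false)` before the LargeObject work and `conn.commit()` only after the object is closed.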