[ { "msg_contents": ">On Sat, 17 Apr 1999, Tom Lane wrote:\n>\n>> \n>> How about this: do something like this\n>> \t#ifdef CYGWIN32\n>> \t#define time_zone _timezone\n>> \t#else\n>> \t#define time_zone timezone\n>> \t#endif\n>> in the header, and then change the field names to time_zone in the code?\n>\n>Erk, I think I like that one less then the other option...at least the\n>other option the #ifdef for _timezone is pseudo-documented...defining and\n>usin ga time_zone, IMHO, would confuse the code more... \n>\n>In light of the fact that CYGWIN is defining a timezone, but that it isn't\n>useable, I think that the best solution presented so far is the first\n>one...just #ifdef the relevant sections of code...\n\nI have committed changes originally suggested by Yutaka Tanida\n<[email protected]> for win32 support.\n(I have modified the patch for dt.c a little bit due to the changes\nmade after 6.5b1 released).\n--\nTatsuo Ishii\n\n>--- src/backend/utils/adt/nabstime.c.orig\tSun Feb 21 12:49:32 1999\n>+++ src/backend/utils/adt/nabstime.c\tFri Apr 16 14:34:12 1999\n>@@ -77,7 +77,12 @@\n> \t\ttm = localtime(&now);\n> \n> \t\tCDayLight = tm->tm_isdst;\n>-\t\tCTimeZone = (tm->tm_isdst ? (timezone - 3600) : timezone);\n>+\t\tCTimeZone =\n>+#ifdef __CYGWIN32__\n>+\t\t(tm->tm_isdst ? (_timezone - 3600) : _timezone);\n>+#else\n>+\t\t(tm->tm_isdst ? (timezone - 3600) : timezone);\n>+#endif \n> \t\tstrcpy(CTZName, tzname[tm->tm_isdst]);\n> #else\n> #error USE_POSIX_TIME defined but no time zone available\n>@@ -167,7 +172,11 @@\n> \t\tstrcpy(tzn, tm->tm_zone);\n> #elif defined(HAVE_INT_TIMEZONE)\n> \tif (tzp != NULL)\n>+#ifdef __CYGWIN__\n>+\t\t*tzp = (tm->tm_isdst ? (_timezone - 3600) : _timezone);\n>+#else\n> \t\t*tzp = (tm->tm_isdst ? 
(timezone - 3600) : timezone);\n>+#endif\n> \tif (tzn != NULL)\n> \t\tstrcpy(tzn, tzname[tm->tm_isdst]);\n> #else\n>--- src/backend/utils/adt/dt.c.orig\tSat Mar 20 11:31:45 1999\n>+++ src/backend/utils/adt/dt.c\tFri Apr 16 14:35:56 1999\n>@@ -1476,7 +1476,13 @@\n> #if defined(HAVE_TM_ZONE)\n> \t\t\t\ttz = -(tm->tm_gmtoff);\t/* tm_gmtoff is Sun/DEC-ism */\n> #elif defined(HAVE_INT_TIMEZONE)\n>-\t\t\t\ttz = ((tm->tm_isdst > 0) ? (timezone - 3600) : timezone);\n>+\n>+#ifdef __CYGWIN__\n>+\t\t\t\ttz = (tm->tm_isdst ? (_timezone - 3600) : _timezone);\n>+#else\n>+\t\t\t\ttz = (tm->tm_isdst ? (timezone - 3600) : timezone);\n>+#endif\n>+\n> #else\n> #error USE_POSIX_TIME is defined but neither HAVE_TM_ZONE or HAVE_INT_TIMEZONE are defined\n> #endif\n>@@ -2474,7 +2480,11 @@\n> \t\t\tif (tzn != NULL)\n> \t\t\t\t*tzn = (char *)tm->tm_zone;\n> #elif defined(HAVE_INT_TIMEZONE)\n>+#ifdef __CYGWIN__\n>+\t\t\t*tzp = (tm->tm_isdst ? (_timezone - 3600) : _timezone);\n>+#else\n> \t\t\t*tzp = (tm->tm_isdst ? (timezone - 3600) : timezone);\n>+#endif\n> \t\t\tif (tzn != NULL)\n> \t\t\t\t*tzn = tzname[(tm->tm_isdst > 0)];\n> #else\n>@@ -3091,7 +3101,11 @@\n> #if defined(HAVE_TM_ZONE)\n> \t\t\t*tzp = -(tm->tm_gmtoff);\t/* tm_gmtoff is Sun/DEC-ism */\n> #elif defined(HAVE_INT_TIMEZONE)\n>+#ifdef __CYGWIN__\n>+\t\t\t*tzp = ((tm->tm_isdst > 0) ? (_timezone - 3600) : _timezone);\n>+#else\n> \t\t\t*tzp = ((tm->tm_isdst > 0) ? (timezone - 3600) : timezone);\n>+#endif\n> #else\n> #error USE_POSIX_TIME is defined but neither HAVE_TM_ZONE or HAVE_INT_TIMEZONE are defined\n> #endif\n>--- src/backend/utils/adt/datetime.c.orig\tMon Mar 15 01:40:15 1999\n>+++ src/backend/utils/adt/datetime.c\tFri Apr 16 14:30:17 1999\n>@@ -383,7 +383,11 @@\n> \t\tif (tzn != NULL)\n> \t\t\t*tzn = (char *)tm->tm_zone;\n> #elif defined(HAVE_INT_TIMEZONE)\n>+#ifdef __CYGWIN__\n>+\t\t*tzp = (tm->tm_isdst ? (_timezone - 3600) : _timezone);\n>+#else\n> \t\t*tzp = (tm->tm_isdst ? 
(timezone - 3600) : timezone);\n>+#endif\n> \t\tif (tzn != NULL)\n> \t\t\t*tzn = tzname[(tm->tm_isdst > 0)];\n> #else\n>--- src/include/port/win.h.orig\tMon Jan 18 21:43:50 1999\n>+++ src/include/port/win.h\tSat Apr 17 10:45:24 1999\n>@@ -5,3 +5,7 @@\n> #ifndef O_DIROPEN\n> #define O_DIROPEN\t0x100000\t/* should be in sys/fcntl.h */\n> #endif\n>+\n>+#define tzname _tzname /* should be in time.h?*/\n>+#define USE_POSIX_TIME\n>+#define HAVE_INT_TIMEZONE /* has int _timezone */\n>\n>\n", "msg_date": "Mon, 26 Apr 1999 13:47:34 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Cygwin32 fix for current " } ]
[ { "msg_contents": "Oleg Broytmann wrote:\n\n>\n> Hello!\n>\n> VIEW on 6.4.2 ignores DISTINCT. Is it a bug? known?\n>\n\n It's a known missing feature (not a bug - more like a design\n fault).\n\n DISTINCT is implemented as a unique sort step taken over the\n final result of a query. Views are implemented via the query\n rewrite rule system. If now someone would define a DISTINCT\n view and selects a join of it with another table, the rewrite\n system cannot tell the planner that only the scan's resulting\n from the view should be sorted unique. It could only tell\n that the entire result should be DISTINCT - what's wrong - so\n I left it out.\n\n I'm planning to implement some kind of subquery rangetable\n entries someday. At that time, all these problems (DISTINCT,\n GROUP BY, ORDER BY) of views will disappear.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Mon, 26 Apr 1999 08:57:27 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] create view as select distinct (fwd)" }, { "msg_contents": "\n\nI assume this has not been fixed?\n\n\n\n> Hello!\n> \n> VIEW on 6.4.2 ignores DISTINCT. Is it a bug? 
known?\n> \n> Oleg.\n> ---- \n> Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n> Programmers don't die, they just GOSUB without RETURN.\n> \n> ---------- Forwarded message ----------\n> Date: Fri, 23 Apr 1999 13:33:00 +0400 (MSD)\n> From: Artem Chuprina <[email protected]>\n> To: Oleg Broytmann <[email protected]>\n> Subject: create view as select distinct\n> \n> pirit=> select distinct value_at from www_counter_store;\n> value_at\n> ----------\n> 04-22-1999\n> (1 row)\n> \n> pirit=> create view www_counter_store_dates as select distinct value_at from www_counter_store;\n> CREATE\n> pirit=> select * from www_counter_store_dates;\n> ----------\n> 04-22-1999\n> 04-22-1999\n> 04-22-1999\n> 04-22-1999\n> 04-22-1999\n> 04-22-1999\n> 04-22-1999\n> 04-22-1999\n> 04-22-1999\n> 04-22-1999\n> 04-22-1999\n> 04-22-1999\n> 04-22-1999\n> 04-22-1999\n> 04-22-1999\n> (15 rows)\n> \n> -- \n> Artem Chuprina E-mail: [email protected]\n> Network Administrator FIDO: 2:5020/371.32\n> PIRIT Corp. Phone: +7(095) 115-7101\n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 May 1999 12:19:49 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] create view as select distinct (fwd)" }, { "msg_contents": "\nPerhaps we should issue an error message if this is attempted until we\nget it working?\n\n\n> Oleg Broytmann wrote:\n> \n> >\n> > Hello!\n> >\n> > VIEW on 6.4.2 ignores DISTINCT. Is it a bug? known?\n> >\n> \n> It's a known missing feature (not a bug - more like a design\n> fault).\n> \n> DISTINCT is implemented as a unique sort step taken over the\n> final result of a query. Views are implemented via the query\n> rewrite rule system. 
If now someone would define a DISTINCT\n> view and selects a join of it with another table, the rewrite\n> system cannot tell the planner that only the scan's resulting\n> from the view should be sorted unique. It could only tell\n> that the entire result should be DISTINCT - what's wrong - so\n> I left it out.\n> \n> I'm planning to implement some kind of subquery rangetable\n> entries someday. At that time, all these problems (DISTINCT,\n> GROUP BY, ORDER BY) of views will disappear.\n> \n> \n> Jan\n> \n> --\n> \n> #======================================================================#\n> # It's easier to get forgiveness for being wrong than for being right. #\n> # Let's break this rule - forgive me. #\n> #======================================== [email protected] (Jan Wieck) #\n> \n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 May 1999 12:26:44 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] create view as select distinct (fwd)" }, { "msg_contents": "Bruce Momjian wrote:\n\n>\n>\n>\n> I assume this has not been fixed?\n>\n>\n>\n> > Hello!\n> >\n> > VIEW on 6.4.2 ignores DISTINCT. Is it a bug? known?\n> >\n> > Oleg.\n\n Right. This requires subselecting RTE's - one of the bigger\n TODO's after v6.5.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Mon, 10 May 1999 18:39:08 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] create view as select distinct (fwd)" } ]
[ { "msg_contents": "\n> > That's a tough one. Why are elog(NOTICE) being sent? Is there a way to\n> > buffer those instead?\n> \n> I thought about that, but gave it up when I realized that it doesn't\n> offer a solution to the elog(ERROR) case. The only way not to choke\n> for elog(ERROR) is not to start sending the data message until you've\n> constructed it completely --- or to have a way of aborting the partially\n> sent message, which is feasible for COPY OUT but not really practical\n> for SELECT data messages.\n> \nI think a NOTICE, could be handeled differently than ERROR, since by\ndefinition a NOTICE won't \"disturb\" the current transaction, while an\nERROR will do an automatic rollback. So I think the ERROR case should \nstop transmission to the client immediately and then send the ERROR.\nThis happens with other DBMS's e.g when a lock timeout has occurred,\nor the good old \"Snapshot too old\" happens. (unload files will be half\nfinished). \nThe NOTICE could probably be buffered until the end of current data.\n\nAndreas\n\n", "msg_date": "Mon, 26 Apr 1999 09:00:46 +0200", "msg_from": "Zeugswetter Andreas IZ5 <[email protected]>", "msg_from_op": true, "msg_subject": "AW: [HACKERS] Re: light dawns: serious bug in FE/BE protocol hand\n\tling" } ]
[ { "msg_contents": "Hi All!\n\nDoes someone know about any CASE\ntools for Linux for E-R data modeling?\n\nGreetings\n\nManieq\[email protected]\n", "msg_date": "Mon, 26 Apr 1999 09:01:17 +0200", "msg_from": "\"=?iso-8859-2?Q?Mariusz_Czu=B3ada?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "CASE tools? (slightly off-topic)" }, { "msg_contents": "Hi,\n\nYou can ask [email protected] about his Vinsent - E-R data\nmodelling. It has support for Informix and Postgres, it's free\nand requires python, tcl/tk to run. I don't have right now an URL\nfor Vinsent but if you interested I'll email you.\nI didn't tried Vinsent yet, installation requires tricking with various\ncomponents (tcl/tk, python etc). If anyone could port Vinsent to new\nversion of python, tcl/tk it would be great ! As far as I know Vinsent\nis the only Case tool available for free. Denis discontinues this project\nbut source code are available. I'm not python+tcl/tk expert.\n\n\n\tRegards,\n\n\t\tOleg\n\nOn Mon, 26 Apr 1999, [iso-8859-2] Mariusz Czułada wrote:\n\n> Date: Mon, 26 Apr 1999 09:01:17 +0200\n> From: \"[iso-8859-2] Mariusz Czułada\" <[email protected]>\n> To: PGSQL - interfaces <[email protected]>\n> Subject: [INTERFACES] CASE tools? 
(slightly off-topic)\n> \n> Hi All!\n> \n> Does someone know about any CASE\n> tools for Linux for E-R data modeling?\n> \n> Greetings\n> \n> Manieq\n> [email protected]\n> \n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n", "msg_date": "Mon, 26 Apr 1999 11:40:19 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] CASE tools? (slightly off-topic)" }, { "msg_contents": "Just found URL fro Vinsent - \nhttp://www.ccas.ru/~gurov/ftp/Editors/CASE/Vinsent/\nDocumentation is in Russian, but you can see some screenshots\nhttp://www.ccas.ru/~gurov/ftp/Editors/CASE/Vinsent/docs/\n\n\tRegards,\n\n\t\tOleg\n\n\nOn Mon, 26 Apr 1999, Oleg Bartunov wrote:\n\n> Date: Mon, 26 Apr 1999 11:40:19 +0400 (MSD)\n> From: Oleg Bartunov <[email protected]>\n> To: \"[iso-8859-2] Mariusz Czułada\" <[email protected]>\n> Cc: PGSQL - interfaces <[email protected]>,\n> [email protected]\n> Subject: Re: [INTERFACES] CASE tools? (slightly off-topic)\n> \n> Hi,\n> \n> You can ask [email protected] about his Vinsent - E-R data\n> modelling. It has support for Informix and Postgres, it's free\n> and requires python, tcl/tk to run. I don't have right now an URL\n> for Vinsent but if you interested I'll email you.\n> I didn't tried Vinsent yet, installation requires tricking with various\n> components (tcl/tk, python etc). If anyone could port Vinsent to new\n> version of python, tcl/tk it would be great ! As far as I know Vinsent\n> is the only Case tool available for free. Denis discontinues this project\n> but source code are available. 
I'm not python+tcl/tk expert.\n> \n> \n> \tRegards,\n> \n> \t\tOleg\n> \n> On Mon, 26 Apr 1999, [iso-8859-2] Mariusz Czułada wrote:\n> \n> > Date: Mon, 26 Apr 1999 09:01:17 +0200\n> > From: \"[iso-8859-2] Mariusz Czułada\" <[email protected]>\n> > To: PGSQL - interfaces <[email protected]>\n> > Subject: [INTERFACES] CASE tools? (slightly off-topic)\n> > \n> > Hi All!\n> > \n> > Does someone know about any CASE\n> > tools for Linux for E-R data modeling?\n> > \n> > Greetings\n> > \n> > Manieq\n> > [email protected]\n> > \n> > \n> \n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n> \n> \n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Mon, 26 Apr 1999 11:45:02 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] CASE tools? (slightly off-topic)" }, { "msg_contents": "Oleg Bartunov wrote:\n> \n> Just found URL fro Vinsent -\n> http://www.ccas.ru/~gurov/ftp/Editors/CASE/Vinsent/\n> Documentation is in Russian, but you can see some screenshots\n> http://www.ccas.ru/~gurov/ftp/Editors/CASE/Vinsent/docs/\n\n>\n> > On Mon, 26 Apr 1999, [iso-8859-2] Mariusz Czułada wrote:\n> > > Does someone know about any CASE\n> > > tools for Linux for E-R data modeling?\n\nHave a look at this:\n\nhttp://www.ics.uci.edu/pub/arch/uml/\n", "msg_date": "Mon, 26 Apr 1999 19:00:23 +0000", "msg_from": "Clark Evans <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] CASE tools? 
(slightly off-topic)" } ]
[ { "msg_contents": "Hi all,\n\n--I'm trying numeric and decimal types and I have a couple of\nquestions...\n\n\nCREATE TABLE Test (num NUMERIC(7,2), dec DECIMAL(7,2), flt8 FLOAT(15));\nCREATE\nINSERT INTO Test VALUES (1,1,1);\nINSERT 191083 1\nINSERT INTO Test VALUES (2.343,2.343,2.343);\nINSERT 191084 1\nINSERT INTO Test VALUES (-3.0,-3.0,-3.0);\nINSERT 191085 1\nselect * from test;\n num| dec| flt8\n-----+-----+-----\n 1.00| 1| 1\n 2.34|2.343|2.343\n-3.00| -3| -3\n(3 rows)\n\n--decimal has the same format of float instead of numeric.\n\n--what's the difference between decimal and numeric?\n--psql show both of them as numeric:\nprova=> \\d test\nTable = test\n+----------------------------------+----------------------------------+-------+\n\n| Field | Type |\nLength|\n+----------------------------------+----------------------------------+-------+\n\n| num | numeric\n| var |\n| dec | numeric\n| var |\n| flt8 | float8\n| 8 |\n+----------------------------------+----------------------------------+-------+\n\nSELECT flt8,CAST (flt8 AS numeric(5,3)), CAST (flt8 AS decimal(5,3))\nFROM Test;\n flt8|numeric|numeric\n-----+-------+-------\n 1| 1| 1\n2.343| 2.343| 2.343\n -3| -3| -3\n(3 rows)\n\n--Seems that CAST translates float to numeric even if I specify decimal.\n\n-- in reality the label says numeric but data has the decimal format\ninstead of numeric.\n\n--numeric and decimal doesn't support arithmetic operations with\nfloats...\n\nSELECT num-flt8, dec-flt8 FROM Test;\nERROR: Unable to identify an operator '-' for types 'numeric' and\n'float8'\n You will have to retype this query using an explicit cast\nSELECT num+flt8, dec+flt8 FROM Test;\nERROR: Unable to identify an operator '+' for types 'numeric' and\n'float8'\n You will have to retype this query using an explicit cast\nSELECT num*flt8, dec*flt8 FROM Test;\nERROR: Unable to identify an operator '*' for types 'numeric' and\n'float8'\n You will have to retype this query using an explicit cast\nSELECT num/flt8, 
dec/flt8 FROM Test;\nERROR: Unable to identify an operator '/' for types 'numeric' and\n'float8'\n You will have to retype this query using an explicit cast\nSELECT * FROM Test WHERE dec < flt8;\nERROR: Unable to identify an operator '<' for types 'numeric' and\n'float8'\n You will have to retype this query using an explicit cast\n\n--I create this function:\ncreate function dec_float8_lt(decimal,float8) returns bool as '\ndeclare\n f1 float8;\n f2 float8;\nbegin\n f1:= $1;\n f2:= $2;\n return (f1 < f2);\nend;\n' language 'plpgsql';\nCREATE\n\n--and I tried to create this operator.. but CREATE OPERATOR doesn't\nrecognize decimal/numeric keyword...\n\ncreate operator < (\n leftarg=decimal,\n rightarg=float8,\n procedure=dec_float8_lt\n );\nERROR: parser: parse error at or near \"decimal\"\n\nSELECT * FROM Test WHERE dec < flt8;\nERROR: Unable to identify an operator '<' for types 'numeric' and\n'float8'\n You will have to retype this query using an explicit cast\n\nselect dec_float8_lt(1.23,12.2);\ndec_float8_lt\n-------------\nt\n(1 row)\n\nI sent a report about this topic some weeks ago but I had no response.\n\nJos�\n\n\n\nHi all,\n--I'm trying numeric and decimal types and I have a couple of questions...\n \nCREATE TABLE Test (num NUMERIC(7,2), dec DECIMAL(7,2), flt8 FLOAT(15));\nCREATE\nINSERT INTO Test VALUES (1,1,1);\nINSERT 191083 1\nINSERT INTO Test VALUES (2.343,2.343,2.343);\nINSERT 191084 1\nINSERT INTO Test VALUES (-3.0,-3.0,-3.0);\nINSERT 191085 1\nselect * from test;\n  num|  dec| flt8\n-----+-----+-----\n 1.00|    1|    1\n 2.34|2.343|2.343\n-3.00|   -3|   -3\n(3 rows)\n--decimal has the same format of float instead of numeric.\n--what's the difference between decimal and numeric?\n--psql show both of them as numeric:\nprova=> \\d test\nTable    = test\n+----------------------------------+----------------------------------+-------+\n|             \nField              \n|             \nType               \n| 
Length|\n+----------------------------------+----------------------------------+-------+\n| num                             \n| numeric                         \n|   var |\n| dec                             \n| numeric                         \n|   var |\n| flt8                            \n| float8                          \n|     8 |\n+----------------------------------+----------------------------------+-------+\nSELECT flt8,CAST (flt8 AS numeric(5,3)), CAST (flt8 AS decimal(5,3))\nFROM Test;\n flt8|numeric|numeric\n-----+-------+-------\n    1|      1|     \n1\n2.343|  2.343|  2.343\n   -3|     -3|    \n-3\n(3 rows)\n--Seems that CAST translates float to numeric even if I specify decimal.\n-- in reality the label says numeric but data has the decimal format\ninstead of numeric.\n--numeric and decimal doesn't support arithmetic operations with floats...\nSELECT num-flt8, dec-flt8 FROM Test;\nERROR:  Unable to identify an operator '-' for types 'numeric'\nand 'float8'\n        You will have to retype\nthis query using an explicit cast\nSELECT num+flt8, dec+flt8 FROM Test;\nERROR:  Unable to identify an operator '+' for types 'numeric'\nand 'float8'\n        You will have to retype\nthis query using an explicit cast\nSELECT num*flt8, dec*flt8 FROM Test;\nERROR:  Unable to identify an operator '*' for types 'numeric'\nand 'float8'\n        You will have to retype\nthis query using an explicit cast\nSELECT num/flt8, dec/flt8 FROM Test;\nERROR:  Unable to identify an operator '/' for types 'numeric'\nand 'float8'\n        You will have to retype\nthis query using an explicit cast\nSELECT * FROM Test WHERE dec < flt8;\nERROR:  Unable to identify an operator '<' for types 'numeric'\nand 'float8'\n        You will have to retype\nthis query using an explicit cast\n--I create this function:\ncreate function dec_float8_lt(decimal,float8) returns bool as '\ndeclare\n        f1 float8;\n        f2 float8;\nbegin\n        f1:= $1;\n        f2:= $2;\n        return (f1 < 
f2);\nend;\n' language 'plpgsql';\nCREATE\n--and I tried to create this operator.. but CREATE OPERATOR doesn't\nrecognize decimal/numeric keyword...\ncreate operator < (\n        leftarg=decimal,\n        rightarg=float8,\n        procedure=dec_float8_lt\n        );\nERROR:  parser: parse error at or near \"decimal\"\nSELECT * FROM Test WHERE dec < flt8;\nERROR:  Unable to identify an operator '<' for types 'numeric'\nand 'float8'\n        You will have to retype\nthis query using an explicit cast\nselect dec_float8_lt(1.23,12.2);\ndec_float8_lt\n-------------\nt\n(1 row)\nI sent a report about this topic some weeks ago but I had no response.\nJosé", "msg_date": "Mon, 26 Apr 1999 10:59:58 +0200", "msg_from": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]>", "msg_from_op": true, "msg_subject": "decimal and numeric types" } ]
[ { "msg_contents": "\nI figured ppl would figure it out on their own, but to help things along,\nI've removed the old system-specfic .out files altogether and added a\nsystem.sh script...\n\nthe script outputs the output that regress.sh uses, in my case\n'i386-freebsd', to use for the .out files that are system specific...\n\nPlease submit appropriate patches for .out files for the various\nsystems...will be doing Solaris over the next few days...\n\nOn Sat, 17 Apr 1999, Tom Lane wrote:\n\n> \"Patrick Welche\" <[email protected]> writes:\n> > [ system-specific regress 'expected' files not getting found on NetBSD ]\n> >\n> >> Didn't someone just change that stuff to depend on config.guess instead\n> >> of PORTNAME?\n> >\n> > I changed SYSTEM back to uname -s, so \"int2\" and \"int4\" both worked again.\n> \n> This indicates that whoever modified regress.sh to use config.guess\n> instead of uname did a pretty incomplete job, ie, didn't rename all\n> the expected files appropriately.\n> \n> I still think that is a good change to make, but *not* if it's going to\n> be done in a half-baked fashion. We have to either finish the job or\n> revert to what we were doing before.\n> \n> My guess is that we will need a new configuration file in the regress\n> stuff to map config.guess outputs into 'expected' file names. Otherwise\n> we'll end up with many duplicate copies of the same 'expected' file for\n> various Unix variants...\n> \n> \t\t\tregards, tom lane\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 26 Apr 1999 10:39:17 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] regression output " } ]
[ { "msg_contents": "Psql displays twice tables, views, indices, sequences etc.\n\nprova=> \\d\nDatabase = prova\n +------------------+----------------------------------+----------+\n | Owner | Relation | Type |\n +------------------+----------------------------------+----------+\n | postgres | aggtest1 | table |\n | postgres | aggtest1 | table |\n | postgres | books | table |\n | postgres | books | table |\n | postgres | patrons | table |\n | postgres | patrons | table |\n | postgres | poll | table |\n | postgres | poll | table |\n | postgres | transactions | table |\n | postgres | transactions | table |\n | postgres | vbooks | view? |\n | postgres | vbooks | view? |\n +------------------+----------------------------------+----------+\n\nprova=> select version();\nversion\n-------------------------------------------------------------------\nPostgreSQL 6.5.0 on i586-pc-linux-gnulibc1, compiled by gcc 2.7.2.1\n(1 row)\n\nJosé\n", "msg_date": "Mon, 26 Apr 1999 15:40:43 +0200", "msg_from": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]>", "msg_from_op": true, "msg_subject": "psql bug ?" }, { "msg_contents": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]> writes:\n> Psql displays twice tables, views, indices, sequences etc.\n\nNot seeing that here... might be time for a rebuild and initdb?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Apr 1999 10:57:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql bug ? " }, { "msg_contents": "> =?iso-8859-1?Q?Jos=E9?= Soares <[email protected]> writes:\n> > Psql displays twice tables, views, indices, sequences etc.\n> \n> Not seeing that here... might be time for a rebuild and initdb?\n> \n\nOr perhaps there are two entries in pg_user/pg_shadow for the same name.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 26 Apr 1999 11:18:06 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql bug ?" }, { "msg_contents": "\n\nBruce Momjian ha scritto:\n\n> > =?iso-8859-1?Q?Jos=E9?= Soares <[email protected]> writes:\n> > > Psql displays twice tables, views, indices, sequences etc.\n> >\n> > Not seeing that here... might be time for a rebuild and initdb?\n> >\n>\n> Or perhaps there are two entries in pg_user/pg_shadow for the same name.\n>\n\nYes, this was the cause. 
I don't know how but I had two users with the same\nname.\nThank you.\nJosé\n\n\n", "msg_date": "Tue, 27 Apr 1999 09:19:20 +0200", "msg_from": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] psql bug ?" } ]
[ { "msg_contents": "On Sat, 17 Apr 1999, Chris Bitmead wrote:\n\n> \n> I'm not sure what you're getting at. Yep, you can include the oid field\n> if you rename it, but it would be nice if you could leave it alone.\n> \n> A typical scenario is that you create some table and start using it.\n> Then you find you need some derived field (like quantity*price AS total)\n> or something. So you may rename say product table to productold, and\n> create a product view that is SELECT *, quantity*price AS total from\n> productold.\n> \n> The problem then arises if your code uses oid, because a view can't have\n> a field called oid. I'm advocating that you be allowed to create views\n> that have a field called oid to avoid this problem.\n\nAs D'Arcy did ask...which oid would you want used? The one from table a,\nor from Table b? They are two distinctly different numbers...the VIEW\nitself doesn't have an OID assigned to its rows, only the physical tables\nthemselves...\n\n > > \"D'Arcy J.M. Cain\" wrote:\n> > \n> > Thus spake Chris Bitmead\n> > > It would be much better if you could have an oid column in a view if you\n> > > want. Like\n> > > CREATE VIEW productv AS SELECT oid, * FROM product;\n> > >\n> > > But that's not allowed. Any reason why?\n> > \n> > Because the oid is not included in the view. Consider the following.\n> > \n> > CREATE VIEW c AS SELECT a1, a2, b1, b2 FROM a, b WHERE a_key = b_key;\n> > \n> > So which oid do you want, the one from table a or the one from table b?\n> > You can, however, do this.\n> > \n> > CREATE VIEW c AS SELECT a.oid AS a_oid, b.oid AS b_oid, a1, a2, b1, b2\n> > FROM a, b WHERE a_key = b_key;\n> > \n> > --\n> > D'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\n> > http://www.druid.net/darcy/ | and a sheep voting on\n> > +1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n> \n> -- \n> Chris Bitmead\n> http://www.bigfoot.com/~chris.bitmead\n> mailto:[email protected]\n> \n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 26 Apr 1999 11:38:05 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] It would be nice if this could be fixed..." }, { "msg_contents": "Marc G. Fournier wrote:\n\n>\n> On Sat, 17 Apr 1999, Chris Bitmead wrote:\n>\n> >\n> > I'm not sure what you're getting at. Yep, you can include the oid field\n> > if you rename it, but it would be nice if you could leave it alone.\n> >\n> > A typical scenario is that you create some table and start using it.\n> > Then you find you need some derived field (like quantity*price AS total)\n> > or something. So you may rename say product table to productold, and\n> > create a product view that is SELECT *, quantity*price AS total from\n> > productold.\n> >\n> > The problem then arises if your code uses oid, because a view can't have\n> > a field called oid. I'm advocating that you be allowed to create views\n> > that have a field called oid to avoid this problem.\n>\n> As D'Arcy did ask...which oid would you want used? The one from table a,\n> or from Table b? They are two distinctly different numbers...the VIEW\n> itself doesn't have an OID assigned to its rows, only the physical tables\n> themselves...\n\n Not exactly, because in his example there is only one table\n used in the view. But I wonder what an OID from a view might\n be good for? Under normal conditions, the OID is only good to\n UPDATE/DELETE something that was first SELECTed and later\n qualified by the application. But this is BAD design,\n because any system attribute is DB specific and leads to\n application portability problems. 
In any case, the primary\n key should be used instead of a DB specific row identifier.\n So the need of OID tells IMHO some insufficient database\n layout.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Mon, 26 Apr 1999 16:54:11 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] It would be nice if this could be fixed..." }, { "msg_contents": "On Mon, 26 Apr 1999, Jan Wieck wrote:\n\n> Marc G. Fournier wrote:\n> \n> >\n> > On Sat, 17 Apr 1999, Chris Bitmead wrote:\n> >\n> > >\n> > > I'm not sure what you're getting at. Yep, you can include the oid field\n> > > if you rename it, but it would be nice if you could leave it alone.\n> > >\n> > > A typical scenario is that you create some table and start using it.\n> > > Then you find you need some derived field (like quantity*price AS total)\n> > > or something. So you may rename say product table to productold, and\n> > > create a product view that is SELECT *, quantity*price AS total from\n> > > productold.\n> > >\n> > > The problem then arises if your code uses oid, because a view can't have\n> > > a field called oid. I'm advocating that you be allowed to create views\n> > > that have a field called oid to avoid this problem.\n> >\n> > As D'Arcy did ask...which oid would you want used? The one from table a,\n> > or from Table b? They are two distinctly different numbers...the VIEW\n> > itself doesn't have an OID assigned to its rows, only the physical tables\n> > themselves...\n> \n> Not exactly, because in his example there is only one table\n> used in the view. But I wonder what an OID from a view might\n\nWait, I thought his SELECT had a 'FROM a,b' clause in it...no? 
*raised\neyebrow* If not, I misread, apologies...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 26 Apr 1999 13:39:44 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] It would be nice if this could be fixed..." }, { "msg_contents": "The Hermit Hacker wrote:\n\n> As D'Arcy did ask...which oid would you want used? The one from table a,\n> or from Table b? \n\nJust like any situation where column names conflict, the answer is\n\"whichever one I say\".\n\nIf I have a join then I would say\nCREATE view productv as SELECT product.oid, product.name, mfr.name from\nproduct, mfr where product.mfr = mfr.oid;\n\nThis is no different from any other case where you join two tables with\nsame column names. Only difference is that it doesn't work :-(.\n\n\n>They are two distinctly different numbers...the VIEW\n> itself doesn't have an OID assigned to its rows,\n\nExactly, so why prevent the user having a column called \"oid\"?\n\n only the physical tables\n> themselves...\n> \n> > > \"D'Arcy J.M. Cain\" wrote:\n> > >\n> > > Thus spake Chris Bitmead\n> > > > It would be much better if you could have an oid column in a view if you\n> > > > want. Like\n> > > > CREATE VIEW productv AS SELECT oid, * FROM product;\n> > > >\n> > > > But that's not allowed. Any reason why?\n> > >\n> > > Because the oid is not included in the view. Consider the following.\n> > >\n> > > CREATE VIEW c AS SELECT a1, a2, b1, b2 FROM a, b WHERE a_key = b_key;\n> > >\n> > > So which oid do you want, the one from table a or the one from table b?\n> > > You can, however, do this.\n> > >\n> > > CREATE VIEW c AS SELECT a.oid AS a_oid, b.oid AS b_oid, a1, a2, b1, b2\n> > > FROM a, b WHERE a_key = b_key;\n> > >\n> > > --\n> > > D'Arcy J.M. 
Cain <darcy@{druid|vex}.net> | Democracy is three wolves\n> > > http://www.druid.net/darcy/ | and a sheep voting on\n> > > +1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n> >\n> > --\n> > Chris Bitmead\n> > http://www.bigfoot.com/~chris.bitmead\n> > mailto:[email protected]\n> >\n> \n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org\n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n\n-- \nChris Bitmead\nhttp://www.bigfoot.com/~chris.bitmead\nmailto:[email protected]\n", "msg_date": "Tue, 27 Apr 1999 10:07:41 +0000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] It would be nice if this could be fixed..." }, { "msg_contents": "Jan Wieck wrote:\n\n> Not exactly, because in his example there is only one table\n> used in the view. But I wonder what an OID from a view might\n> be good for? \n\nThe problem with postgres, unlike other object models, is that you can't\nadd methods to objects, except by creating a new \"object\" called a view.\n(Well I suppose you can write functions or something, but it's not\ninvisible to the user like a view).\n\nSo users start using base tables and their oids and doing SELECTs. Then\nsomeone realises they need a \"method\" (like quantity * price AS total or\nsomething), so they make a view, and they want to start using the view.\nBut they want to avoid changing references to \"oid\" to some new name in\nthe view.\n\n\n> Under normal conditions, the OID is only good to\n> UPDATE/DELETE something that was first SELECTed and later\n> qualified by the application. But this is BAD design,\n> because any system attribute is DB specific and leads to\n> application portability problems.\n\nA unique identifier for an object is NOT Db specific in the object\ndatabase ODMG world. 
I want to use Postgres like a bad Object database,\nnot like a good RDBMS.\n\nI'd like to put up a web page soon to list what needs to be done to\nPostgres in order for it to support the Object Database Management Group\n(ODMG) standard. The basic answer is \"not a lot\", but there are a few\nthings. One thing to understand is that for an object database, the oid\nis absolutely fundamental.\n\nAnyway, Postgres is portable, so by extension my app is portable if I\nuse it.\n\n-- \nChris Bitmead\nhttp://www.bigfoot.com/~chris.bitmead\nmailto:[email protected]\n", "msg_date": "Tue, 27 Apr 1999 10:18:00 +0000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] It would be nice if this could be fixed..." }, { "msg_contents": "=================================================================\n POSTGRESQL BUG REPORT TEMPLATE\n=================================================================\n\n\nYour name : Chris Bitmead\t\nYour email address : [email protected]\n\n\nSystem Configuration\n---------------------\n Architecture : Intel x86\n\n Operating System : Linux 2.0.36\n\n PostgreSQL version : Latest Snapshot as at May 2, 1999\n\n Compiler used : gcc 2.7.2.3\n\n\nPlease enter a FULL description of your problem:\n------------------------------------------------\n\nCOALESCE sql function causes postgres to CRASH!\n\ne.g.\n\nSELECT story.title,story.image, mfr.image FROM story, mfr where\nstory.category= mfr.oid;\ntitle |image |image \n--------------+------------------+--------------------\nCanon |/icon/critique.jpg|/icon/canon.gif \nNikon | |/icon/nikon.gif \nOlympus | |/icon/olympus.gif \nNew Arca | |/icon/arca-swiss.gif\nNew Hasselblad| |/icon/hasselblad.gif\n(5 rows)\n\nhttpd=> SELECT story.title, COALESCE(story.image, mfr.image) FROM story,\nmfr where story.category= mfr.oid;\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing 
the request.\nWe have lost the connection to the backend, so further processing is\nimpossible. Terminating.\n", "msg_date": "Sun, 02 May 1999 13:35:08 +0000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] It would be nice if this could be fixed..." }, { "msg_contents": "> COALESCE sql function causes postgres to CRASH!\n> httpd=> SELECT story.title, COALESCE(story.image, mfr.image)\n> httpd-> FROM story, mfr where story.category= mfr.oid;\n\nThis is a known problem which I was hoping someone would pick up and\ntry to fix. Not sure I'll have time to look at it before v6.5 is\nreleased.\n\nThe problem is in combining columns from multiple tables in the\nCOALESCE result. There are commented-out examples in the regression\ntest which illustrate the \"feature\". Other features of COALESCE seem\nto work OK...\n\n - Tom\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Mon, 03 May 1999 17:04:29 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] It would be nice if this could be fixed..." }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> COALESCE sql function causes postgres to CRASH!\n>> httpd=> SELECT story.title, COALESCE(story.image, mfr.image)\n>> httpd-> FROM story, mfr where story.category= mfr.oid;\n\n> The problem is in combining columns from multiple tables in the\n> COALESCE result.\n\nI see at least part of the problem: flatten_tlistentry forgets to\nrecurse into the 'expr' part of a CaseWhen node. There may be some\nother contributing bugs in setrefs.c.\n\nThere are dozens of routines in the backend that know all about how to\nwalk a parse tree --- or, in some cases like this one, not quite all\nabout how to walk a parse tree :-(. 
I just spent some time yesterday\nteaching a couple of other routines about ArrayRef nodes, for example,\nand I've seen way too many other bugs of exactly this ilk.\n\nI think it'd be a good idea to try to centralize this knowledge so that\nthere are fewer places to change to add a new node type. For example,\na routine that wants to examine all the Var nodes in a tree should be\nable to look something like this:\n\n\tif (IsA(node, Var))\n\t{\n\t\tprocess var node;\n\t}\n\telse\n\t\tstandard_tree_walker(node, myself, ...);\n\nrather than having another copy of a bunch of error-prone boilerplate.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 May 1999 12:12:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] It would be nice if this could be fixed... " }, { "msg_contents": "> I think it'd be a good idea to try to centralize this knowledge so that\n> there are fewer places to change to add a new node type. For example,\n> a routine that wants to examine all the Var nodes in a tree should be\n> able to look something like this:\n> \n> \tif (IsA(node, Var))\n> \t{\n> \t\tprocess var node;\n> \t}\n> \telse\n> \t\tstandard_tree_walker(node, myself, ...);\n> \n> rather than having another copy of a bunch of error-prone boilerplate.\n\nThat is an interesting idea. The current code clearly needs cleanup and\nis error-prone.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 6 May 1999 23:05:10 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] It would be nice if this could be fixed..." }, { "msg_contents": "\nChris, any chance you can send a small reproducable test case for this,\nwith INSERT's and CREATE table. 
Thanks.\n\n\n\n> =================================================================\n> POSTGRESQL BUG REPORT TEMPLATE\n> =================================================================\n> \n> \n> Your name : Chris Bitmead\t\n> Your email address : [email protected]\n> \n> \n> System Configuration\n> ---------------------\n> Architecture : Intel x86\n> \n> Operating System : Linux 2.0.36\n> \n> PostgreSQL version : Latest Snapshot as at May 2, 1999\n> \n> Compiler used : gcc 2.7.2.3\n> \n> \n> Please enter a FULL description of your problem:\n> ------------------------------------------------\n> \n> COALESCE sql function causes postgres to CRASH!\n> \n> e.g.\n> \n> SELECT story.title,story.image, mfr.image FROM story, mfr where\n> story.category= mfr.oid;\n> title |image |image \n> --------------+------------------+--------------------\n> Canon |/icon/critique.jpg|/icon/canon.gif \n> Nikon | |/icon/nikon.gif \n> Olympus | |/icon/olympus.gif \n> New Arca | |/icon/arca-swiss.gif\n> New Hasselblad| |/icon/hasselblad.gif\n> (5 rows)\n> \n> httpd=> SELECT story.title, COALESCE(story.image, mfr.image) FROM story,\n> mfr where story.category= mfr.oid;\n> pqReadData() -- backend closed the channel unexpectedly.\n> This probably means the backend terminated abnormally\n> before or while processing the request.\n> We have lost the connection to the backend, so further processing is\n> impossible. Terminating.\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 May 1999 12:52:57 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] It would be nice if this could be fixed..." }, { "msg_contents": "> Chris, any chance you can send a small reproducable test case for this,\n> with INSERT's and CREATE table. 
Thanks.\n> > COALESCE sql function causes postgres to CRASH!\n> > e.g.\n> > httpd=> SELECT story.title, COALESCE(story.image, mfr.image) FROM story,\n> > mfr where story.category= mfr.oid;\n\nNot necessary. This was a known problem documented in the regression\ntests, and Tom Lane just fixed it a day or two ago. The problem was\nwith including more than one table in a COALESCE or CASE expression\nresult.\n\n - Tom\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Mon, 10 May 1999 17:33:13 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] It would be nice if this could be fixed..." }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Chris, any chance you can send a small reproducable test case for this,\n> with INSERT's and CREATE table. Thanks.\n\nSure. Here it is....\n\n\nhttpd=> create table aaa(a text);\nCREATE\nhttpd=> create table bbb(b text);\nCREATE\nhttpd=> select coalesce(a,b) from aaa,bbb;\ncase\n----\n(0 rows)\n\nhttpd=> insert into aaa values('aaa');\nINSERT 84818 1\nhttpd=> insert into bbb values('bbb');\nINSERT 84819 1\nhttpd=> select coalesce(a,b) from aaa,bbb;\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nWe have lost the connection to the backend, so further processing is\nimpossible. 
Terminating.\n\n\n\n\n \n> > =================================================================\n> > POSTGRESQL BUG REPORT TEMPLATE\n> > =================================================================\n> >\n> >\n> > Your name : Chris Bitmead\n> > Your email address : [email protected]\n> >\n> >\n> > System Configuration\n> > ---------------------\n> > Architecture : Intel x86\n> >\n> > Operating System : Linux 2.0.36\n> >\n> > PostgreSQL version : Latest Snapshot as at May 2, 1999\n> >\n> > Compiler used : gcc 2.7.2.3\n> >\n> >\n> > Please enter a FULL description of your problem:\n> > ------------------------------------------------\n> >\n> > COALESCE sql function causes postgres to CRASH!\n> >\n> > e.g.\n> >\n> > SELECT story.title,story.image, mfr.image FROM story, mfr where\n> > story.category= mfr.oid;\n> > title |image |image\n> > --------------+------------------+--------------------\n> > Canon |/icon/critique.jpg|/icon/canon.gif\n> > Nikon | |/icon/nikon.gif\n> > Olympus | |/icon/olympus.gif\n> > New Arca | |/icon/arca-swiss.gif\n> > New Hasselblad| |/icon/hasselblad.gif\n> > (5 rows)\n> >\n> > httpd=> SELECT story.title, COALESCE(story.image, mfr.image) FROM story,\n> > mfr where story.category= mfr.oid;\n> > pqReadData() -- backend closed the channel unexpectedly.\n> > This probably means the backend terminated abnormally\n> > before or while processing the request.\n> > We have lost the connection to the backend, so further processing is\n> > impossible. Terminating.\n> >\n> >\n> \n> --\n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 May 1999 23:54:16 +0000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] It would be nice if this could be fixed..." 
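For readers following the bug report: COALESCE returns the first of its arguments that is not NULL, and the crash above was triggered only when those arguments came from two different tables. A minimal Python model of the intended semantics (`coalesce` here is an illustrative helper, with `None` standing in for SQL NULL — not a PostgreSQL API):

```python
def coalesce(*args):
    """Return the first argument that is not None (SQL NULL), else None."""
    for a in args:
        if a is not None:
            return a
    return None

# Mirrors the two-table example above: story.image is NULL for most rows,
# so the manufacturer's image is used as a fallback.
rows = [("/icon/critique.jpg", "/icon/canon.gif"),
        (None, "/icon/nikon.gif")]
print([coalesce(a, b) for a, b in rows])
# → ['/icon/critique.jpg', '/icon/nikon.gif']
```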
}, { "msg_contents": "\nWorks now, thanks to Tom Lane:\n\n\n\ttest=> create table aaa(a text);\n\tCREATE\n\ttest=> create table bbb(b text);\n\tCREATE\n\ttest=> select coalesce(a,b) from aaa,bbb;\n\tcase\n\t----\n\t(0 rows)\n\t\n\ttest=> insert into aaa values('aaa');\n\tINSERT 19090 1\n\ttest=> insert into bbb values('bbb');\n\tINSERT 19091 1\n\ttest=> select coalesce(a,b) from aaa,bbb;\n\tcase\n\t----\n\taaa \n\t(1 row)\n\n\n\n> Bruce Momjian wrote:\n> > \n> > Chris, any chance you can send a small reproducable test case for this,\n> > with INSERT's and CREATE table. Thanks.\n> \n> Sure. Here it is....\n> \n> \n> httpd=> create table aaa(a text);\n> CREATE\n> httpd=> create table bbb(b text);\n> CREATE\n> httpd=> select coalesce(a,b) from aaa,bbb;\n> case\n> ----\n> (0 rows)\n> \n> httpd=> insert into aaa values('aaa');\n> INSERT 84818 1\n> httpd=> insert into bbb values('bbb');\n> INSERT 84819 1\n> httpd=> select coalesce(a,b) from aaa,bbb;\n> pqReadData() -- backend closed the channel unexpectedly.\n> This probably means the backend terminated abnormally\n> before or while processing the request.\n> We have lost the connection to the backend, so further processing is\n> impossible. 
Terminating.\n> \n> \n> \n> \n> \n> > > =================================================================\n> > > POSTGRESQL BUG REPORT TEMPLATE\n> > > =================================================================\n> > >\n> > >\n> > > Your name : Chris Bitmead\n> > > Your email address : [email protected]\n> > >\n> > >\n> > > System Configuration\n> > > ---------------------\n> > > Architecture : Intel x86\n> > >\n> > > Operating System : Linux 2.0.36\n> > >\n> > > PostgreSQL version : Latest Snapshot as at May 2, 1999\n> > >\n> > > Compiler used : gcc 2.7.2.3\n> > >\n> > >\n> > > Please enter a FULL description of your problem:\n> > > ------------------------------------------------\n> > >\n> > > COALESCE sql function causes postgres to CRASH!\n> > >\n> > > e.g.\n> > >\n> > > SELECT story.title,story.image, mfr.image FROM story, mfr where\n> > > story.category= mfr.oid;\n> > > title |image |image\n> > > --------------+------------------+--------------------\n> > > Canon |/icon/critique.jpg|/icon/canon.gif\n> > > Nikon | |/icon/nikon.gif\n> > > Olympus | |/icon/olympus.gif\n> > > New Arca | |/icon/arca-swiss.gif\n> > > New Hasselblad| |/icon/hasselblad.gif\n> > > (5 rows)\n> > >\n> > > httpd=> SELECT story.title, COALESCE(story.image, mfr.image) FROM story,\n> > > mfr where story.category= mfr.oid;\n> > > pqReadData() -- backend closed the channel unexpectedly.\n> > > This probably means the backend terminated abnormally\n> > > before or while processing the request.\n> > > We have lost the connection to the backend, so further processing is\n> > > impossible. Terminating.\n> > >\n> > >\n> > \n> > --\n> > Bruce Momjian | http://www.op.net/~candle\n> > [email protected] | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 May 1999 20:09:23 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] It would be nice if this could be fixed..." } ]
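Tom Lane's suggestion in this thread — centralize parse-tree traversal so each routine handles only the node types it cares about and delegates the rest to a standard walker — can be sketched outside the backend. The node classes below are hypothetical stand-ins for illustration, not PostgreSQL's actual parse-tree structures:

```python
class Node:
    def children(self):
        return []

class Var(Node):
    def __init__(self, name):
        self.name = name

class CaseWhen(Node):
    # The original bug: some hand-rolled walks forgot to descend into 'expr'.
    def __init__(self, expr, result):
        self.expr, self.result = expr, result
    def children(self):
        return [self.expr, self.result]

def standard_tree_walker(node, visitor):
    """Apply visitor to every node; visitors need not know each node's shape."""
    visitor(node)
    for child in node.children():
        standard_tree_walker(child, visitor)

# Collect all Var names, including the one hidden inside CaseWhen.expr.
found = []
standard_tree_walker(CaseWhen(Var("story.image"), Var("mfr.image")),
                     lambda n: found.append(n.name) if isinstance(n, Var) else None)
```

The crash fixed above was of exactly this shape: per Tom Lane's diagnosis, `flatten_tlistentry` had its own incomplete walk and missed the `expr` part of a CaseWhen node.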
[ { "msg_contents": "Taral wrote:\n> mico 2.2.6 has sufficient services support for CORBA to proceed. \n\nI know some people thinks mico is great but I have some concerns:\n* I can't even get it to compile on my workstation at work, upgraded\n both g++ to egcs and libg++/libstd++. Still it does not compile\n cleanly, and it is easy to track the problems in C++ (not my favoite)\n and strange #ifdefs. I think I have got some working version at home,\n will check.\n* BIG! \"make -k\" takes forever. \n* Compiling the output stubs of MICO require a machine with at least\n64MB RAM\n (hearsay). \n* requires C++ (and very specific versions too). This a new requirement\n for compiling PgSQL. (what state are the C-bindings in theses days?)\n\nI am afraid that all this is going to put and end to the happy\n\"fetch src and compile on any favorite old platform\" way of life\nin the PgSQL-community. \n\nAs long as these points are not resolved I think Tom L. will have to\ncontinue fixing the old FE/BE-protocol, since CORBA will have to be\n(very) optional on most platforms.\n\n>[...]\n> what I have now is not pretty). If anyone else wants to\n> work with me on this, it might get done sooner. Otherwise, my research\n> project takes precedence.\n\nWhy not put what you have in the CVS? The files in src/corba is pretty\nold.\nI have planned to get into this because did some coding around the \nFE/BE-protocol and would love to see a clean object-based communication\nin PgSQL instead the current one. Let us see what you have got and what\nis\nstill missing in more lightweight alternatives to MICO.\n \nbest regards,\nG�ran.\n", "msg_date": "Mon, 26 Apr 1999 17:52:02 +0200", "msg_from": "Goran Thyni <[email protected]>", "msg_from_op": true, "msg_subject": "CORBA again. (was: light dawns: serious bug in FE/BE protocol\n\thandling)" }, { "msg_contents": "On Mon, 26 Apr 1999, Goran Thyni wrote:\n\n> Taral wrote:\n> > mico 2.2.6 has sufficient services support for CORBA to proceed. 
\n> \n> I know some people thinks mico is great but I have some concerns:\n\nWe are explicitly designing the system to allow any CORBA 2.2\nimplementation with the C++ mapping supported to be used.\n\nTaral\n\n", "msg_date": "Mon, 26 Apr 1999 13:06:37 -0500 (CDT)", "msg_from": "Taral <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CORBA again. (was: light dawns: serious bug in FE/BE protocol\n\thandling)" }, { "msg_contents": "On Mon, 26 Apr 1999, Goran Thyni wrote:\n\n> Why not put what you have in the CVS? The files in src/corba is pretty\n> old. I have planned to get into this because did some coding around\n> the FE/BE-protocol and would love to see a clean object-based\n> communication in PgSQL instead the current one. Let us see what you\n> have got and what is still missing in more lightweight alternatives to\n> MICO.\n\nWhat I have is not worth putting in the CVS, plus it is not finalized, nor\nis it working in any real way. (It's just a set of IDL files, attached.)\n\nThe most important thing we need support for is CORBA 2.2 itself, which\nwhen I started this, was only supported by CORBA. Also, I would like\nsupport for the LifeCycle service (I think), the Security service, the\nTransaction service, and maybe others I can't think of right now.\n\nTaral\n\n", "msg_date": "Mon, 26 Apr 1999 13:14:04 -0500 (CDT)", "msg_from": "Taral <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CORBA again. (was: light dawns: serious bug in FE/BE protocol\n\thandling)" }, { "msg_contents": "On Mon, 26 Apr 1999, Taral wrote:\n\n> We are explicitly designing the system to allow any CORBA 2.2\n> implementation with the C++ mapping supported to be used.\n\nGiven that PostgreSQL is a C project, why not use a C ORB like ORBit?\nIt's a whole lot faster than MICO...\n\n--\nTodd Graham Lewis Postmaster, MindSpring Enterprises\[email protected] (800) 719-4664, x22804\n\n\"A pint of sweat will save a gallon of blood.\" -- George S. 
Patton\n\n", "msg_date": "Mon, 26 Apr 1999 14:16:49 -0400 (EDT)", "msg_from": "Todd Graham Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: CORBA again. (was: light dawns: serious bug in\n\tFE/BE protocol handling)" }, { "msg_contents": "On Mon, 26 Apr 1999, Taral wrote:\n\n> The most important thing we need support for is CORBA 2.2 itself, which\n> when I started this, was only supported by CORBA. Also, I would like\n> support for the LifeCycle service (I think), the Security service, the\n> Transaction service, and maybe others I can't think of right now.\n\nI am almost done with the first version of the GNU Transaction Server.\nIt uses ORBit for its own needs, but there's no reason you couldn't use\nother ORBs as clients if they were appropriately modified. I plan on\nso modifying ORBit.\n\n--\nTodd Graham Lewis Postmaster, MindSpring Enterprises\[email protected] (800) 719-4664, x22804\n\n\"A pint of sweat will save a gallon of blood.\" -- George S. Patton\n\n", "msg_date": "Mon, 26 Apr 1999 14:18:20 -0400 (EDT)", "msg_from": "Todd Graham Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: CORBA again. (was: light dawns: serious bug in\n\tFE/BE protocol handling)" }, { "msg_contents": "On Mon, 26 Apr 1999, Taral wrote:\n\n> What I have is not worth putting in the CVS, plus it is not finalized, nor\n> is it working in any real way. (It's just a set of IDL files, attached.)\n\nHelps if I attach them.\n\nTaral", "msg_date": "Mon, 26 Apr 1999 13:20:36 -0500 (CDT)", "msg_from": "Taral <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: CORBA again. 
(was: light dawns: serious bug in\n\tFE/BE protocol handling)" }, { "msg_contents": "On Mon, 26 Apr 1999, Todd Graham Lewis wrote:\n\n> Given that PostgreSQL is a C project, why not use a C ORB like ORBit?\n> It's a whole lot faster than MICO...\n\nDoes it fully support 2.2?\n\nTaral\n\n", "msg_date": "Mon, 26 Apr 1999 13:20:55 -0500 (CDT)", "msg_from": "Taral <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: CORBA again. (was: light dawns: serious bug in\n\tFE/BE protocol handling)" }, { "msg_contents": "On Mon, 26 Apr 1999, Todd Graham Lewis wrote:\n\n> I am almost done with the first version of the GNU Transaction Server.\n> It uses ORBit for its own needs, but there's no reason you couldn't use\n> other ORBs as clients if they were appropriately modified. I plan on\n> so modifying ORBit.\n\nPostgreSQL uses a Berkeley license. That's one of the (many) reasons we\ndon't want to bundle any ORB into the package.\n\nDo we still have problems depending on GPL code?\n\nTaral\n\n", "msg_date": "Mon, 26 Apr 1999 13:25:07 -0500 (CDT)", "msg_from": "Taral <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: CORBA again. (was: light dawns: serious bug in\n\tFE/BE protocol handling)" }, { "msg_contents": ">>>>> \"t\" == Taral <[email protected]>\n>>>>> wrote the following on Mon, 26 Apr 1999 13:20:55 -0500 (CDT)\n\nt> On Mon, 26 Apr 1999, Todd Graham Lewis wrote:\n>> Given that PostgreSQL is a C project, why not use a C ORB like ORBit?\n>> It's a whole lot faster than MICO...\n\nt> Does it fully support 2.2?\n\nyes. And i've written a multi database package for the Gnome Project\nusing ORbit. Take a look at <http://www.lausch.at/gda> to find the\ndocumentation. You'll get the source from the gnome ftp server. 
It's\ncalled gnome-db and postgres is already a supported database.\n\nt> Taral\n--\nMichael Lausch\nSee my web page <http://www.lausch.at/> or query PGP key server for PGP key.\n\"Reality is that which, when you stop believing in it, doesn't go away\".\n -- Philip K. Dick\n", "msg_date": "Mon, 26 Apr 1999 20:40:00 +0200", "msg_from": "Michael Lausch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: CORBA again. (was: light dawns: serious bug in\n\tFE/BE protocol handling)" }, { "msg_contents": "On Mon, 26 Apr 1999, Taral wrote:\n\n> PostgreSQL uses a Berkeley license. That's one of the (many) reasons we\n> don't want to bundle any ORB into the package.\n> \n> Do we still have problems depending on GPL code?\n\nORBit is LGPL'd, so there's no license conflict. And you'd just be\nlinking against it, not really \"including\" it, depending on how formal\nyou want to be about it. The last time we talked about this, I think\nit was announced that ORBit would be acceptable, but that was a while\nago. Are there searchable archives of the Postgres list?\n\n--\nTodd Graham Lewis Postmaster, MindSpring Enterprises\[email protected] (800) 719-4664, x22804\n\n\"A pint of sweat will save a gallon of blood.\" -- George S. Patton\n\n", "msg_date": "Mon, 26 Apr 1999 14:45:51 -0400 (EDT)", "msg_from": "Todd Graham Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: CORBA again. 
(was: light dawns: serious bug in\n\tFE/BE protocol handling)" }, { "msg_contents": "\nI use KDE at home, as well as at work...KDE uses MICO...\n\nThe CORBA implementation, as discussed before, is going to have to be\n'wrappered' so that it isn't tied to any particular implementation...\n\nTaral was, I believe, planning on starting with MICO and then letting those\nusing other implementations build up from there...\n\n\nOn Mon, 26 Apr 1999, Michael Lausch wrote:\n\n> >>>>> \"t\" == Taral <[email protected]>\n> >>>>> wrote the following on Mon, 26 Apr 1999 13:20:55 -0500 (CDT)\n> \n> t> On Mon, 26 Apr 1999, Todd Graham Lewis wrote:\n> >> Given that PostgreSQL is a C project, why not use a C ORB like ORBit?\n> >> It's a whole lot faster than MICO...\n> \n> t> Does it fully support 2.2?\n> \n> yes. And i've written a multi database package for the Gnome Project\n> using ORbit. Take a look at <http://www.lausch.at/gda> to find the\n> documentation. You'll get the source from the gnome ftp server. It's\n> called gnome-db and postgres is already a supported database.,\n> \n> t> Taral\n> --\n> Michael Lausch\n> See my web page <http://www.lausch.at/> or query PGP key server for PGP key.\n> \"Reality is that which, when you stop believing in it, doesn't go away\".\n> -- Philip K. Dick\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 26 Apr 1999 23:42:42 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: CORBA again. (was: light dawns: serious bug in\n\tFE/BE protocol handling)" }, { "msg_contents": "Well, I don't mind the idea of using ORBit, since it _is_ smaller. I'm\nlooking into it now. However, I do intend to wrapper the whole lot. 
The\nonly thing is to pick a required mapping, and I said (way back then) that\nI preferred C since it had substantially better handling of unions and the\nlike.\n\nTaral\n\nOn Mon, 26 Apr 1999, The Hermit Hacker wrote:\n\n> \n> I use KDE at home, as well as at work...KDE uses MICO...\n> \n> The CORBA implemetaion, as discussed before, is going to have to be\n> 'wrappered' so that it isn't tied to any particular implementation...\n> \n> Taral was, I believe, planning on starting with MICO and then let's those\n> using other implementations build up from there...\n> \n> \n> On Mon, 26 Apr 1999, Michael Lausch wrote:\n> \n> > >>>>> \"t\" == Taral <[email protected]>\n> > >>>>> wrote the following on Mon, 26 Apr 1999 13:20:55 -0500 (CDT)\n> > \n> > t> On Mon, 26 Apr 1999, Todd Graham Lewis wrote:\n> > >> Given that PostgreSQL is a C project, why not use a C ORB like ORBit?\n> > >> It's a whole lot faster than MICO...\n> > \n> > t> Does it fully support 2.2?\n> > \n> > yes. And i've written a multi database package for the Gnome Project\n> > using ORbit. Take a look at <http://www.lausch.at/gda> to find the\n> > documentation. You'll get the source from the gnome ftp server. It's\n> > called gnome-db and postgres is already a supported database.,\n> > \n> > t> Taral\n> > --\n> > Michael Lausch\n> > See my web page <http://www.lausch.at/> or query PGP key server for PGP key.\n> > \"Reality is that which, when you stop believing in it, doesn't go away\".\n> > -- Philip K. Dick\n> > \n> \n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n\n", "msg_date": "Mon, 26 Apr 1999 21:50:42 -0500 (CDT)", "msg_from": "Taral <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: CORBA again. 
(was: light dawns: serious bug in\n\tFE/BE protocol handling)" }, { "msg_contents": "On Mon, 26 Apr 1999, The Hermit Hacker wrote:\n\n> I use KDE at home, as well as at work...KDE uses MICO...\n> \n> The CORBA implemetaion, as discussed before, is going to have to be\n> 'wrappered' so that it isn't tied to any particular implementation...\n\nThe language bindings for CORBA are completely standardized. The only\nthing you have to change are your header files and link options.\n\nThat said, the CORBA C bindings are different from the CORBA C++ bindings,\nand so you are going to have to choose which language to pursue. I suggest\nthat staying with C is the natural choice for PostgreSQL, and you have\ntwo very good free ORBs to choose from: ORBit and ILU. For C++, you\nhave three real choices: OmniORB, MICO, and TAO. The two performance\nchamps are ORBit and TAO, which fortunately enough covers both of your\nlanguage choices. The language issue weighs heavily in my mind as a C\nprogrammer who doesn't want to play the popular-subset-of-the-month\ngame with C++. (I think it's templates this month, although virtual\nsomethings are making a comeback. But don't use multiple inheritance!\nThat was all the rage back in '95, but now only dorks use it!)\n\nPlus, if you code the Postgres part with ORBit, there's no reason why\nclients can't use MICO; that's the whole point.\n\n> Taral was, I believe, planning on starting with MICO and then let's those\n> using other implementations build up from there...\n\nIs all of the underlying glue work going to be done in C++?\n\n--\nTodd Graham Lewis Postmaster, MindSpring Enterprises\[email protected] (800) 719-4664, x22804\n\n\"A pint of sweat will save a gallon of blood.\" -- George S. Patton\n\n", "msg_date": "Mon, 26 Apr 1999 22:54:58 -0400 (EDT)", "msg_from": "Todd Graham Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: CORBA again. 
(was: light dawns: serious bug in\n\tFE/BE protocol handling)" }, { "msg_contents": "(As I look at this again)\n\nOn Mon, 26 Apr 1999, Michael Lausch wrote:\n\n> yes. And i've written a multi database package for the Gnome Project\n> using ORbit. Take a look at <http://www.lausch.at/gda> to find the\n> documentation. You'll get the source from the gnome ftp server. It's\n> called gnome-db and postgres is already a supported database.,\n\nAhh.. Yeah, I intend to map the postgresql functionality onto COSS as much\nas possible, so I'm implementing the Query Service.\n\nTaral\n\n", "msg_date": "Tue, 27 Apr 1999 14:54:47 -0500 (CDT)", "msg_from": "Taral <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: CORBA again. (was: light dawns: serious bug in\n\tFE/BE protocol handling)" } ]
[ { "msg_contents": "All bug reports should be submitted to [email protected] ...\n\n\nOn Sun, 25 Apr 1999, Clayton Cottingham wrote:\n\n> ok, i got an error message\n> \n> by the way if there is somewhere more appropriate to send these please\n> tell me\n> \n> hope it helps!\n> \n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org", "msg_date": "Mon, 26 Apr 1999 16:16:24 +0000", "msg_from": "Clayton Cottingham <[email protected]>", "msg_from_op": true, "msg_subject": "[Fwd: 6.5beta]" } ]
[ { "msg_contents": "\n I can not manage to get strings with backtick characters into a\n postgres database using the DBD::Pg module. I've tried using the\n built in quote method and escaping the backticks with a\n regex.. still no joy.\n\n Am I missing something? Is this a bug? Is it with DBD::Pg, DBI or\n Postgres itself?\n\n Code snippet follows.. all replies appreciated.\n\n\n-darrell golliher\n\n\n\nThis is an excerpt from a CGI program. If one of the form parameters\ncontains a backtick, then the update fails. Otherwise updates are\nsuccessful. Since everybody hitting my application wants to type \ncontractions like \"can't\", \" won't\", and \"shouldn't\" the script fails\na lot. :(\n\n\n\tmy $hacks = $hackday_dbh->quote(param('hacks'));\n\tmy $schedule = $hackday_dbh->quote(param('schedule'));\n\t$schedule =~ s/'/\\'/g;\n\tmy $stuff = $hackday_dbh->quote(param('stuff'));\n\tmy $shout = $hackday_dbh->quote(param('shout'));\n\tmy $attending = $hackday_dbh->quote(param('attending'));\n\n\tmy %cookiedata = cookie('coewos');\n\tmy $userid = $cookiedata{'userid'};\n\n\tmy $query = \"update data SET shout='$shout',hacks='$hacks',schedule='$schedule',attending='$attending',stuff='$stuff' where email ~* '$userid'\";\n\tmy $sth = $hackday_dbh->prepare($query);\n\t$sth->execute;\n\n\n", "msg_date": "26 Apr 1999 14:39:02 -0400", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Quoting backticks" } ]
[ { "msg_contents": "Hehe,\n\nThis is why I needed to pass it by the backend gurus :)\n\nThanks for pointing out these additional issues, Jan.\n\n-Ryan\n\n> > This is on the TODO list.\n> >\n> > I actually have a solution that seems to work fine, but I wanted to run it \npast\n> > the backend guru's after we have finished the 6.5 beta.\n> >\n> > Sorry I din't get it finished before the beta started.\n> >\n> > -Ryan\n> \n> I wonder how it does!\n> \n> Have the following:\n> \n> CREATE TABLE t1 (a int4, b text);\n> CREATE TABLE t2 (c int4, d text);\n> CREATE VIEW v2 AS SELECT DISTINCT ON c * FROM t2;\n> \n> Populate them with:\n> \n> t1:\n> 1 'one'\n> 1 'ena'\n> 2 'two'\n> 2 'thio'\n> 3 'three'\n> 3 'tria'\n> 4 'four'\n> 4 'tessera'\n> \n> t2:\n> 1 'I'\n> 1 'eins'\n> 2 'II'\n> 2 'zwei'\n> 3 'III'\n> 3 'drei'\n> \n> Now you do\n> \n> SELECT t1.a, t1.b, v2.d FROM t1, v2\n> WHERE t1.a = v2.c;\n> \n> Does that work and produce the correct results? Note that\n> there are more than one correct results. The DISTINCT SELECT\n> from t2 already has. But in any case, the above SELECT should\n> present 6 rows (all the rows of t1 from 1 to 33 in english\n> and greek) and column d must show either the roman or german\n> number.\n> \n> To make it more complicated, add table t3 and populate it\n> with more languages. Then setup\n> \n> CREATE VIEW v3 AS SELECT DISTINCT ON e * FROM t3;\n> \n> and expand the above SELECT to a join over t1, v2, v3.\n> \n> Finally, think about a view that is a DISTINCT SELECT over\n> multiple tables. Now you build another view as SELECT from\n> the first plus some other table and make the new view\n> DISTINCT again.\n> \n> The same kind of problem causes that views currently cannot\n> have ORDER BY or GROUP BY clauses. All these clauses can only\n> appear once per query, so there is no room where the rewrite\n> system can place multiple different ones. 
Implementing this\n> requires first dramatic changes to the querytree layout and I\n> think it needs subselecting RTE's too.\n> \n> \n> Sorry - Jan\n> \n> --\n> \n> #======================================================================#\n> # It's easier to get forgiveness for being wrong than for being right. #\n> # Let's break this rule - forgive me. #\n> #======================================== [email protected] (Jan Wieck) #\n> \n> \n", "msg_date": "Mon, 26 Apr 1999 12:57:27 -0600 (MDT)", "msg_from": "Ryan Bradetich <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] create view as select distinct (fwd)" } ]
[ { "msg_contents": "Tom Lane wrote:\n\n> >> You might try building the backend with assert checking turned on\n> >> (--enable-cassert) to see if any problems are detected.\n>\n> > Ok, do I need to set something else? I get the following when I try to run\n> > the Java application:\n>\n> > TRAP: Bad Argument to Function Call(\"!(AllocSetContains(set, pointer)):\",\n> > File: \"aset.c\", Line: 292)\n>\n> > !(AllocSetContains(set, pointer)) (0) [No such file or directory]\n>\n> That looks like an assert check to me ... send it along to the hackers\n> list, because aset.c is outside what I know about the backend.\n> Can you provide a debugger backtrace at that point, by any chance?\n\n(ultra5, Solaris 7, cc: WorkShop Compilers 5.0 98/12/15 C 5.0, jdk1.2.1)\n\nI've configured the pgsql as:\n\n./configure --prefix=/opt/pgsql \\\n --with-tcl \\\n --with-includes=/opt/tcl_tk/include \\\n --with-tclconfig=/opt/tcl_tk/lib \\\n --with-template=solaris_sparc_cc \\\n --with-CC=cc \\\n --enable-cassert \\\n --with-perl\n\nThen started postmaster as:\n su bpm -c \"${PGSQLHOME}/bin/postmaster -i -d -D ${PGDATA} 2>&1 >\n${PGDATA}/trace.log\"\n\nThen ran a Java application to retrieve blobs from a database that I created and\npopulated. 
Table def is:\nCREATE TABLE item (item_num int PRIMARY KEY, item_picture oid, item_descr text,\nship_unit varchar(15), unit_price money, stock int)\n\npsql yields:\n\nmini_stores=> select * from item;\nitem_num|item_picture|item_descr\n|ship_unit |unit_price|stock\n--------+------------+--------------------------------------------------------------+----------+----------+-----\n\n 1| 18730|Maximum protection for high-mileage\nrunners |pair |$75.50 | 1000\n 2| 18745|Customize your mountain bike with extra-durable\ncrankset |each |$20.00 | 500\n 3| 18762|Long drive golf balls -fluorescent\nyellow |pack of 12|$50.00 | 200\n 4| 18780|Your first season's baseball\nglove |pair |$25.00 | 250\n 5| 18796|Minimum chin contact, feather-light, maximum protection\nhelmet|each |$35.50 | 50\n(5 rows)\n\n\nThe Java app is giving errors, so I compiled pgsql with ASSERT_CHECKING enabled,\nbut I get the following error:\n\nTRAP: Bad Argument to Function Call(\"!(AllocSetContains(set, pointer)):\", File:\n\"aset.c\", Line: 292)\n\n!(AllocSetContains(set, pointer)) (0) [No such file or directory]\n/opt/pgsql/bin/postmaster: reaping dead processes...\n/opt/pgsql/bin/postmaster: CleanupProc: pid 27783 exited with status 134\n/opt/pgsql/bin/postmaster: CleanupProc: reinitializing shared memory and\nsemaphores\n\n\nDo I need to configure something else?\n\nThe error, 134 is\nvlad: checkERR 132\n#define ENOBUFS 132 /* No buffer space available */\n\nThanks.\n\n--\nBrian Millett\nEnterprise Consulting Group \"Heaven can not exist,\n(314) 205-9030 If the family is not eternal\"\[email protected] F. Ballard Washburn\n\n\n\n", "msg_date": "Mon, 26 Apr 1999 15:57:26 -0500", "msg_from": "Brian P Millett <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: ERROR: index_rescan: invalid amrescan regproc ???" } ]
[ { "msg_contents": "This implies that the \"group by\" clause is not supported in views. I have\ncreated views that use the group by clause and they appear to work. I have\nnot verified the content of the records. I would like to know more about\nwhat Jan means when he says that \"group by\" is not supported in views? Does\nit mean that the content of the results could be unexpected or are they\nconditions where they may work and other conditions where they don't work?\nMore info would be greatly appreciated.\n\nThanks, Michael\n\n\t-----Original Message-----\n\tFrom:\[email protected] [SMTP:[email protected]]\n\tSent:\tMonday, April 26, 1999 9:35 AM\n\tTo:\[email protected]\n\tCc:\[email protected]\n\tSubject:\tRe: [HACKERS] create view as select distinct (fwd)\n\n\t>\n\t> This is on the TODO list.\n\t>\n\t> I actually have a solution that seems to work fine, but I wanted\nto run it past\n\t> the backend guru's after we have finished the 6.5 beta.\n\t>\n\t> Sorry I din't get it finished before the beta started.\n\t>\n\t> -Ryan\n\n\t I wonder how it does!\n\n\t Have the following:\n\n\t CREATE TABLE t1 (a int4, b text);\n\t CREATE TABLE t2 (c int4, d text);\n\t CREATE VIEW v2 AS SELECT DISTINCT ON c * FROM t2;\n\n\t Populate them with:\n\n\t t1:\n\t 1 'one'\n\t 1 'ena'\n\t 2 'two'\n\t 2 'thio'\n\t 3 'three'\n\t 3 'tria'\n\t 4 'four'\n\t 4 'tessera'\n\n\t t2:\n\t 1 'I'\n\t 1 'eins'\n\t 2 'II'\n\t 2 'zwei'\n\t 3 'III'\n\t 3 'drei'\n\n\t Now you do\n\n\t SELECT t1.a, t1.b, v2.d FROM t1, v2\n\t WHERE t1.a = v2.c;\n\n\t Does that work and produce the correct results? Note that\n\t there are more than one correct results. The DISTINCT SELECT\n\t from t2 already has. But in any case, the above SELECT should\n\t present 6 rows (all the rows of t1 from 1 to 33 in english\n\t and greek) and column d must show either the roman or german\n\t number.\n\n\t To make it more complicated, add table t3 and populate it\n\t with more languages. 
Then setup\n\n\t CREATE VIEW v3 AS SELECT DISTINCT ON e * FROM t3;\n\n\t and expand the above SELECT to a join over t1, v2, v3.\n\n\t Finally, think about a view that is a DISTINCT SELECT over\n\t multiple tables. Now you build another view as SELECT from\n\t the first plus some other table and make the new view\n\t DISTINCT again.\n\n\t The same kind of problem causes that views currently cannot\n\t have ORDER BY or GROUP BY clauses. All these clauses can only\n\t appear once per query, so there is no room where the rewrite\n\t system can place multiple different ones. Implementing this\n\t requires first dramatic changes to the querytree layout and I\n\t think it needs subselecting RTE's too.\n\n\n\tSorry - Jan\n\n\t--\n\n\t\n#======================================================================#\n\t# It's easier to get forgiveness for being wrong than for being\nright. #\n\t# Let's break this rule - forgive me.\n#\n\t#======================================== [email protected] (Jan\nWieck) #\n\n\t\n", "msg_date": "Mon, 26 Apr 1999 18:05:44 -0500", "msg_from": "Michael J Davis <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] views and group by (formerly: create view as selec\n\tt distinct)" }, { "msg_contents": "Michael J Davis wrote:\n\n>\n> This implies that the \"group by\" clause is not supported in views. I have\n> created views that use the group by clause and they appear to work. I have\n> not verified the content of the records. I would like to know more about\n> what Jan means when he says that \"group by\" is not supported in views? Does\n> it mean that the content of the results could be unexpected or are they\n> conditions where they may work and other conditions where they don't work?\n> More info would be greatly appreciated.\n\n I tried to make it and it works partially. 
The problems arise\n if you have a view with a group by clause but do not select\n the attributes the group by clause uses:\n\n CREATE TABLE t1 (a int4, b int4);\n CREATE VIEW v1 AS SELECT b, count(b) FROM t1 GROUP BY b;\n\n SELECT count FROM v1;\n SELECT count(*) FROM v1;\n\n Both selects crash the backend!\n\n If you have a view that uses GROUP BY and do a simple SELECT\n * from it, then it will work and return the correct results.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Tue, 27 Apr 1999 09:20:09 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] views and group by (formerly: create view as selec" }, { "msg_contents": "[email protected] (Jan Wieck) writes:\n> I tried to make it and it works partially. The problems arise\n> if you have a view with a group by clause but do not select\n> the attributes the group by clause uses:\n\n> CREATE TABLE t1 (a int4, b int4);\n> CREATE VIEW v1 AS SELECT b, count(b) FROM t1 GROUP BY b;\n\n> SELECT count FROM v1;\n> SELECT count(*) FROM v1;\n\n> Both selects crash the backend!\n\nHmm, this sounds very similar to a problem I was looking at on Sunday:\n\nselect sum(quantity), ID+1 from aggtest1 group by ID+1;\nERROR: replace_agg_clause: variable not in target list\n\nThe error message is new as of Sunday; with code older than that this\nwill crash the backend. And, in fact, what I get from Jan's example\nabove is:\n\nSELECT count FROM v1;\nERROR: replace_agg_clause: variable not in target list\n\nIn both situations, it's necessary to add variables to the target list\nthat aren't in the list produced by the parser. 
We have code that does\nthat sort of thing, but it's evidently not getting applied...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Apr 1999 10:50:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] views and group by (formerly: create view as selec " }, { "msg_contents": "Tom Lane wrote:\n\n>\n> [email protected] (Jan Wieck) writes:\n> > I tried to make it and it works partially. The problems arise\n> > if you have a view with a group by clause but do not select\n> > the attributes the group by clause uses:\n>\n> > CREATE TABLE t1 (a int4, b int4);\n> > CREATE VIEW v1 AS SELECT b, count(b) FROM t1 GROUP BY b;\n>\n> > SELECT count FROM v1;\n> > SELECT count(*) FROM v1;\n>\n> > Both selects crash the backend!\n>\n> Hmm, this sounds very similar to a problem I was looking at on Sunday:\n>\n> select sum(quantity), ID+1 from aggtest1 group by ID+1;\n> ERROR: replace_agg_clause: variable not in target list\n>\n> The error message is new as of Sunday; with code older than that this\n> will crash the backend. And, in fact, what I get from Jan's example\n> above is:\n>\n> SELECT count FROM v1;\n> ERROR: replace_agg_clause: variable not in target list\n>\n> In both situations, it's necessary to add variables to the target list\n> that aren't in the list produced by the parser. We have code that does\n> that sort of thing, but it's evidently not getting applied...\n\n Yes, and the attributes could be marked junk so they are\n taken out of the final result again later. But I wouldn't\n spend time on it because I think it's an incomplete solution.\n\n Let's have a view doing a sum() over a field with a group by.\n The values are measured in meters. 
And there is another table\n with factors to convert between meters and inches, feet,\n yards.\n\n CREATE TABLE t1 (id serial, owner text, len float8);\n CREATE TABLE t2 (quant text, factor float8);\n CREATE VIEW v1 AS SELECT owner, sum(len) FROM t1 GROUP BY owner;\n\n Now you want the sums converted to any quantity and do a:\n\n SELECT a.owner, a.sum as meter, b.quant, a.sum * b.factor as size\n FROM v1 a, t2 b;\n\n Ooops - there's only one row per owner left. And more OOOPS -\n it has sum()*count(* from t2) as meters! You must explicitly\n tell \"GROUP BY a.owner, b.quant\" to get the correct result.\n This is a case, where IMHO nothing else than a subselecting\n RTE could help. The problem in this case is that the rewrite\n system would have to add another attribute to the group by\n clause which is already there. But I see absolutely no way\n how it could decide which one. And there might be cases where\n totally no grouping could produce the correct result.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Tue, 27 Apr 1999 18:24:23 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] views and group by (formerly: create view as selec" }, { "msg_contents": "[email protected] (Jan Wieck) wrote:\n> CREATE TABLE t1 (a int4, b int4);\n> CREATE VIEW v1 AS SELECT b, count(b) FROM t1 GROUP BY b;\n> SELECT count FROM v1;\n> [ ... ker-boom ... ]\n\nand I said I thought it was the same planner bug I was chasing in\nnon-VIEW-using examples of GROUP BY. It turns out it's not the same.\nIn the above example, the breakage happens in rewrite before the\nplanner ever sees it. 
When make_groupPlan is called, it sees a\nGroupClause like this:\n\n(gdb) p *((GroupClause *) 0x400b8690)->entry\n$107 = {type = T_TargetEntry, resdom = 0x400b86c0, fjoin = 0x0,\n expr = 0x400b8700}\n(gdb) p *((GroupClause *) 0x400b8690)->entry->resdom\n$108 = {type = T_Resdom, resno = 1, restype = 23, restypmod = -1,\n resname = 0x400b86e8 \"b\", reskey = 0, reskeyop = 0, resjunk = 0}\n(gdb) p *(Var*)((GroupClause *) 0x400b8690)->entry->expr\n$114 = {type = T_Var, varno = 4, varattno = 2, vartype = 23, vartypmod = -1,\n varlevelsup = 0, varnoold = 4, varoattno = 2}\n\nand a target list like this:\n\n(gdb) p *(TargetEntry*)0x400b8a70\n$118 = {type = T_TargetEntry, resdom = 0x400b8a88, fjoin = 0x0,\n expr = 0x400b8ac8}\n(gdb) p *((TargetEntry*)0x400b8a70)->resdom\n$119 = {type = T_Resdom, resno = 1, restype = 23, restypmod = -1,\n resname = 0x400b8ab0 \"count\", reskey = 0, reskeyop = 0, resjunk = 0}\n(gdb) p *(Aggref*)((TargetEntry*)0x400b8a70)->expr\n$121 = {type = T_Aggref, aggname = 0x400b8af0 \"count\", basetype = 0,\n aggtype = 23, target = 0x400b8b08, aggno = 0, usenulls = 0 '\\000'}\n(gdb) p *(Var*)((Aggref*)((TargetEntry*)0x400b8a70)->expr)->target\n$123 = {type = T_Var, varno = 4, varattno = 2, vartype = 23, vartypmod = -1,\n varlevelsup = 0, varnoold = 4, varoattno = 2}\n\nwhich is all fine except that the two different expressions have been\ngiven the same Resdom number (resno = 1 in both). That confuses\nmake_groupPlan into thinking that they are the same expression, and\ntrouble ensues.\n\nIf I understand this stuff correctly, the rewriter should have been\ncareful to assign different Resdom numbers to distinct expressions\nin the target and group-by lists of the rewritten query. That's how\nthings look in an un-rewritten query, anyway. 
So I think this is a\nrewrite bug.\n\nIf you don't like that answer, it might be possible to change the\nplanner so that it doesn't put any faith in the Resdom numbers, but\nuses equal() on the expr fields to decide whether target and group-by\nentries are the same. That would be a slower but probably much more\nrobust approach. Jan, what do you think?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 01 May 1999 19:51:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] views and group by (formerly: create view as selec " }, { "msg_contents": "\nI now get. I am sure it instills confidence in our users:\n\t\n\ttest=> CREATE TABLE t1 (a int4, b int4);\n\tCREATE\n\ttest=> \n\ttest=> CREATE VIEW v1 AS SELECT b, count(b) FROM t1 GROUP BY b;\n\tCREATE\n\ttest=> SELECT count FROM v1;\n\tERROR: union_planner: query is marked hasAggs, but I don't see any\n\n\n> [email protected] (Jan Wieck) wrote:\n> > CREATE TABLE t1 (a int4, b int4);\n> > CREATE VIEW v1 AS SELECT b, count(b) FROM t1 GROUP BY b;\n> > SELECT count FROM v1;\n> > [ ... ker-boom ... ]\n> \n> and I said I thought it was the same planner bug I was chasing in\n> non-VIEW-using examples of GROUP BY. It turns out it's not the same.\n> In the above example, the breakage happens in rewrite before the\n> planner ever sees it. 
When make_groupPlan is called, it sees a\n> GroupClause like this:\n> \n> (gdb) p *((GroupClause *) 0x400b8690)->entry\n> $107 = {type = T_TargetEntry, resdom = 0x400b86c0, fjoin = 0x0,\n> expr = 0x400b8700}\n> (gdb) p *((GroupClause *) 0x400b8690)->entry->resdom\n> $108 = {type = T_Resdom, resno = 1, restype = 23, restypmod = -1,\n> resname = 0x400b86e8 \"b\", reskey = 0, reskeyop = 0, resjunk = 0}\n> (gdb) p *(Var*)((GroupClause *) 0x400b8690)->entry->expr\n> $114 = {type = T_Var, varno = 4, varattno = 2, vartype = 23, vartypmod = -1,\n> varlevelsup = 0, varnoold = 4, varoattno = 2}\n> \n> and a target list like this:\n> \n> (gdb) p *(TargetEntry*)0x400b8a70\n> $118 = {type = T_TargetEntry, resdom = 0x400b8a88, fjoin = 0x0,\n> expr = 0x400b8ac8}\n> (gdb) p *((TargetEntry*)0x400b8a70)->resdom\n> $119 = {type = T_Resdom, resno = 1, restype = 23, restypmod = -1,\n> resname = 0x400b8ab0 \"count\", reskey = 0, reskeyop = 0, resjunk = 0}\n> (gdb) p *(Aggref*)((TargetEntry*)0x400b8a70)->expr\n> $121 = {type = T_Aggref, aggname = 0x400b8af0 \"count\", basetype = 0,\n> aggtype = 23, target = 0x400b8b08, aggno = 0, usenulls = 0 '\\000'}\n> (gdb) p *(Var*)((Aggref*)((TargetEntry*)0x400b8a70)->expr)->target\n> $123 = {type = T_Var, varno = 4, varattno = 2, vartype = 23, vartypmod = -1,\n> varlevelsup = 0, varnoold = 4, varoattno = 2}\n> \n> which is all fine except that the two different expressions have been\n> given the same Resdom number (resno = 1 in both). That confuses\n> make_groupPlan into thinking that they are the same expression, and\n> trouble ensues.\n> \n> If I understand this stuff correctly, the rewriter should have been\n> careful to assign different Resdom numbers to distinct expressions\n> in the target and group-by lists of the rewritten query. That's how\n> things look in an un-rewritten query, anyway. 
So I think this is a\n> rewrite bug.\n> \n> If you don't like that answer, it might be possible to change the\n> planner so that it doesn't put any faith in the Resdom numbers, but\n> uses equal() on the expr fields to decide whether target and group-by\n> entries are the same. That would be a slower but probably much more\n> robust approach. Jan, what do you think?\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 May 1999 12:51:03 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] views and group by (formerly: create view as selec" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> I now get. I am sure it instills confidence in our users:\n> \tERROR: union_planner: query is marked hasAggs, but I don't see any\n\nYes :-(. I've been waiting for Jan to respond to the issue --- I think\nthis is a rewriter problem, so I wanted to know whether he could do\nanything with it. (See my message \"GROUP BY fixes committed\" dated\n02 May 1999 20:54:30 -0400.)\n\nIt'd be possible to work around this problem inside the planner, by not\nbelieving what the rewriter says about either resnos or hasAggs, but\nthat seems like a kluge rather than a fix.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 May 1999 13:31:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] views and group by (formerly: create view as selec " }, { "msg_contents": ">\n> Bruce Momjian <[email protected]> writes:\n> > I now get. I am sure it instills confidence in our users:\n> > ERROR: union_planner: query is marked hasAggs, but I don't see any\n>\n> Yes :-(. 
I've been waiting for Jan to respond to the issue --- I think\n> this is a rewriter problem, so I wanted to know whether he could do\n> anything with it. (See my message \"GROUP BY fixes committed\" dated\n> 02 May 1999 20:54:30 -0400.)\n>\n> It'd be possible to work around this problem inside the planner, by not\n> believing what the rewriter says about either resnos or hasAggs, but\n> that seems like a kluge rather than a fix.\n\n Sorry - forgot about that one.\n\n I think the best place to check if the query has aggregates\n or not is at the beginning of the planner. The rewrite system\n is recursive, and thus the check at the end of one cycle\n doesn't guarantee that it will still be true at the end of\n all rewrites.\n\n OTOH views with aggregates are very buggy and introducing\n ton's of problems (as the other threads show). I'm not sure\n that it was a good idea to make it partially working :-( for\n v6.4. There was only the advice that they shouldn't be used.\n Now it's a released feature. More and more pressure for the\n subselecting RTE's which are IMHO the only way to solve that\n all cleanly.\n\n The other issue about the resno's is something that I must\n still search for in the rewriter. There are bad problems\n where it seems to mangle up not only resno's, it also get's\n lost of varlevelsup somehow and producing totally wrong\n varno's into subselects. Still problems coming from the\n EXCEPT/INTERSECT patch I haven't found so far >:-{\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Mon, 10 May 1999 20:01:24 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] views and group by (formerly: create view as selec" } ]
[ { "msg_contents": "\nMoved to hackers. It's probably the best place. As to the comparison,\nI just started looking into that this morning with a question from\nsomeone else - with luck we'll have something there around the 6.5 release.\n\n\nOn 27-Apr-99 Joel Shellman wrote:\n> Why don't you have a comparison with MySQL in your chart? I guess I ran\n> into MySQL first, so perhaps it's a result of my experience, but I've\n> seen MySQL as the closest thing to PostgreSQL's market as they are both\n> open source (to a certain extent) and freely available.\n> \n> At least do you have any idea how fast postgresql is compared to MySQL?\n> \n> Also, it says that postgresql is not multithreaded--what exactly does\n> that mean? Does that mean it can only handle one query at a time? That\n> seems very strange.\n> \n> Thank you,\n> -- \n> Joel Shellman\n> knOcean Interactive Corporation\n> http://corp.knOcean.com/\n\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n", "msg_date": "Mon, 26 Apr 1999 21:07:01 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Mysql comparison" }, { "msg_contents": "\nOn Mon, 26 Apr 1999, Vince Vielhaber wrote:\n\n...\n> > Also, it says that postgresql is not multithreaded--what exactly does\n> > that mean? Does that mean it can only handle one query at a time? 
That\n> > seems very strange.\n\n There is more than one way of doing more than one thing at a time.\nMultithreading is one way, and multiprocessing is another.\n\n BTW, even though MySQL is multithreaded, any thread that modifies a\ntable (update, delete, insert) will block all other threads on that table\nuntil it completes.\n\n Therefore, multithreading or multiprocessing has little to do with any\nparallelism an rdbms may utilize. You have to look deeper.\n \n> -- \n> ==========================================================================\n> Vince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n> # include <std/disclaimers.h> TEAM-OS2\n> Online Campground Directory http://www.camping-usa.com\n> Online Giftshop Superstore http://www.cloudninegifts.com\n> ==========================================================================\n\nTom\n\n", "msg_date": "Mon, 26 Apr 1999 20:44:37 -0700 (PDT)", "msg_from": "Tom <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RE: Mysql comparison" }, { "msg_contents": "> Moved to hackers. It's probably the best place. As to the comparison,\n> I just started looking into that this mourning with a question from\n> someone else - with luck we'll have something there around the 6.5 release.\n> > Why don't you have a comparison with MySQL in your chart? I guess I ran\n> > into MySQL first, so perhaps it's a result of my experience, but I've\n> > seen MySQL as the closest thing to PostgreSQL's market as they are both\n> > open source (to a certain extent) and freely available.\n\nafaik MySQL is freely available for non-commercial use only.\n\n> > At least do you have any idea how fast postgresql is compared to MySQL?\n\nSince MySQL does not provide transactions, perhaps the most\nfundamental *required* feature for a relational database, it may be\nfaster for simple queries on small databases. Since Postgres has an\noptimizer, transactions, etc. 
it should perform better on large\nqueries and on complex transactions. But ymmv.\n\n> > Also, it says that postgresql is not multithreaded--what exactly does\n> > that mean? Does that mean it can only handle one query at a time? That\n> > seems very strange.\n\n... And intentionally misleading. What it means is that the MySQL\nfolks are providing disinformation and have no apparent interest in\ndoing otherwise. Since it is a commercial product, you had better be\nready for marketing BS from them.\n\nThe last time I looked, their \"features comparison\", labeled \"Crash\nMe\" !, had a 5-20% error rate on the facts, and that is only for those\nPostgres features I was familiar with. Who knows how correct the\ncomparisons are for other DBs?\n\nCheck the archives for previous discussions...\n\n - Tom\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 27 Apr 1999 13:52:56 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RE: Mysql comparison" }, { "msg_contents": "Hello!\n\nOn Tue, 27 Apr 1999, Thomas Lockhart wrote:\n> afaik MySQL is freely available for non-commercial use only.\n\n No, MySQL is free for any use. The only exception is you cannot bundle\nit with a commercial product.\n\n> - Tom\n> \n> -- \n> Thomas Lockhart\t\t\t\[email protected]\n> South Pasadena, California\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Tue, 27 Apr 1999 18:02:27 +0400 (MSD)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RE: Mysql comparison" }, { "msg_contents": "> You totally missed on this one. It was on the POSTGRESQL SITE that said\n> that postgresql is not multithreaded! 
On the comparison chart comparing\n> postgresql with other databases, oracle and sybase are the only ones\n> listed as multithreaded.\n\nIs my face red :(\n\nI'll save the MySQL diatribe for later...\n\n - Tom\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 27 Apr 1999 14:42:46 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RE: Mysql comparison" }, { "msg_contents": "> > You totally missed on this one. It was on the POSTGRESQL SITE that said\n> > that postgresql is not multithreaded! On the comparison chart comparing\n> > postgresql with other databases, oracle and sybase are the only ones\n> > listed as multithreaded.\n> \n> Is my face red :(\n> \n> I'll save the MySQL diatribe for later...\n\nMy experience is that MySQL has gotten better at being more honest.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 27 Apr 1999 12:25:20 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RE: Mysql comparison" } ]
[ { "msg_contents": "I have a table in which the first four variables are the unique primary\nkey. Recently, I ran a vacuum and was told that the number of tuples did\nnot equal the number in the heap for the primary key index to this\ntable. I'm not exactly sure what this means. It sounds like one or more\nof the \"unique\" keys may have more than one set of data.\n\nI often update the table using a text file containing the \"UPDATE\ntable_name SET ...\" and running the \\i command in psql to execute the\ncommands from the text file. For some of these updates, I update a\nvariable-length array that sometimes is longer than 8K. So what I do is\njust update the first 100 or so positions in that array and get the\nremaining positions with another update. I'm wondering if somehow there\nis a bug that doesn't like this and occasionally screws up the indexing.\n\nAnyone know what this error means? I've tried dropping the index and\nre-creating it but still get the same error on a vacuum.\n\n\nThanks.\n-Tony\n\n\n", "msg_date": "Mon, 26 Apr 1999 18:18:12 -0700", "msg_from": "\"G. Anthony Reina\" <[email protected]>", "msg_from_op": true, "msg_subject": "Number of tuples (20300) not the same as heap (20301)" } ]
[ { "msg_contents": "I see this problem all the time too. There are some scary bugs in the bowels\nof the code that controls indexes and primary keys.\n\nAt this point, I have like 4000 duplicated primary keys, and I cannot update\nsections of the table due to key violations.\n\nTim Perdue\nPHPBuilder.com / GotoCity.com / Geocrawler.com\n\n\n-----Original Message-----\nFrom: G. Anthony Reina <[email protected]>\nTo: [email protected] <[email protected]>\nDate: Monday, April 26, 1999 8:14 PM\nSubject: [HACKERS] Number of tuples (20300) not the same as heap (20301)\n\n\n>I have a table in which the first four variables are the unique primary\n>key. Recently, I ran a vacuum and was told that the number of tuples did\n>not equal the number in the heap for the primary key index to this\n>table. I'm not exactly sure whatn this means. It sounds like one or more\n>of the \"unique\" keys may have more than one set of data.\n>\n>I often update the table using a text file containing the \"UPDATE\n>table_name SET ...\" and running the \\i command in psql to execute the\n>commands from the text file. For some of these updates, I update an\n>variable length array that sometimes is longer than 8K. So what I do is\n>just update the first 100 or so positions in that array and get the\n>remaining positions with another update. I'm wondering if somehow there\n>is a bug that doesn't like this a ocassionally screws up the indexing.\n>\n>Anyone know what this error means? I've tried dropping the index and\n>re-creating it but still get the same error on a vacuum.\n>\n>\n>Thanks.\n>-Tony\n>\n>\n\n", "msg_date": "Mon, 26 Apr 1999 20:24:18 -0500", "msg_from": "\"Tim Perdue\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Number of tuples (20300) not the same as heap (20301)" }, { "msg_contents": "Tim Perdue wrote:\n\n> I see this problem all the time too. 
There are some scary bugs in the bowels\n> of the code that controls indexes and primary keys.\n>\n> At this point, I have like 4000 duplicated primary keys, and I cannot update\n> sections of the table due to key violations.\n>\n> Tim Perdue\n> PHPBuilder.com / GotoCity.com / Geocrawler.com\n>\n\nMy co-worker fixed the table in question but the fix was kind of kludgy. He\nperformed a select on the table and outputted it to a text file. Then he wrote\na C program to search the text file for cases of two or more datasets in any\nprimary key. He then deleted the extra datasets with this search list. Lastly,\nhe vacuumed the corrected table. (Apparently, our searching on this table was\nextremely slow because the vacuum doesn't work if you get this type of error).\n\nMaybe you could find a similar fix for your table.\n\nHopefully, some of the hackers will be able to find the problem in the code.\nOtherwise, the db works very well.\n-Tony\n\n\n", "msg_date": "Tue, 27 Apr 1999 10:16:16 -0700", "msg_from": "\"G. Anthony Reina\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Number of tuples (20300) not the same as heap (20301)" } ]
[ { "msg_contents": "In article <[email protected]>,\nBruce Momjian <[email protected]> wrote:\n>> > Is the LIMIT feature very efficient? I want to start using it for quite\n>> > a few things, but I'm wondering, what happens when I have a zillion\n>> > records and I want the first 10, is that going to be an efficient thing\n>> > to do?\n>> \n>> \tI am curious about this myself. As far as I can tell, it doesn't\n>> give anything that cursors don't provide, but introduces more \"features\"\n>> into the parser. Do we need this?\n>\n>This is pretty correct, though it stops the executor from completing all\n>the result queries, while cursors don't. They complete the entire query\n>and store the result for later fetches. We support it because MySQL\n>users and others asked for it.\n\nIt is a nice touch for web interfaces that are going to display\nso many records for a request and not maintain any state between\nrequests.\n\n Les Mikesell\n [email protected]\n", "msg_date": "26 Apr 1999 20:53:23 -0500", "msg_from": "[email protected] (Leslie Mikesell)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Efficiency of LIMIT ?" } ]
[ { "msg_contents": "On Fri, 23 Apr 1999, Vadim Mikheev wrote:\n\n> Still only in my head -:)\n> I'll write something in the next week.\n\nI'm writing an article on PostgreSQL right now touting MVCC, and it sure\nwould be nice to have docs available. I'm over deadline and almost done,\nso it's probably too late at this point, but it would still be good to\nhave docs available, since I hope with my article to send interested\npeople to the web page looking for more info.\n\n--\nTodd Graham Lewis Postmaster, MindSpring Enterprises\[email protected] (800) 719-4664, x22804\n\n\"A pint of sweat will save a gallon of blood.\" -- George S. Patton\n\n", "msg_date": "Mon, 26 Apr 1999 23:25:11 -0400 (EDT)", "msg_from": "Todd Graham Lewis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] MVCC Question" } ]
[ { "msg_contents": "Hello all,\n\nThe following example causes the hang of concurrent \ntransactions(99/04/26 snapshot).\n\nsession-1 => create table tt (id int4);\n\nsession-1 => begin;\nsession-1 => insert into tt values (1);\n\nsession-2 => begin;\nsession-2 => insert into tt values (2);\n\nsession-3 => begin;\nsession-3 => lock table tt;\n\t\t(blocked)\n\nsession-1 => update tt set id=1 where id=1;\n\t\t(blocked)\n\nsession-2 => end;\n\nsession-2 returns immediately,but session-3 and session-1 \nare still blocked \n\n\nThis phenomenon seems to be caused by\nLockResolveConflicts() or DeadLockCheck().\nBoth session-1 and session-2 acquire RowExclusive locks \nby insert operations(InitPlan() in execMain.c). \nThe AccessExclusive lock of session-3 is queued waiting \nfor the release of above locks.\nWhen the update operation of session-1 is executed,the \nsecond RowExclusive lock is rejected by\nLockResolveConflicts() and queued after the AccessExclusive lock \nof session-3. \nThe state is like deadlock but DeadLockCheck() doesn't \nregard the state as deadlock.\n\nThanks.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Tue, 27 Apr 1999 12:49:13 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "Lock freeze ? in MVCC" }, { "msg_contents": "OK, let me comment on this. It does not see this as a deadlock\nbecause session 3 really doesn't have a lock at the point it is hanging.\nA deadlock would be if 1 has a lock that 3 is waiting for, and 3 has a\nlock 1 is waiting for.\n\nHold on, I think I see what you are saying now. It seems the locking\ncode assumes table-level locking, while the new code now has MVCC. I\nbetter look at this. This could be ugly to fix. I look for matching\nlock structure pointers in different backends(lock.c), but now I see\nthat 1 and 2 both are waiting for table tt, but they have different\nlock structures, because they are different types of locks. Yikes. 
\nMaybe I can hack something in there, but I can't imagine how yet. Maybe\nVadim will have a hint.\n\n---------------------------------------------------------------------------\n\n\nsession-1 => create table tt (id int4);\n\nsession-1 => begin;\nsession-1 => insert into tt values (1);\n\nsession-2 => begin;\nsession-2 => insert into tt values (2);\n\nsession-3 => begin;\nsession-3 => lock table tt;\n\t\t(blocked)\n\nsession-1 => update tt set id=1 where id=1;\n\t\t(blocked)\n\nsession-2 => end;\n\nsession-2 returns immediately,but session-3 and session-1 \nare still blocked \n\n\nThis phenomenon seems to be caused by\nLockResolveConflicts() or DeadLockCheck().\nBoth session-1 and session-2 acquire RowExclusive locks \nby insert operations(InitPlan() in execMain.c). \nThe AccessExclusive lock of session-3 is queued waiting \nfor the release of above locks.\nWhen the update operation of session-1 is executed,the \nsecond RowExclusive lock is rejected by\nLockResolveConflicts() and queued after the AccessExclusive lock \nof session-3. \nThe state is like deadlock but DeadLockCheck() doesn't \nregard the state as deadlock.\n\nThanks.\n\nHiroshi Inoue\[email protected]\n\n\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 27 Apr 1999 00:32:39 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Lock freeze ? in MVCC" }, { "msg_contents": "\n\n> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n> Sent: Tuesday, April 27, 1999 1:33 PM\n> To: Hiroshi Inoue\n> Cc: [email protected]\n> Subject: Re: [HACKERS] Lock freeze ? in MVCC\n>\n>\n> OK, let me comment on this. 
It does not to see this as a deadlock\n> because session 3 really doesn't have a lock at the point it is hanging.\n> A deadlock would be if 1 has a lock that 3 is waiting for, and 3 has a\n> lock 1 is waiting for.\n>\n> Hold on, I think I see what you are saying now. It seems the locking\n> code assume table-level locking, while the new code now has MVCC. I\n> better look at this. This could be ugly to fix. I look for matching\n\nI think it's a problem of table-level locking(MVCC has 8 levels of table-\nlocking and even select operations acquire AccessShareLock's.)\nMoreover I found it's not the problem of MVCC only.\nIn fact I found the following case in 6.4.2.\n\nsession-1 => create table tt (id int4);\n\nsession-1 => begin;\nsession-1 => select * from tt;\n\nsession-2 => begin;\nsession-2 => select * from tt;\n\nsession-3 => begin;\nsession-3 => lock table tt;\n \t\t(blocked)\n\nsession-1 => select * from tt;\n \t\t(blocked)\n\nsession-2 => end;\n\nsession-2 returns immediately,but session-3 and session-1\nare still blocked\n\nNow I'm suspicious about the following code in LockResolveConflicts().\n\n /*\n * We can control runtime this option. Default is lockReadPriority=0\n */\n if (!lockReadPriority)\n {\n /* ------------------------\n * If someone with a greater priority is waiting for the\nlock,\n * do not continue and share the lock, even if we can. 
bjm\n * ------------------------\n */\n int myprio =\nLockMethodTable[lockmethod]->ctl->prio[lockmode];\n PROC_QUEUE *waitQueue = &(lock->waitProcs);\n PROC *topproc = (PROC *)\nMAKE_PTR(waitQueue->links.prev);\n\n if (waitQueue->size && topproc->prio > myprio)\n {\n XID_PRINT(\"LockResolveConflicts: higher priority\nproc waiting\",\n result);\n return STATUS_FOUND;\n }\n }\n\n\nAfter I removed above code on trial,select operations in my example case\nare not blocked.\n\nComments ?\n\nThanks.\n\nHiroshi Inoue\[email protected]\n\n", "msg_date": "Tue, 27 Apr 1999 19:24:07 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Lock freeze ? in MVCC" }, { "msg_contents": "Hiroshi Inoue wrote:\n> \n> Now I'm suspicious about the following code in LockResolveConflicts().\n> \n> /*\n> * We can control runtime this option. Default is lockReadPriority=0\n> */\n> if (!lockReadPriority)\n> {\n> /* ------------------------\n> * If someone with a greater priority is waiting for the\nlock,\n> * do not continue and share the lock, even if we can. bjm\n> * ------------------------\n\nYou're right Hiroshi - this must be changed:\n\nif we already have some lock with priority X and new requested\nlock has priority Y, Y <= X, then lock must be granted.\n\nAlso, I would get rid of lockReadPriority stuff...\n\nBruce, what do you think?\n\nVadim\n", "msg_date": "Tue, 27 Apr 1999 18:48:30 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Lock freeze ? in MVCC" }, { "msg_contents": "> Hiroshi Inoue wrote:\n> > \n> > Now I'm suspicious about the following code in LockResolveConflicts().\n> > \n> > /*\n> > * We can control runtime this option. 
Default is lockReadPriority=0\n> > */\n> > if (!lockReadPriority)\n> > {\n> > /* ------------------------\n> > * If someone with a greater priority is waiting for the\n> > lock,\n> > * do not continue and share the lock, even if\n> we can. bjm\n> > * ------------------------\n> \n> You're right Hiroshi - this must be changed:\n> \n> if we already have some lock with priority X and new requested\n> lock has priority Y, Y <= X, then lock must be granted.\n> \n> Also, I would get rid of lockReadPriority stuff...\n> \n> Bruce, what do you think?\n\nThis sounds correct. I thought I needed to have the queue ordering\nchanged so that row-level locks are queued before table-level locks,\nbecause there could be cases of lock escalation from row-level to\ntable-level.\n\nHowever, it seems the problem is that readers don't share locks if\nwriters are waiting. With table-level locks, you never escalated a read\nlock because you had already locked the entire table, while now you do. \nPerhaps we can tell the system not to share read locks unless you are\nsharing your own lock due to a lock escalation.\n\nlockReadPriority() was probably added by Massimo to disable this \"don't\nshare a readlock if another writer is waiting\" behavior. Disabling\nthis behavior may be useful in some cases, but in the general case it\nmay starve writers if there are too many readers.\n\nHowever, it is my understanding that we don't have readers sleeping on\nlocks anymore, but I may be wrong.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 27 Apr 1999 12:19:39 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Lock freeze ? 
in MVCC" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> >\n> > if we already have some lock with priority X and new requested\n> > lock has priority Y, Y <= X, then lock must be granted.\n> >\n> > Also, I would get rid of lockReadPriority stuff...\n> >\n> > Bruce, what do you think?\n> \n> This sounds correct. I thought I needed to have the queue ordering\n> changed so that row-level locks are queued before table-level locks,\n> because there could be cases of lock escalation from row-level to\n> table-level.\n> \n> However, it seems the problem is that readers don't share locks if\n> writers are waiting. With table-level locks, you never escalated a read\n> lock because you had already locked the entire table, while now you do.\n> Perhaps we can tell the system not to share read locks unless you are\n> sharing your own lock due to a lock escalation.\n\nThere is no row-level locks: all locks over tables are\ntable-level ones, btree & hash use page-level locks, but\nnever do page->table level lock escalation.\n\nHowever, I'm not sure that proposed changes will help in the next case:\n\nsession-1 => begin;\nsession-1 => insert into tt values (1);\t--RowExclusiveLock\n\nsession-2 => begin;\nsession-2 => insert into tt values (2);\t--RowExclusiveLock\n\nsession-3 => begin;\nsession-3 => lock table tt;\t\t\t--AccessExclusiveLock\n (conflicts with 1 & 2)\n ^\nsession-1 => lock table tt in share mode;\t--ShareLock\n (conflicts with 2 & 3)\n ^\nThis is deadlock situation and must be handled by\nDeadLockCheck().\n\nVadim\n", "msg_date": "Wed, 28 Apr 1999 11:37:27 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Lock freeze ? 
in MVCC" }, { "msg_contents": "[Charset koi8-r unsupported, filtering to ASCII...]\n> Bruce Momjian wrote:\n> > \n> > >\n> > > if we already have some lock with priority X and new requested\n> > > lock has priority Y, Y <= X, then lock must be granted.\n> > >\n> > > Also, I would get rid of lockReadPriority stuff...\n> > >\n> > > Bruce, what do you think?\n> > \n> > This sounds correct. I thought I needed to have the queue ordering\n> > changed so that row-level locks are queued before table-level locks,\n> > because there could be cases of lock escalation from row-level to\n> > table-level.\n> > \n> > However, it seems the problem is that readers don't share locks if\n> > writers are waiting. With table-level locks, you never escalated a read\n> > lock because you had already locked the entire table, while now you do.\n> > Perhaps we can tell the system not to share read locks unless you are\n> > sharing your own lock due to a lock escalation.\n> \n> There is no row-level locks: all locks over tables are\n> table-level ones, btree & hash use page-level locks, but\n> never do page->table level lock escalation.\n> \n> However, I'm not sure that proposed changes will help in the next case:\n> \n> session-1 => begin;\n> session-1 => insert into tt values (1);\t--RowExclusiveLock\n> \n> session-2 => begin;\n> session-2 => insert into tt values (2);\t--RowExclusiveLock\n> \n> session-3 => begin;\n> session-3 => lock table tt;\t\t\t--AccessExclusiveLock\n> (conflicts with 1 & 2)\n> ^\n> session-1 => lock table tt in share mode;\t--ShareLock\n> (conflicts with 2 & 3)\n> ^\n> This is deadlock situation and must be handled by\n> DeadLockCheck().\n\nOK, I think the problem with the code is that I am preventing a process\nfrom getting a lock if there is a process of higher priority waiting(a\nwriter).\n\nHowever, I never check to see if the current process already holds some\nkind of lock on the table. 
If I change the code so this behaviour will\nbe prevented if the process already holds a lock on the table, would\nthat fix it? In fact, maybe I should always allow it to get the lock if\nit holds any other locks. This should prevent some deadlocks. It would\nput processes at the end of the queue only if they already have no\nlocks, which I think makes sense, because putting him at the end of the\nqueue means all his locks are kept while he sits in the queue.\n\nComments? The fix would be easy, and I think it would make sense.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 27 Apr 1999 23:56:49 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Lock freeze ? in MVCC" }, { "msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On Behalf Of Vadim\n> Mikheev\n> Sent: Wednesday, April 28, 1999 12:37 PM\n> To: Bruce Momjian\n> Cc: [email protected]; [email protected]\n> Subject: Re: [HACKERS] Lock freeze ? in MVCC\n>\n>\n> Bruce Momjian wrote:\n> >\n> > >\n> > > if we already have some lock with priority X and new requested\n> > > lock has priority Y, Y <= X, then lock must be granted.\n> > >\n> > > Also, I would get rid of lockReadPriority stuff...\n> > >\n> > > Bruce, what do you think?\n> >\n> > This sounds correct. I thought I needed to have the queue ordering\n> > changed so that row-level locks are queued before table-level locks,\n> > because there could be cases of lock escalation from row-level to\n> > table-level.\n> >\n> > However, it seems the problem is that readers don't share locks if\n> > writers are waiting. 
With table-level locks, you never escalated a read\n> > lock because you had already locked the entire table, while now you do.\n> > Perhaps we can tell the system not to share read locks unless you are\n> > sharing your own lock due to a lock escalation.\n>\n> There is no row-level locks: all locks over tables are\n> table-level ones, btree & hash use page-level locks, but\n> never do page->table level lock escalation.\n>\n> However, I'm not sure that proposed changes will help in the next case:\n>\n> session-1 => begin;\n> session-1 => insert into tt values (1);\t--RowExclusiveLock\n>\n> session-2 => begin;\n> session-2 => insert into tt values (2);\t--RowExclusiveLock\n>\n> session-3 => begin;\n> session-3 => lock table tt;\t\t\t--AccessExclusiveLock\n> (conflicts with 1 & 2)\n> ^\n> session-1 => lock table tt in share mode;\t--ShareLock\n> (conflicts with 2 & 3)\n> ^\n> This is deadlock situation and must be handled by\n> DeadLockCheck().\n>\n\nIt's really a deadlock ?\nCertainly end/abort of session-2 doesn't wakeup session-1/session3.\nI think it's due to the following code in ProcLockWakeup().\n\n while ((queue_size--) && (proc))\n {\n\n /*\n * This proc will conflict as the previous one did, don't even\n * try.\n */\n if (proc->token == last_locktype)\n continue;\n\n /*\n * This proc conflicts with locks held by others, ignored.\n */\n if (LockResolveConflicts(lockmethod,\n lock,\n proc->token,\n proc->xid,\n (XIDLookupEnt *) NULL) != STATUS_OK)\n {\n last_locktype = proc->token;\n continue;\n }\n\nOnce LockResolveConflicts() doesn't return STATUS_OK,proc\nis not changed and only queue_size-- is executed(never try\nto wakeup other procs).\n\nAfter inserting the code such as\n\tproc = (PROC *) MAKE_PTR(proc->links.prev);\nbefore continue statements,ProcLockWakeup() triggered\nby end/abort of session-2 could try to wakeup session-1.\n\nComments ?\n\nThanks.\n\nHiroshi Inoue\[email protected]\n\n", "msg_date": "Wed, 28 Apr 1999 14:42:10 +0900", "msg_from": 
"\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Lock freeze ? in MVCC" }, { "msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On Behalf Of Vadim\n> Mikheev\n> Sent: Tuesday, April 27, 1999 7:49 PM\n> To: Hiroshi Inoue\n> Cc: Bruce Momjian; [email protected]\n> Subject: Re: [HACKERS] Lock freeze ? in MVCC\n>\n>\n> Hiroshi Inoue wrote:\n> >\n> > Now I'm suspicious about the following code in LockResolveConflicts().\n> >\n> > /*\n> > * We can control runtime this option. Default is\n> lockReadPriority=0\n> > */\n> > if (!lockReadPriority)\n> > {\n> > /* ------------------------\n> > * If someone with a greater priority is waiting for the\n> > lock,\n> > * do not continue and share the lock, even if\n> we can. bjm\n> > * ------------------------\n>\n> You're right Hiroshi - this must be changed:\n>\n> if we already have some lock with priority X and new requested\n> lock has priority Y, Y <= X, then lock must be granted.\n>\n> Also, I would get rid of lockReadPriority stuff...\n>\n\nI found a problem to get rid of lockReadPriority stuff completely.\nIf there's a table which is insert/update/deleted very frequenly by\nseveral processes,processes which request the high priority lock\n(such as vacuum) could hardly acquire the lock for the table.\n\nHow about the following patch ?\n\nThanks.\n\nHiroshi Inoue\[email protected]\n\n*** storage/lmgr/lock.c.orig\tWed Apr 28 10:44:52 1999\n--- storage/lmgr/lock.c\tWed Apr 28 12:00:14 1999\n***************\n*** 815,821 ****\n \t/*\n \t * We can control runtime this option. Default is lockReadPriority=0\n \t */\n! \tif (!lockReadPriority)\n \t{\n \t\t/* ------------------------\n \t\t * If someone with a greater priority is waiting for the lock,\n--- 815,821 ----\n \t/*\n \t * We can control runtime this option. Default is lockReadPriority=0\n \t */\n! 
\tif ((!result->nHolding) && (!lockReadPriority))\n \t{\n \t\t/* ------------------------\n \t\t * If someone with a greater priority is waiting for the lock,\n\n", "msg_date": "Wed, 28 Apr 1999 18:57:52 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Lock freeze ? in MVCC" }, { "msg_contents": "Hiroshi Inoue wrote:\n> \n> > However, I'm not sure that proposed changes will help in the next case:\n> >\n> > session-1 => begin;\n> > session-1 => insert into tt values (1); --RowExclusiveLock\n> >\n> > session-2 => begin;\n> > session-2 => insert into tt values (2); --RowExclusiveLock\n> >\n> > session-3 => begin;\n> > session-3 => lock table tt; --AccessExclusiveLock\n> > (conflicts with 1 & 2)\n> > ^\n> > session-1 => lock table tt in share mode; --ShareLock\n> > (conflicts with 2 & 3)\n> > ^\n> > This is deadlock situation and must be handled by\n> > DeadLockCheck().\n> >\n> \n> It's really a deadlock ?\n> Certainly end/abort of session-2 doesn't wakeup session-1/session3.\n\nYou're right again. \nFirst, I propose the next changes in LockResolveConflicts():\n\nif someone is waiting for lock then we must not take them \ninto account (and skip to check for conflicts with lock\nholders) if\n\n1. we already has lock with >= priority (currently, more\n restrictive locks have greater priority);\nor\n2. lock requested doesn't conflict with lock of any waiters;\nor\n3. conflicting waiter is waiting for us: its lock conflicts\n with locks we already hold, if any.\n\nI foresee that we also will have to change lock queue ordering\ncode but I have to think about it more.\n\nVadim\n", "msg_date": "Wed, 28 Apr 1999 18:55:54 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Lock freeze ? 
in MVCC" }, { "msg_contents": "Hiroshi Inoue wrote:\n> \n> >\n> > if we already have some lock with priority X and new requested\n> > lock has priority Y, Y <= X, then lock must be granted.\n> >\n> > Also, I would get rid of lockReadPriority stuff...\n> >\n> \n> I found a problem to get rid of lockReadPriority stuff completely.\n> If there's a table which is insert/update/deleted very frequenly by\n> several processes,processes which request the high priority lock\n> (such as vacuum) could hardly acquire the lock for the table.\n\nI didn't mean to get rid of code checking waiter locks completely.\nI just said that condition below\n\n if (!lockReadPriority)\n\nis unuseful any more.\n\nRead my prev letter when, imo, we have to take waiters into\naccount.\n\nVadim\n", "msg_date": "Wed, 28 Apr 1999 19:01:56 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Lock freeze ? in MVCC" }, { "msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On Behalf Of Vadim\n> Mikheev\n> Sent: Wednesday, April 28, 1999 7:56 PM\n> To: Hiroshi Inoue\n> Cc: Bruce Momjian; [email protected]\n> Subject: Re: [HACKERS] Lock freeze ? in MVCC\n> \n> \n> Hiroshi Inoue wrote:\n> >\n\n[snip] \n\n> > > This is deadlock situation and must be handled by\n> > > DeadLockCheck().\n> > >\n> > \n> > It's really a deadlock ?\n> > Certainly end/abort of session-2 doesn't wakeup session-1/session3.\n> \n> You're right again. \n> First, I propose the next changes in LockResolveConflicts():\n> \n> if someone is waiting for lock then we must not take them \n> into account (and skip to check for conflicts with lock\n> holders) if\n> \n> 1. we already has lock with >= priority (currently, more\n> restrictive locks have greater priority);\n> or\n> 2. lock requested doesn't conflict with lock of any waiters;\n\nDoes this mean that the lock has a low priority ? If so,this \nstate(2.) is hardly changed. 
When this waiter is wakeupd ? \n\n> or\n> 3. conflicting waiter is waiting for us: its lock conflicts\n> with locks we already hold, if any.\n>\n> I foresee that we also will have to change lock queue ordering\n> code but I have to think about it more.\n>\n\nDo you say about the following stuff in ProcSleep() ?\n\n\tproc = (PROC *) MAKE_PTR(waitQueue->links.prev);\n\n\t/* If we are a reader, and they are writers, skip past them */\n\tfor (i = 0; i < waitQueue->size && proc->prio > prio; i++)\n\t\tproc = (PROC *) MAKE_PTR(proc->links.prev);\n\n\t/* The rest of the queue is FIFO, with readers first, writers last */\n\tfor (; i < waitQueue->size && proc->prio <= prio; i++)\n\t\tproc = (PROC *) MAKE_PTR(proc->links.prev);\n\nSeems above logic is only for 2 levels of priority(READ/WRITE).\nBut it's difficult for me to propose a different design for this.\n\nThanks.\n\nHiroshi Inoue\[email protected] \n", "msg_date": "Thu, 29 Apr 1999 14:35:50 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Lock freeze ? in MVCC" }, { "msg_contents": "> Do you say about the following stuff in ProcSleep() ?\n> \n> \tproc = (PROC *) MAKE_PTR(waitQueue->links.prev);\n> \n> \t/* If we are a reader, and they are writers, skip past them */\n> \tfor (i = 0; i < waitQueue->size && proc->prio > prio; i++)\n> \t\tproc = (PROC *) MAKE_PTR(proc->links.prev);\n> \n> \t/* The rest of the queue is FIFO, with readers first, writers last */\n> \tfor (; i < waitQueue->size && proc->prio <= prio; i++)\n> \t\tproc = (PROC *) MAKE_PTR(proc->links.prev);\n> \n> Seems above logic is only for 2 levels of priority(READ/WRITE).\n> But it's difficult for me to propose a different design for this.\n\nI think this is a classic priority inversion problem. 
If the process\nholds a lock and is going for another, but there is a higher priority\nprocess waiting for the lock, we have to consider that if we go to\nsleep, all people waiting on the lock will have to wait for me to\ncomplete in that queue, so we can either never allow a process that\nalready holds any lock to be superseded by a higher-priority\nsleeping process, or we need to check the priority of all processes\nwaiting on _our_ locks and check when pulling stuff out of the lock\nqueue, because someone of high priority could come while I am in the\nqueue waiting.\n\nMy recommendation is to not have this go-to-end-of-queue behavior when there is\nsomeone higher if I already hold _any_ kind of lock. I can easily make\nthat change if others agree.\n\nIt makes the code you are questioning active ONLY if I don't already\nhave some kind of lock. This may be the most efficient way to do\nthings, and may not lock things up like you have seen.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 29 Apr 1999 02:01:00 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Lock freeze ? in MVCC" }, { "msg_contents": "Hiroshi Inoue wrote:\n> \n> >\n> > if someone is waiting for lock then we must not take them\n> > into account (and skip to check for conflicts with lock\n> > holders) if\n> >\n> > 1. we already has lock with >= priority (currently, more\n> > restrictive locks have greater priority);\n> > or\n> > 2. lock requested doesn't conflict with lock of any waiters;\n> \n> Does this mean that the lock has a low priority? 
If so, this\n\nYes and no -:)\n\nIf we acquire ShareLock (prio 4) and someone with RowShareLock (2)\nis waiting, then this means that the table is locked in ExclusiveLock\n(or AccessExclusiveLock) mode and we'll go to sleep after the\nlock-holder conflict test (so, we could go to sleep just after \nseeing that our prio is higher, without the lock-holder conflict test).\n\nIf we acquire RowShareLock and someone with ShareLock is\nwaiting because the table is locked in RowExclusiveLock mode, then\nwe are allowed to continue: the ShareLock waiter will be woken up\nafter the RowExclusiveLock is released and we don't conflict\nwith any of these lock modes.\n\n> state (2.) is hardly changed. When is this waiter woken up?\n> \n> > or\n> > 3. conflicting waiter is waiting for us: its lock conflicts\n> > with locks we already hold, if any.\n> >\n> > I foresee that we also will have to change lock queue ordering\n> > code but I have to think about it more.\n> >\n> \n> Are you talking about the following stuff in ProcSleep()?\n> \n> proc = (PROC *) MAKE_PTR(waitQueue->links.prev);\n> \n> /* If we are a reader, and they are writers, skip past them */\n> for (i = 0; i < waitQueue->size && proc->prio > prio; i++)\n> proc = (PROC *) MAKE_PTR(proc->links.prev);\n> \n> /* The rest of the queue is FIFO, with readers first, writers last */\n> for (; i < waitQueue->size && proc->prio <= prio; i++)\n> proc = (PROC *) MAKE_PTR(proc->links.prev);\n> \n> It seems the above logic only handles 2 levels of priority (READ/WRITE).\n> But it's difficult for me to propose a different design for this.\n\nYes. I'm not sure how useful the priority logic is now.\n\nKeep thinking...\n\nVadim\n", "msg_date": "Thu, 29 Apr 1999 19:22:25 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Lock freeze ? in MVCC" } ]
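The ProcSleep() ordering quoted in this thread boils down to a stable insert keyed on lock priority: a new waiter files in behind every queued waiter of equal or lower priority and ahead of the first more restrictive one. A minimal model of that rule (hypothetical `insert_pos` helper over a plain array; the real code walks a shared-memory doubly linked list of PROC structs, and `prio` values here are illustrative, not the actual lock-mode priorities):

```c
/* Position at which a waiter of priority p joins the wait queue,
 * modelling the two ProcSleep() loops quoted in the thread: stay
 * FIFO among waiters whose priority is <= ours, but stop at (and
 * jump ahead of) the first higher-priority waiter. */
int insert_pos(const int *prio, int n, int p)
{
    int i = 0;

    while (i < n && prio[i] <= p)
        i++;
    return i;
}
```

With two classes this yields readers-first, writers-last; with more priority levels it also means a newly arriving low-priority waiter is queued ahead of an already-waiting high-priority one, which is the vacuum starvation scenario Hiroshi raises.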
[ { "msg_contents": "I'm using the 990329 snap shot.\n\nI was doing this query...\nSELECT story.approved, story.oid FROM story, webuser, category* WHERE\nstory.webuser = webuser.oid AND story.category = category.oid;\napproved| oid\n--------+------\nf |181760\nt |179383\nt |179384\nt |179385\nt |179386\nt |179387\n(6 rows)\n\nSo far, so good. \"approved\" is a boolean field. Now I try...\n\nSELECT story.approved, story.oid FROM story, webuser, category* WHERE\nstory.webuser = webuser.oid AND story.category = category.oid and\napproved;\napproved|oid\n--------+---\n(0 rows)\n\nYow! I expect 5 of the above 6 rows, but now I get none!\nI tried simplifying the SELECT to only join 2 of the above tables but\nthe problem didn't arise.\n\nHas anything been fixed recently that might account for this? If not, is\nthere anyway I can help you guys to get it fixed?\n", "msg_date": "Tue, 27 Apr 1999 05:17:03 +0000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": true, "msg_subject": "HSavage bug in Postgresql beta?" }, { "msg_contents": "Chris Bitmead <[email protected]> writes:\n> SELECT story.approved, story.oid FROM story, webuser, category* WHERE\n> story.webuser = webuser.oid AND story.category = category.oid and\n> approved;\n> [ fails to find tuples it should find ]\n\nYouch. I could not duplicate that here on a toy example, which may\nmean there is a recently-fixed bug, or it may just mean that there\nare additional conditions required to trigger the bug.\n\nWhat does EXPLAIN say about the plans used for the two queries?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Apr 1999 10:25:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] HSavage bug in Postgresql beta? " }, { "msg_contents": "Well I destroyed the database and recreated it, and the problem didn't\nhappen any more. 
So unfortunately I can't reproduce it now.\n\nI can tell you that the \"approved\" field that was causing the problem\nwas added with an ALTER TABLE ADD COLUMN statement. That's the most\n\"unusual\" thing about the situation. Maybe I'll just have to put it down\nto an aberration :-()\n\n\n\nTom Lane wrote:\n> \n> Chris Bitmead <[email protected]> writes:\n> > SELECT story.approved, story.oid FROM story, webuser, category* WHERE\n> > story.webuser = webuser.oid AND story.category = category.oid and\n> > approved;\n> > [ fails to find tuples it should find ]\n> \n> Youch. I could not duplicate that here on a toy example, which may\n> mean there is a recently-fixed bug, or it may just mean that there\n> are additional conditions required to trigger the bug.\n> \n> What does EXPLAIN say about the plans used for the two queries?\n> \n> regards, tom lane\n", "msg_date": "Wed, 28 Apr 1999 00:39:49 +0000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] HSavage bug in Postgresql beta?" } ]
[ { "msg_contents": "I thought numeric data type on 6.5 allows a very large precision. Am I \nmissing something?\n--\nTatsuo Ishii\n\ntest=> create table t1(n numeric(100,0));\nCREATE\ntest=> \\d t1;\nTable = t1\n+----------------------------------+----------------------------------+-------+\n| Field | Type | Length|\n+----------------------------------+----------------------------------+-------+\n| n | numeric | var |\n+----------------------------------+----------------------------------+-------+\ntest=> insert into t1 values(100000000000000000000000000000);\nNOTICE: Integer input '100000000000000000000000000000' is out of range; promoted to float\nINSERT 149033 1\ntest=> select * from t1;\nn\n-\n1\n(1 row)\n", "msg_date": "Tue, 27 Apr 1999 18:10:16 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "numeric data type on 6.5" }, { "msg_contents": "Thus spake Tatsuo Ishii\n> I thought numeric data type on 6.5 allows a very large precision. Am I \n> missing something?\n[...]\n> test=> insert into t1 values(100000000000000000000000000000);\n> NOTICE: Integer input '100000000000000000000000000000' is out of range; promoted to float\n\nTry this.\ninsert into t1 values('100000000000000000000000000000'::numeric);\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Tue, 27 Apr 1999 05:37:26 -0400 (EDT)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] numeric data type on 6.5" }, { "msg_contents": ">> I thought numeric data type on 6.5 allows a very large precision. 
Am I \n>> missing something?\n>[...]\n>> test=> insert into t1 values(100000000000000000000000000000);\n>> NOTICE: Integer input '100000000000000000000000000000' is out of range; promoted to float\n>\n>Try this.\n>insert into t1 values('100000000000000000000000000000'::numeric);\n\nThanks. It definitely works!\n--\nTatsuo Ishii\n", "msg_date": "Tue, 27 Apr 1999 18:48:04 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] numeric data type on 6.5 " }, { "msg_contents": ">\n> >> I thought numeric data type on 6.5 allows a very large precision. Am I\n> >> missing something?\n> >[...]\n> >> test=> insert into t1 values(100000000000000000000000000000);\n> >> NOTICE: Integer input '100000000000000000000000000000' is out of range; promoted to float\n> >\n> >Try this.\n> >insert into t1 values('100000000000000000000000000000'::numeric);\n>\n> Thanks. It definitely works!\n\n insert into t1 values('100000000000000000000000000000');\n\n\n That one too.\n\n The problem is that the yacc parser already tries to convert\n it into an integer or float if you omit the quotes. I'll try\n to implement a NUMERIC fallback for this case for 6.6 and\n then have all the auto conversion functionality so NUMERIC,\n INTEGER and FLOAT can be used mixed.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Tue, 27 Apr 1999 12:05:49 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] numeric data type on 6.5" }, { "msg_contents": "> The problem is that the yacc parser already tries to convert\n> it into an integer or float if you omit the quotes. 
I'll try\n> to implement a NUMERIC fallback for this case for 6.6 and\n> then have all the auto conversion functionality so NUMERIC,\n> INTEGER and FLOAT can be used mixed.\n\nI'm looking at this right now. I had coded in a fallback to FLOAT8 for\nthe integer types because at the time that was the only other useful\nnumeric type. However, I'm going to try changing the code to leave a\nfailed INTx token as a string of unspecified type, which would be\ntyped and converted later using the automatic coersion mechanism.\n\nistm that this would be a no-brainer for v6.5, since it is just\nreplacing one heuristic with a more general and more correct one. And\nI had implemented both, so we know who to blame :)\n\nWill let y'all know how it goes...\n\n - Tom\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 27 Apr 1999 14:28:32 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] numeric data type on 6.5" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n> I'm looking at this right now. I had coded in a fallback to FLOAT8 for\n> the integer types because at the time that was the only other useful\n> numeric type. However, I'm going to try changing the code to leave a\n> failed INTx token as a string of unspecified type, which would be\n> typed and converted later using the automatic coersion mechanism.\n\nThat would be good as far as it goes, but what about cases with a\ndecimal point in 'em? 
Converting to float and then to numeric will\nlose precision.\n\nI'm inclined to think you should prevent the parser from converting\n*any* numeric constant out of string form until it knows the target data\ntype.\n\n(IIRC, INT8 has problems similar to NUMERIC's...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Apr 1999 10:59:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] numeric data type on 6.5 " }, { "msg_contents": "> > The problem is that the yacc parser already tries to convert\n> > it into an integer or float if you omit the quotes.\n> ... I'm going to try changing the code to leave a\n> failed INTx token as a string of unspecified type, which would be\n> typed and converted later using the automatic coersion mechanism.\n\nOK, this seems to work:\n\npostgres=> create table t1 (n numeric(20,0));\nCREATE\npostgres=> insert into t1 values ('10000000000000000000');\nINSERT 18552 1\npostgres=> insert into t1 values (20000000000000000000);\nINSERT 18553 1\npostgres=> select * from t1;\n n\n--------------------\n10000000000000000000\n20000000000000000000\n\npostgres=> select n * 5000000000000000000000000000000 from t1;\n---------------------------------------------------\n 50000000000000000000000000000000000000000000000000\n100000000000000000000000000000000000000000000000000\n\nBut, there are some cases which aren't transparent:\n\npostgres=> select 10000000000000000000000000*2;\nERROR: pg_atoi: error reading \"10000000000000000000000000\": Numerical\nresult out of range\npostgres=> select 10000000000000000000000000*2::numeric;\n--------------------------\n20000000000000000000000000\n\nAnd, if a long numeric string is entered, it actually stays a string\nall the way through (never being converted to anything internal):\n\npostgres=> select 
400000000000000000000000000000000000000000000000000;\n---------------------------------------------------\n400000000000000000000000000000000000000000000000000\n\nComments?\n\nbtw, I've got some float8->numeric conversion troubles on my\ni686/Linux box:\n\npostgres=> insert into t1 values ('30000000000000000000'::float8);\nINSERT 18541 1\npostgres=> select * from t1;\n n\n------------------\n 3\n\nAny ideas on this last one?\n\nI'm running from this morning's development tree...\n\n - Tom\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 28 Apr 1999 03:25:15 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] numeric data type on 6.5" }, { "msg_contents": "> btw, I've got some float8->numeric conversion troubles on my\n> i686/Linux box:\n> postgres=> insert into t1 values ('30000000000000000000'::float8);\n> INSERT 18541 1\n> postgres=> select * from t1;\n> n\n> ------------------\n> 3\n\nOK, so the problem is that large floats are printed using exponential\nnotation, and the float8->numeric conversion routine uses the\nfloat8out() routine to convert to a string in preparation for\ningestion as a numeric type. I've modified my copy of float8_numeric()\nto instead print directly into a (large!) buffer using the \"%f\"\nspecifier, to ensure that the string is always compatible with the\nnumeric reader:\n\npostgres=> create table t1 (f float8, n numeric(20,2), d\ndecimal(20,2));\nCREATE\npostgres=> insert into t1 values (300.1);\nINSERT 18641 1\npostgres=> insert into t1 values (300000000000000000);\nINSERT 18642 1\npostgres=> update t1 set n = f, d = f;\nUPDATE 2\npostgres=> select * from t1;\nf | n| d\n-----+---------------------+---------------------\n300.1| 300.10| 300.10\n3e+17|300000000000000000.00|300000000000000000.00\n(2 rows)\n\nThe float8_numeric() code already had checked for NULL and NaN, so I\nthink this does not lose functionality. 
What do you think Jan? Should\nI make the change? Or is there another way??\n\n - Tom\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Thu, 29 Apr 1999 15:21:51 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] numeric data type on 6.5" }, { "msg_contents": "Thomas Lockhart wrote:\n\n> The float8_numeric() code already had checked for NULL and NaN, so I\n> think this does not lose functionality. What do you think Jan? Should\n> I make the change? Or is there another way??\n\n Think it's O.K. - commit the changes.\n\n The other way would be to enhance the NUMERIC input function\n to read exponential notation. But I wouldn't do this now\n because I've planned to someday implement NUMERIC again from\n scratch.\n\n The current implementation has a packed storage format and\n the arithmetic operations are based on a character format\n (each digit is stored in one byte). After thinking about it\n I discovered that storing the value internally in short\n int's (16 bit) and base 10000 would have some advantages.\n\n 1. No need to pack/unpack storage format for computations.\n\n 2. One arithmetic operation in the innermost loops (only\n add/subtract are really implemented) mucks with 4 digits\n at a time.\n\n The disadvantages are small. Base 10000 to base 10 (decimal)\n conversion is easy to parse/print. Only rounding functions\n will be a little tricky. I think the speedup gained from\n adding/subtracting 4 digits per loop iteration will be worth\n the effort.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Thu, 29 Apr 1999 18:06:20 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] numeric data type on 6.5" }, { "msg_contents": "> > I'm looking at this right now. I had coded in a fallback to FLOAT8 for\n> > the integer types because at the time that was the only other useful\n> > numeric type. However, I'm going to try changing the code to leave a\n> > failed INTx token as a string of unspecified type, which would be\n> > typed and converted later using the automatic coersion mechanism.\n> That would be good as far as it goes, but what about cases with a\n> decimal point in 'em? Converting to float and then to numeric will\n> lose precision.\n> I'm inclined to think you should prevent the parser from converting\n> *any* numeric constant out of string form until it knows the target data\n> type.\n> (IIRC, INT8 has problems similar to NUMERIC's...)\n\nRight. Here is a patch which tries to do something right for most\ncases. For the \"integer\" token (numbers w/o a decimal point) it keeps\nthe token as a string if the conversion to int4 fails. I split the\n\"real\" token into \"decimal\" (w/o exponent) and \"real\"; at the moment\n\"decimal\" is forced to become a float8 if there are fewer than 18\ncharacters in the string, but there may be a more robust strategy to\nbe had.\n\nWhen a numeric token is kept as a string, the parser requires some\ntyping context to handle the string later, otherwise it will complain.\nBut that is probably better than silently swallowing numeric data and\npossibly mishandling it.\n\nSeems to do OK with numeric tokens of unspecified type which will\nbecome int8 and numeric in the parser. There may be some edge-effect\ncases (e.g. 
decimal data with 17 characters) which aren't quite right.\n\nComments?\n\n - Tom\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California", "msg_date": "Tue, 04 May 1999 14:19:45 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] numeric data type on 6.5" }, { "msg_contents": "> > > I'm looking at this right now. I had coded in a fallback to FLOAT8 for\n> > > the integer types because at the time that was the only other useful\n> > > numeric type. However, I'm going to try changing the code to leave a\n> > > failed INTx token as a string of unspecified type, which would be\n> > > typed and converted later using the automatic coersion mechanism.\n> > That would be good as far as it goes, but what about cases with a\n> > decimal point in 'em? Converting to float and then to numeric will\n> > lose precision.\n> > I'm inclined to think you should prevent the parser from converting\n> > *any* numeric constant out of string form until it knows the target data\n> > type.\n> > (IIRC, INT8 has problems similar to NUMERIC's...)\n> \n> Right. Here is a patch which tries to do something right for most\n> cases. For the \"integer\" token (numbers w/o a decimal point) it keeps\n> the token as a string if the conversion to int4 fails. I split the\n> \"real\" token into \"decimal\" (w/o exponent) and \"real\"; at the moment\n> \"decimal\" is forced to become a float8 if there are fewer than 18\n> characters in the string, but there may be a more robust strategy to\n> be had.\n\nThis seems like a perfect approach.\n\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 4 May 1999 13:23:09 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] numeric data type on 6.5" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n> Seems to do OK with numeric tokens of unspecified type which will\n> become int8 and numeric in the parser. There may be some edge-effect\n> cases (e.g. decimal data with 17 characters) which aren't quite right.\n> Comments?\n\nI'd suggest backing off one more place on the length of string you will\ntry to convert to a float8. Since the test is strlen() <= 17, you\nactually can have at most 16 digits (there must be a decimal point in\nthere too). But IEEE float is only good to 16-and-change digits; I'm\nnot sure I'd want to assume that the 16th digit will always be\nreproduced exactly. 15 digits would be safer.\n\nIt could still break if the C library's float<=>string conversion\nroutines are sloppy :-(. I suppose you're interested in preserving\nthe info that \"this constant looks numeric-ish\" to assist in type\nresolution heuristics? Otherwise the value could be left in string\nform till later.\n\nIs there any value in marking the constant as a numeric token, yet\nleaving its specific value as a string until after type resolution\nis done?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 05 May 1999 10:56:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] numeric data type on 6.5 " }, { "msg_contents": "> > Seems to do OK with numeric tokens of unspecified type which will\n> > become int8 and numeric in the parser. There may be some edge-effect\n> > cases (e.g. decimal data with 17 characters) which aren't quite right.\n> > Comments?\n> I'd suggest backing off one more place on the length of string you will\n> try to convert to a float8. 
Since the test is strlen() <= 17, you\n> actually can have at most 16 digits (there must be a decimal point in\n> there too). But IEEE float is only good to 16-and-change digits; I'm\n> not sure I'd want to assume that the 16th digit will always be\n> reproduced exactly. 15 digits would be safer.\n\nYeah. I'd chosen 17 to get sign+decimal+15digits...\n\n> It could still break if the C library's float<=>string conversion\n> routines are sloppy :-(. I suppose you're interested in preserving\n> the info that \"this constant looks numeric-ish\" to assist in type\n> resolution heuristics? Otherwise the value could be left in string\n> form till later.\n> Is there any value in marking the constant as a numeric token, yet\n> leaving its specific value as a string until after type resolution\n> is done?\n\nPossibly. I didn't think too hard about it, but had assumed that doing\nmuch more than I did would propagate back into the parser, which I\ndidn't want to tackle this close to release.\n\n - Tom\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Thu, 06 May 1999 06:32:07 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] numeric data type on 6.5" } ]
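The digit-count cutoff debated at the end of this thread is easy to demonstrate: an IEEE 754 double (53-bit mantissa) reproduces any 15-digit decimal integer exactly, but some 16-digit values already fall between representable doubles. A small sketch (hypothetical helper, integer strings only so that "%.0f" yields an exact decimal rendering):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Return 1 if the decimal integer string s is unchanged by a round
 * trip through double, 0 if precision was lost on the way. */
int survives_double(const char *s)
{
    char buf[64];
    double d = strtod(s, NULL);

    snprintf(buf, sizeof buf, "%.0f", d);
    return strcmp(buf, s) == 0;
}
```

15 digits always survive, while 2^53 + 1 = 9007199254740993 (16 digits) is the first integer a double cannot hold, which is why backing the strlen() cutoff down to sign + decimal point + 15 digits is the conservative choice Tom Lane suggests above.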
[ { "msg_contents": "I'm not sure about the time out. I'll check the specs.\n\nTalking about 6.5 final, when is the expected date (assuming no show\nstopping bugs)? The last date I had heard was May 1st.\n\nI'm way behind on a few fixes in JDBC (work is taking up my time :-( ),\nand it would be easier to know of a date to target with.\n\nPeter\n\n--\nPeter T Mount, IT Section\[email protected]\nAnything I write here are my own views, and cannot be taken as the\nofficial words of Maidstone Borough Council\n\n-----Original Message-----\nFrom: Constantin Teodorescu [mailto:[email protected]]\nSent: Tuesday, April 27, 1999 10:13 AM\nTo: PostgreSQL Interfaces\nSubject: [INTERFACES] JDBC and waiting for commit on a locked table in\n6.4.2\n\n\nHello,\n\nI'm using JDBC PostgreSQL with 6.4.2 database.\n\nI'm using the new code style (conn.setAutoCommit(false)).\n\nWhen the transaction is waiting for another transaction to terminate, is\nthere a way to set a timeout period, let's say a maximum of 30 seconds?\nIf the lock still persists after 30 seconds, can the statement.upDate()\nfail and throw a SQLException?\n\nOf course, without the classical watchdog thread object.\n\nThanks in advance,\n\n(BTW : I tried the same thing on 6.5 beta 1 and both my transactions\nhave committed without problem. Nice work that MVCC. I'm watching daily\nthe development waiting for the big day of 6.5 final)\n\nBest regards,\n-- \nConstantin Teodorescu\nFLEX Consulting Braila, ROMANIA\n", "msg_date": "Tue, 27 Apr 1999 10:52:06 +0100", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [INTERFACES] JDBC and waiting for commit on a locked table in\n\t6.4.2" }, { "msg_contents": "Peter Mount wrote:\n> \n> I'm not sure about the time out. I'll check the specs.\n> \n> Talking about 6.5 final, when is the expected date (assuming no show\n> stopping bugs)? 
The last date I had heared was May 1st.\n> \n> I'm way behind on a few fixes in JDBC (work is taking up my time :-( ),\n> and it would be easier to know of a date to target with.\n\nImho, Jun 1st is good date.\n\n?\n\nVadim\n", "msg_date": "Tue, 27 Apr 1999 18:52:41 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] JDBC and waiting for commit on a locked table\n in6.4.2" } ]
[ { "msg_contents": "I have committed a fix for _copyUnique(nodes/copyfuncs.c) suggested by\nHiroshi Inoue. It seems obvious to me and passed the regression tests.\n--\nTatsuo Ishii\n\n*** backend/nodes/copyfuncs.c.orig\tMon Apr 19 16:00:40 1999\n--- backend/nodes/copyfuncs.c\tMon Apr 26 13:34:26 1999\n***************\n*** 514,520 ****\n \t *\tcopy remainder of node\n \t * ----------------\n \t */\n! \tif (newnode->uniqueAttr)\n \t\tnewnode->uniqueAttr = pstrdup(from->uniqueAttr);\n \telse\n \t\tnewnode->uniqueAttr = NULL;\n--- 514,520 ----\n \t *\tcopy remainder of node\n \t * ----------------\n \t */\n! \tif (from->uniqueAttr)\n \t\tnewnode->uniqueAttr = pstrdup(from->uniqueAttr);\n \telse\n \t\tnewnode->uniqueAttr = NULL;\n\n", "msg_date": "Tue, 27 Apr 1999 18:59:03 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "fix for _copyUnique()" } ]
[ { "msg_contents": "That would be good for me, as it would give me enough time to sort out\nthe 4 things I'd like to fix before 6.5 is out.\n\nPeter\n\n--\nPeter T Mount, IT Section\[email protected]\nAnything I write here are my own views, and cannot be taken as the\nofficial words of Maidstone Borough Council\n\n\n-----Original Message-----\nFrom: Vadim Mikheev [mailto:[email protected]]\nSent: Tuesday, April 27, 1999 11:53 AM\nTo: Peter Mount\nCc: Constantin Teodorescu; PostgreSQL Interfaces; PostgreSQL Developers\nList\nSubject: Re: [INTERFACES] JDBC and waiting for commit on a locked table\nin6.4.2\n\n\nPeter Mount wrote:\n> \n> I'm not sure about the time out. I'll check the specs.\n> \n> Talking about 6.5 final, when is the expected date (assuming no show\n> stopping bugs)? The last date I had heared was May 1st.\n> \n> I'm way behind on a few fixes in JDBC (work is taking up my time :-(\n),\n> and it would be easier to know of a date to target with.\n\nImho, Jun 1st is good date.\n\n?\n\nVadim\n", "msg_date": "Tue, 27 Apr 1999 12:28:35 +0100", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [INTERFACES] JDBC and waiting for commit on a locked table in\n\t6.4.2" } ]
[ { "msg_contents": "> Hi all,\n>\n> I'm trying numeric & decimal types in v6.5beta1 and I have two questions\n> about it.\n>\n> [...]\n>\n> Second question:\n> Why PostgreSQL allows to insert 14 digits into a numeric(5,1) ?\n>\n> create table test(\n> n numeric(10,3),\n> d decimal(5,1)\n> );\n\n For some reason (dunno why) the parser ignores the precision\n for DECIMAL. atttypmod is set hardcoded to -1. So the above\n is identical to a\n\n CREATE TABLE test (n numeric(10,3), d decimal);\n\n I'll test what happens if I enable it in gram.y and if it\n doesn't break any regression commit the changes.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Tue, 27 Apr 1999 13:40:47 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] numeric & decimal" }, { "msg_contents": "I wrote:\n\n> For some reason (dunno why) the parser ignores the precision\n> for DECIMAL. atttypmod is set hardcoded to -1. So the above\n> is identical to a\n>\n> CREATE TABLE test (n numeric(10,3), d decimal);\n>\n> I'll test what happens if I enable it in gram.y and if it\n> doesn't break any regression commit the changes.\n\n It doesn't and I did.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Tue, 27 Apr 1999 15:37:45 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] numeric & decimal" }, { "msg_contents": "> For some reason (dunno why) the parser ignores the precision\n> for DECIMAL. atttypmod is set hardcoded to -1. So the above\n> is identical to a\n> \n> CREATE TABLE test (n numeric(10,3), d decimal);\n> \n> I'll test what happens if I enable it in gram.y and if it\n> doesn't break any regression commit the changes.\n\nIn the old days, we couldn't handle precision, so we ignored it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 27 Apr 1999 12:21:55 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] numeric & decimal" }, { "msg_contents": "Jan Wieck ha scritto:\n\n> > Hi all,\n> >\n> > I'm trying numeric & decimal types in v6.5beta1 and I have two questions\n> > about it.\n> >\n> > [...]\n> >\n> > Second question:\n> > Why PostgreSQL allows to insert 14 digits into a numeric(5,1) ?\n> >\n> > create table test(\n> > n numeric(10,3),\n> > d decimal(5,1)\n> > );\n>\n> For some reason (dunno why) the parser ignores the precision\n> for DECIMAL. atttypmod is set hardcoded to -1. 
So the above\n> is identical to a\n>\n> CREATE TABLE test (n numeric(10,3), d decimal);\n>\n> I'll test what happens if I enable it in gram.y and if it\n> doesn't break any regression commit the changes.\n>\n> Jan\n>\n\nGreat!\nI have other questions about NUMERICs:\n\n> create table test(\n> num0 numeric,\n> num1 numeric(1),\n> num4 numeric(4,1)\n> );\n> CREATE\n> insert into test values (11111111,11111111,-9,9,-999.99,-999.99);\n> INSERT 78190 1\n> select * from test;\n> num0|num1| num4\n> ---------------+----+-------\n> 11111111.000000| 9|-1000.0\n> ^^^^^^ ^^^^^^^\n\n- I don't understand this default:\n NUMERIC without size is interpreted as NUMERIC(x,6). Why ?\n Standard SQL92 says that NUMERIC without size is equivalent to NUMERIC(1)\n\n- NUMERIC(4,1) translates value -999.99 as -1000.0 (greater than its size)\n\nComments?\n\nJosé\n", "msg_date": "Wed, 28 Apr 1999 15:02:36 +0200", "msg_from": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] numeric & decimal" }, { "msg_contents": "\nThis looks like something that should be addressed. Was it?\n\n\n\n[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> Jan Wieck ha scritto:\n> \n> > > Hi all,\n> > >\n> > > I'm trying numeric & decimal types in v6.5beta1 and I have two questions\n> > > about it.\n> > >\n> > > [...]\n> > >\n> > > Second question:\n> > > Why PostgreSQL allows to insert 14 digits into a numeric(5,1) ?\n> > >\n> > > create table test(\n> > > n numeric(10,3),\n> > > d decimal(5,1)\n> > > );\n> >\n> > For some reason (dunno why) the parser ignores the precision\n> > for DECIMAL. atttypmod is set hardcoded to -1. 
So the above\n> > is identical to a\n> >\n> > CREATE TABLE test (n numeric(10,3), d decimal);\n> >\n> > I'll test what happens if I enable it in gram.y and if it\n> > doesn't break any regression commit the changes.\n> >\n> > Jan\n> >\n> \n> Great!\n> I have other questions about NUMERICs:\n> \n> > create table test(\n> > num0 numeric,\n> > num1 numeric(1),\n> > num4 numeric(4,1)\n> > );\n> > CREATE\n> > insert into test values (11111111,11111111,-9,9,-999.99,-999.99);\n> > INSERT 78190 1\n> > select * from test;\n> > num0|num1| num4\n> > ---------------+----+-------\n> > 11111111.000000| 9|-1000.0\n> > ^^^^^^ ^^^^^^^\n> \n> - I don't understand this default:\n> NUMERIC without size is interpreted as NUMERIC(x,6). Why ?\n> Standard SQL92 says that NUMERIC without size is equivalent to NUMERIC(1)\n> \n> - NUMERIC(4,1) translates value -999.99 as -1000.0 (greater than its size)\n> \n> Comments?\n> \n> José\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 May 1999 12:37:04 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] numeric & decimal" }, { "msg_contents": "Bruce Momjian wrote:\n>\n>\n> This looks like something that should be addressed. Was it?\n>\n>\n>\n> > > For some reason (dunno why) the parser ignores the precision\n> > > for DECIMAL. atttypmod is set hardcoded to -1. So the above\n> > > is identical to a\n> > >\n> > > CREATE TABLE test (n numeric(10,3), d decimal);\n> > >\n> > > I'll test what happens if I enable it in gram.y and if it\n> > > doesn't break any regression commit the changes.\n\n This one is fixed. Parser handles precision of DECIMAL\n already.\n\n> > NUMERIC without size is interpreted as NUMERIC(x,6). 
Why ?\n> > Standard SQL92 says that NUMERIC without size is equivalent to NUMERIC(1)\n\n PostgreSQL specific. Should I change it to standard?\n\n> >\n> > - NUMERIC(4,1) translates value -999.99 as -1000.0 (greater than its size)\n\n Definitely a bug. The value is checked before the rounding.\n Will fix it soon.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Mon, 10 May 1999 18:51:02 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] numeric & decimal" }, { "msg_contents": "> > > NUMERIC without size is interpreted as NUMERIC(x,6). Why ?\n> > > Standard SQL92 says that NUMERIC without size is equivalent\n> > > to NUMERIC(1)\n> PostgreSQL specific. Should I change it to standard?\n\nThe standard (per Date's book) is:\n\n NUMERIC == NUMERIC(p), where p is implementation-defined.\n NUMERIC(p) == NUMERIC(p,0)\n\nDate also explicitly says that:\n\n \"The following are implementation-defined:\n ...\n o The default precision for NUMERIC and DECIMAL if there is no\n declared precision\n ...\"\n\nSo where did NUMERIC(1) come from? afaict Jan should use what he feels\nare reasonable values...\n\n - Tom\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Mon, 10 May 1999 17:58:30 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] numeric & decimal" }, { "msg_contents": ">\n> > > > NUMERIC without size is interpreted as NUMERIC(x,6). Why ?\n> > > > Standard SQL92 says that NUMERIC without size is equivalent\n> > > > to NUMERIC(1)\n> > PostgreSQL specific. 
Should I change it to standard?\n>\n> The standard (per Date's book) is:\n>\n> NUMERIC == NUMERIC(p), where p is implementation-defined.\n> NUMERIC(p) == NUMERIC(p,0)\n>\n> Date also explicitly says that:\n>\n> \"The following are implementation-defined:\n> ...\n> o The default precision for NUMERIC and DECIMAL if there is no\n> declared precision\n> ...\"\n>\n> So where did NUMERIC(1) come from? afaict Jan should use what he feels\n> are reasonable values...\n\n The default for NUMERIC is NUMERIC(30,6). NUMERIC(n) is\n treated as NUMERIC(n,0). So it is exactly as Date says and\n since it is already released, nothing to get changed -\n period.\n\n If someone wants his installation to act differently, the\n place to do it is include/numeric.h where\n NUMERIC_DEFAULT_PRECISION and NUMERIC_DEFAULT_SCALE define\n the two values.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Mon, 10 May 1999 20:27:27 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] numeric & decimal" }, { "msg_contents": "\nCool, item removed.\n\n> > > > NUMERIC without size is interpreted as NUMERIC(x,6). Why ?\n> > > > Standard SQL92 says that NUMERIC without size is equivalent\n> > > > to NUMERIC(1)\n> > PostgreSQL specific. Should I change it to standard?\n> \n> The standard (per Date's book) is:\n> \n> NUMERIC == NUMERIC(p), where p is implementation-defined.\n> NUMERIC(p) == NUMERIC(p,0)\n> \n> Date also explicitly says that:\n> \n> \"The following are implementation-defined:\n> ...\n> o The default precision for NUMERIC and DECIMAL if there is no\n> declared precision\n> ...\"\n> \n> So where did NUMERIC(1) come from? 
afaict Jan should use what he feels\n> are reasonable values...\n> \n> - Tom\n> \n> -- \n> Thomas Lockhart\t\t\t\[email protected]\n> South Pasadena, California\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 May 1999 14:38:26 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] numeric & decimal" }, { "msg_contents": "> > > - NUMERIC(4,1) transalte value -999.99 as -1000.0 (greater than his size)\n>\n> Definitely a bug. The value is checked before the rounding.\n> Will fix it soon.\n\n Fixed - value is now checked again after rounding.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Mon, 10 May 1999 20:43:27 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] numeric & decimal" }, { "msg_contents": "> > NUMERIC == NUMERIC(p), where p is implementation-defined.\n> > NUMERIC(p) == NUMERIC(p,0)\n> > \"The following are implementation-defined:\n> > o The default precision for NUMERIC and DECIMAL if there is no\n> > declared precision\n> The default for NUMERIC is NUMERIC(30,6). NUMERIC(n) is\n> treated as NUMERIC(n,0). So it is exactly as Date says and\n> since it is already released, nothing to get changed -\n> period.\n\nI may be misinterpreting Date's synopsis, but I believe that the\ndefault decimal location should be zero, rather than 6. 
The\n\"precision\" terminology is from SQL92, and refers to the total number\nof digits, not the position of the decimal point (as one might\nreasonably expect from the usual usage of the word).\n\nImplementation flexibility is allowed in the default total number of\ndigits, not the default location of the decimal point.\n\nRegards.\n\n - Tom\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 11 May 1999 03:03:23 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] numeric & decimal" } ]
[ { "msg_contents": "Sorry for the repost, but I realized that the subject did not reflect\nwhat the problem was.\n\n***\nTRAP: Bad Argument to Function Call(\"!(AllocSetContains(set,\npointer)):\", File:\n\"aset.c\", Line: 292)\n\n!(AllocSetContains(set, pointer)) (0) [No such file or directory]\n***\n\n(ultra5, Solaris 7, cc: WorkShop Compilers 5.0 98/12/15 C 5.0, jdk1.2.1)\n\nI've configured the pgsql as:\n\n./configure --prefix=/opt/pgsql \\\n --with-tcl \\\n --with-includes=/opt/tcl_tk/include \\\n --with-tclconfig=/opt/tcl_tk/lib \\\n --with-template=solaris_sparc_cc \\\n --with-CC=cc \\\n --enable-cassert \\\n --with-perl\n\nThen started postmaster as:\n su bpm -c \"${PGSQLHOME}/bin/postmaster -i -d -D ${PGDATA} 2>&1 >\n${PGDATA}/trace.log\"\n\nThen ran a Java application to retrieve blobs from a database that I\ncreated and\npopulated. Table def is:\nCREATE TABLE item (item_num int PRIMARY KEY, item_picture oid,\nitem_descr text,\nship_unit varchar(15), unit_price money, stock int)\n\npsql yields:\n\nmini_stores=> select * from item;\nitem_num|item_picture|item_descr\n|ship_unit |unit_price|stock\n--------+------------+--------------------------------------------------------------+----------+----------+-----\n\n 1| 18730|Maximum protection for high-mileage\nrunners |pair |$75.50 | 1000\n 2| 18745|Customize your mountain bike with extra-durable\ncrankset |each |$20.00 | 500\n 3| 18762|Long drive golf balls -fluorescent\nyellow |pack of 12|$50.00 | 200\n 4| 18780|Your first season's baseball\nglove |pair |$25.00 | 250\n 5| 18796|Minimum chin contact, feather-light, maximum\nprotection\nhelmet|each |$35.50 | 50\n(5 rows)\n\n\nThe Java app is giving errors, so I compiled pgsql with ASSERT_CHECKING\nenabled,\nbut I get the following error:\n\nTRAP: Bad Argument to Function Call(\"!(AllocSetContains(set,\npointer)):\", File:\n\"aset.c\", Line: 292)\n\n!(AllocSetContains(set, pointer)) (0) [No such file or directory]\n/opt/pgsql/bin/postmaster: reaping dead 
processes...\n/opt/pgsql/bin/postmaster: CleanupProc: pid 27783 exited with status 134\n\n/opt/pgsql/bin/postmaster: CleanupProc: reinitializing shared memory and\n\nsemaphores\n\n\nDo I need to configure something else?\n\nThe error, 134 is\nvlad: checkERR 132\n#define ENOBUFS 132 /* No buffer space available */\n\nThanks.\n\n--\nBrian Millett\nEnterprise Consulting Group \"Heaven can not exist,\n(314) 205-9030 If the family is not eternal\"\[email protected] F. Ballard Washburn\n\n\n\n", "msg_date": "Tue, 27 Apr 1999 08:49:06 -0500", "msg_from": "Brian P Millett <[email protected]>", "msg_from_op": true, "msg_subject": "TRAP: Bad Argument to Function Call" } ]
[ { "msg_contents": "Hello,\n\nthis night we discovered here a strange behaviour on our servers. Somebody\nmanaged to get access to the UNIX shell using the 'postgres' db\nadministrator account. He logged in some machines with a single try ! The\npassword was not part of any dictionary. He tried some other accounts,\nwithout success. Under the user postgres he installed an 'eggdrop' program\non the machine, implementing an IRC server.\n\nIf you want to look on your servers, look for an \".elm/...\" directory in\nthe postgres home directory. You may discover too some processes named\n\"./...\" or \"../ -m\" running under the postgres user.\n\nIs there any chance that the postgres database contains a bug giving\nshell access ? Is there any chance to trace what happens on the postgres\nport ?\n\nMatthias Schmitt\n------------------------------------------------------------------\nMatthias Schmitt\nmagic moving pixel s.a. Phone: +352 54 75 75 - 0\nTechnoport Schlassgoart Fax : +352 54 75 75 - 54\n66, rue de Luxembourg URL : http://www.mmp.lu\nL-4221 Esch-sur-Alzette Email: [email protected]\n", "msg_date": "Tue, 27 Apr 1999 18:22:33 +0200", "msg_from": "Matthias Schmitt <[email protected]>", "msg_from_op": true, "msg_subject": "Hacker found bug in Postgres ?" }, { "msg_contents": "> Hello,\n> \n> this night we discovered here a strange behaviour on our servers. Somebody\n> managed to get access to the UNIX shell using the 'postgres' db\n> administrator account. He logged in some machines with a single try ! The\n> password was not part of any dictionary. He tried some other accounts,\n> without success. Under the user postgres he installed an 'eggdrop' program\n> on the machine, implementing an IRC server.\n> \n> If you want to look on your servers, look for an \".elm/...\" directory in\n> the postgres home directory. 
You may discover too some processes named\n> \"./...\" or \"../ -m\" running under the postgres user.\n> \n> Is there any chanche, that the postgres database contains a bug giving\n> shell access ? Is there any chance to trace what happens on the postgres\n> port ?\n\nObviously a serious issue here.\n\nThis is the first time in 2.8 years I have heard any security\nproblem reported about PostgreSQL. There may be some problem, but I\nknow of no known security problems. Because PostgreSQL is\nclient/server, client processes run as normal users, and the backends\nrun as postgres, and there is no way I know of for a normal user to have\na backend run arbitrary code as the postgres user.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 27 Apr 1999 12:35:39 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Hacker found bug in Postgres ?" }, { "msg_contents": "On Tue, 27 Apr 1999, Matthias Schmitt wrote:\n\n> Hello,\n> \n> this night we discovered here a strange behaviour on our servers. Somebody\n> managed to get access to the UNIX shell using the 'postgres' db\n> administrator account. He logged in some machines with a single try ! The\n> password was not part of any dictionary. He tried some other accounts,\n> without success. Under the user postgres he installed an 'eggdrop' program\n> on the machine, implementing an IRC server.\n> \n> If you want to look on your servers, look for an \".elm/...\" directory in\n> the postgres home directory. You may discover too some processes named\n> \"./...\" or \"../ -m\" running under the postgres user.\n> \n> Is there any chanche, that the postgres database contains a bug giving\n> shell access ? 
Is there any chance to trace what happens on the postgres\n> port ?\n\nIs it possible the intruder guessed the password on the postgres\nadministrator's account? Or perhaps a script run via mail?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Tue, 27 Apr 1999 13:00:04 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Hacker found bug in Postgres ?" }, { "msg_contents": ">Is it possible the intruder guessed the password on the postgres\n>administrator's account?\n\nNo, I don't think so. The password was really complicated and had special\ncharacters inside of it. Its content had nothing to do with us or the\npostgres database.\n\n>Or perhaps a script run via mail?\n\nWe have nothing installed, which would enable a behaviour like this, at\nleast not as far as we know. I cannot imagine, that something like this is\npart of a standard Linux distribution.\n\nMatthias\n------------------------------------------------------------------\nMatthias Schmitt\nmagic moving pixel s.a. Phone: +352 54 75 75 - 0\nTechnoport Schlassgoart Fax : +352 54 75 75 - 54\n66, rue de Luxembourg URL : http://www.mmp.lu\nL-4221 Esch-sur-Alzette Email: [email protected]\n", "msg_date": "Tue, 27 Apr 1999 19:07:10 +0200", "msg_from": "Matthias Schmitt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Hacker found bug in Postgres ?" }, { "msg_contents": "Matthias Schmitt <[email protected]> writes:\n> this night we discovered here a strange behaviour on our servers. 
Somebody\n> managed to get access to the UNIX shell using the 'postgres' db\n> administrator account. He logged in some machines with a single try !\n\nUgh. Depressing news, if accurate. But you should not rule out the\npossibility that the security failure was elsewhere.\n\nWhat version of Postgres are you running? (6.4 and later are inherently\nmore secure than prior releases, since they don't do an exec() while\nforking a backend server process.)\n\nAfter a few minutes' thought, the only attack paths that come to mind\nrequire access to postgres superuser rights. (For example, \"COPY TO\nfilename\" could potentially overwrite any file writable by the postgres\nuserid, but that operation is only allowed to a database user who's\nlogged in as the postgres superuser.) Do you have access permissions\nset up to ensure that an unguessable password must be supplied to\nlog into Postgres as superuser?\n\nAs a short-term defense until you know exactly what happened, I'd\nsuggest modifying Postgres' pg_hba.conf file to restrict access\nas much as possible. In particular the Postgres superuser should\nonly be allowed to log in from trustworthy local machines.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Apr 1999 02:00:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Hacker found bug in Postgres ? " } ]
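Tom Lane's short-term advice in this thread is to lock down pg_hba.conf so the Postgres superuser can only connect from trustworthy hosts. A sketch of what such a file might look like follows; the host names, addresses, and masks are illustrative assumptions (not taken from any of the machines discussed), and the column layout follows the 6.4/6.5-era pg_hba.conf format of record type, database, IP address, address mask, and authentication method.

```
# Illustrative pg_hba.conf sketch (addresses are placeholders):
# require a password locally and from one trusted LAN host,
# and reject everything else.
local   all                                       password
host    all   127.0.0.1      255.255.255.255      password
host    all   192.168.97.2   255.255.255.255      password
host    all   0.0.0.0        0.0.0.0              reject
```

The final catch-all entry simply makes the default-deny intent explicit; entries are matched top to bottom, so trusted hosts must appear before it.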
[ { "msg_contents": ">Is it possible the intruder guessed the password on the postgres\n>administrator's account?\n\nNo, I don't think so. The password was really complicated and had special\ncharacters inside of it. Its content had nothing to do with us or the\npostgres database.\n\n>Or perhaps a script run via mail?\n\nWe have nothing installed, which would enable a behaviour like this, at\nleast not as far as we know. I cannot imagine, that something like this is\npart of a standard Linux distribution.\n\nMatthias\n------------------------------------------------------------------\nMatthias Schmitt\nmagic moving pixel s.a. Phone: +352 54 75 75 - 0\nTechnoport Schlassgoart Fax : +352 54 75 75 - 54\n66, rue de Luxembourg URL : http://www.mmp.lu\nL-4221 Esch-sur-Alzette Email: [email protected]\n", "msg_date": "Tue, 27 Apr 1999 19:09:55 +0200", "msg_from": "Matthias Schmitt <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Hacker found bug in Postgres ?" } ]
[ { "msg_contents": "I refreshed my 6.5 source last night (last time was early April). This had\nsome patches to case and coalesce that I needed. This is working great\nexcept for the following two areas:\n\n1)\tThe \\g option in pgsql is failing. I have been using this technique\nfor months to create a script file from an SQL statement and then execute\nthe file. Here are the steps that worked before I pulled down code last\nnight:\n\n\t\t\t\\g sip_revoke_all.sql\n\t\t\tunexpected character Z following 'I'\n\n\t\t\tOnce the above error occurs, the script will hang\nuntil I manually break (ctrl-c).\n\n2)\tMy Access97 application using ODBC can't connect to the database.\nNothing has changed other than rebuilding the source. In attempts to fix\nthis, I destroyed my databases, ran rm $PGDATA, and performed initdb. I have\nverified that pg_hba.conf is correct before and after recreating my\ndatabases. The Psqlodbc.log log file is below. I can log onto the\ndatabase using psql on my linux workstation just fine. \n\nHas anything changed in the code recently (within the past 3 weeks) that\ncould have had some effect on these two conditions? I am at a complete\nstandstill until I can get my ODBC issue resolved. 
I can work around item 1.\n\nThanks Michael\n\nPsqlodbc.log file:\n\nconn=159330904, SQLDriverConnect( in)='DSN=PostgreSQL;', fDriverCompletion=1\nDSN info:\nDSN='PostgreSQL',server='192.168.97.2',port='5432',dbase='mp',user='postgres\n',passwd=''\n \nreadonly='0',protocol='6.4',showoid='0',fakeoidindex='0',showsystable='0'\n conn_settings=''\n translation_dll='',translation_option=''\nGlobal Options: Version='06.40.0004', fetch=100, socket=4096,\nunknown_sizes=0, max_varchar_size=254, max_longvarchar_size=4094\n disable_optimizer=1, ksqo=1, unique_index=1,\nuse_declarefetch=1\n text_as_longvarchar=1, unknowns_as_longvarchar=0,\nbools_as_char=1\n extra_systable_prefixes='dd_;', conn_settings=''\nconn=159330904, query=' '\nCONN ERROR: func=SQLDriverConnect, desc='Error from CC_Connect', errnum=105,\nerrmsg='The database does not exist on the server\nor user authentication failed.'\n ------------------------------------------------------------\n henv=159330888, conn=159330904, status=0, num_stmts=16\n sock=159337240, stmts=159345496, lobj_type=-999\n ---------------- Socket Info -------------------------------\n socket=360, reverse=0, errornumber=0, errormsg='(null)'\n buffer_in=159337288, buffer_out=159341392\n buffer_filled_in=2, buffer_filled_out=0, buffer_read_in=2\n\n", "msg_date": "Tue, 27 Apr 1999 16:31:08 -0500", "msg_from": "Michael J Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Issues with the latest 6.5 source" }, { "msg_contents": "Michael J Davis <[email protected]> writes:\n> 1)\tThe \\g option in pgsql is failing.\n\nI'll look at that this evening --- I was fooling with psql's command\nparsing a couple days ago, might have busted it.\n\n> 2)\tMy Access97 application using ODBC can't connect to the database.\n\nDunno about this one.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Apr 1999 10:02:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Issues with the latest 6.5 
source " }, { "msg_contents": "Michael J Davis <[email protected]> writes:\n> 1)\tThe \\g option in pgsql is failing.\n\nFixed. I had managed to break the empty-query response protocol on\nSunday --- odd that none of the regression tests detected this. Sigh.\n\n> 2)\tMy Access97 application using ODBC can't connect to the database.\n\nI now have a strong suspicion that this is caused by the same goof.\nODBC might issue an empty query during startup. (That used to be a\nnecessary part of the startup protocol; it isn't anymore, but ODBC\nvery possibly hasn't been changed.)\n\nIf you don't have CVS access and don't want to wait for tonight's\nsnapshot, here is the patch:\n\n*** src/backend/tcop/dest.c~\tWed Apr 28 18:15:07 1999\n--- src/backend/tcop/dest.c\tWed Apr 28 18:15:45 1999\n***************\n*** 336,342 ****\n \t\t\t *\t\ttell the fe that we saw an empty query string\n \t\t\t * ----------------\n \t\t\t */\n! \t\t\tpq_putbytes(\"I\", 1);\n \t\t\tbreak;\n \n \t\tcase Local:\n--- 336,342 ----\n \t\t\t *\t\ttell the fe that we saw an empty query string\n \t\t\t * ----------------\n \t\t\t */\n! \t\t\tpq_putbytes(\"I\", 2); /* note we send I and \\0 */\n \t\t\tbreak;\n \n \t\tcase Local:\n\n\nLet us know if that helps...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Apr 1999 18:29:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Issues with the latest 6.5 source " } ]
[ { "msg_contents": "Hi,\n\nWe are using LabVIEW 3.1.1 on a Windows 3.1 PC computer (and we didn't\nplan to upgrade to LabVIEW 5.1). We are ready to buy the \"SQL Toolkit\"\n(that will work for this configuration) and its licence from anyone, as N.I\nis no longer able to sell it !\n\nIf you can help us, please tell me : Jeremie.Voix(a)gme.usherb.ca\n\n\n\n", "msg_date": "Tue, 27 Apr 1999 17:39:02 -0400", "msg_from": "\"J.V.\" <[email protected]>", "msg_from_op": true, "msg_subject": "Looking for an old LabVIEW SQL toolkit licence..." } ]
[ { "msg_contents": "I am getting the following error from a CREATE RULE statement:\n\n\tERROR: DefineQueryRewrite: rule plan string too big.\n \nThis statement is only 377 characters in length. What are the limits\nto the size of rules? Are they changeable at compile time? Is this\nthe same as a \"normal\" tuple (i.e., 8K)? If so, how do 377 characters\nexpand to over 8K?\n\nThanks for your help.\n\nCheers,\nBrook\n", "msg_date": "Tue, 27 Apr 1999 22:32:59 -0600 (MDT)", "msg_from": "Brook Milligan <[email protected]>", "msg_from_op": true, "msg_subject": "rule plan string too big?" } ]
[ { "msg_contents": "I have created a table/view/rule combination (see script below) that\nshould enable insertion into the view in a manner that pretty much\nparallels Jan's documentation of the rule system. I think the only\nfeature that differs is that the underlying table should maintain a\nunique combination of fields. As a result, the insert rule has been\nmodified from the docs to try to prevent insertion if the combination\nalready exists in the table. A unique index can be added to the table\nas well, but that does not affect the bug I think I've uncovered.\n\nAll works well when individual INSERT commands are used; even\nduplicates are silently ignored as expected.\n\nIf I use an INSERT INTO ... SELECT to do the insertion (again with\nduplicates), however, I get one of two responses depending on whether\nor not there is a unique index on the underlying table:\n\n- no unique index: all duplicates get inserted into the table, an\n indication that the condition imposed within the rule is not being\n obeyed.\n\n- with a unique index: the error message below occurs and nothing is\n inserted into the table, again an indication that the condition is\n not being obeyed.\n\n\tERROR: Cannot insert a duplicate key into a unique index\n\nClearly, something different (and incorrect) occurs for INSERT INTO\n... SELECT compared with just INSERT.\n\nIf the same rules are being used, why are the duplicates ignored for\nINSERT but not for INSERT INTO ... SELECT? 
Is this a bug in the rule\nsystem or in my rules?\n\nThanks for your help.\n\nCheers,\nBrook\n\n===========================================================================\ndrop sequence survey_data_id_seq;\ndrop table survey_data;\ncreate table survey_data\n(\n id\t\tserial,\n survey_date\tdate\t\tnot null,\n name\t\ttext\t\tnot null\n--, unique (survey_date, name)\t\t\t-- uncomment to induce \"duplicate key\" errors\n);\n\ndrop view surveys;\ncreate view surveys as\nselect id, survey_date, name from survey_data;\n\ncreate rule surveys_ins as on insert to surveys\ndo instead\ninsert into survey_data (survey_date, name)\n\tselect new.survey_date, new.name where not exists\n\t(select * from survey_data d where d.survey_date = new.survey_date and d.name = new.name);\n\ninsert into surveys (survey_date, name) values ('1999-02-14', 'Me');\ninsert into surveys (survey_date, name) values ('1999-02-15', 'Me');\ninsert into surveys (survey_date, name) values ('1999-02-14', 'You');\ninsert into surveys (survey_date, name) values ('1999-02-14', 'You');\t-- ignored by rule\ninsert into surveys (survey_date, name) values ('1999-02-15', 'You');\ninsert into surveys (survey_date, name) select '1999-02-15', 'You';\t-- ignored by rule\n\nselect * from surveys order by survey_date, name;\ndelete from survey_data;\n\ndrop table X;\ncreate table X\n(\n survey_date\tdate,\n name\t\ttext\n);\n\ninsert into X (survey_date, name) values ('1999-02-14', 'Me');\ninsert into X (survey_date, name) values ('1999-02-15', 'Me');\ninsert into X (survey_date, name) values ('1999-02-14', 'You');\ninsert into X (survey_date, name) values ('1999-02-14', 'You');\t\t-- NOT ignored by rule\ninsert into X (survey_date, name) values ('1999-02-15', 'You');\ninsert into X (survey_date, name) values ('1999-02-15', 'You');\t\t-- NOT ignored by rule\n\n-- if unique index on underlying table, then none of these inserts succeed\n-- otherwise all of them do, including the duplicates\ninsert into surveys 
(survey_date, name) select survey_date, name from X;\ndrop table X;\n\nselect * from surveys order by survey_date, name;\n", "msg_date": "Wed, 28 Apr 1999 00:54:45 -0600 (MDT)", "msg_from": "Brook Milligan <[email protected]>", "msg_from_op": true, "msg_subject": "rules bug?" } ]
[ { "msg_contents": "Hello!\n\n Long awaited rumor that EGCS will become GCC someday... Finally it\nhappened. RMS announces it: http://lwn.net/daily/gcc.html\n Is Postgres ready for EGCS, I wonder? I remember people had some troubles\nwith EGCS. Anyone using EGCS now?\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Wed, 28 Apr 1999 11:30:08 +0400 (MSD)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": true, "msg_subject": "EGCS becomes GCC" }, { "msg_contents": "I've been using egcs for over a year and only one time I had a problem with compiling\nearly cvs postgres 6.5. But I suspect it was an egcs problem and after\nupgrading to a new snapshot of egcs I never got a problem. Right now I use\n1.1.2 release under linux x86 and postgres (6.4.2, 6.5cvs) without \nany problem.\n\n\tOleg\nOn Wed, 28 Apr 1999, Oleg Broytmann wrote:\n\n> Date: Wed, 28 Apr 1999 11:30:08 +0400 (MSD)\n> From: Oleg Broytmann <[email protected]>\n> Reply-To: [email protected]\n> To: PostgreSQL-development <[email protected]>\n> Subject: [HACKERS] EGCS becomes GCC\n> \n> Hello!\n> \n> Long awaited rumor that EGCS will become GCC someday... Finally it\n> happened. RMS announces it: http://lwn.net/daily/gcc.html\n> Is Postgres ready for EGCS, I wonder? I remember people had some troubles\n> with EGCS. 
Anyone using EGCS now?\n> \n> Oleg.\n> ---- \n> Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n> Programmers don't die, they just GOSUB without RETURN.\n> \n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Wed, 28 Apr 1999 12:38:44 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] EGCS becomes GCC" }, { "msg_contents": "On Wed, Apr 28, 1999 at 11:30:08AM +0400, Oleg Broytmann wrote:\n> Long awaited rumor that EGCS will become GCC someday... Finally it\n> happened. RMS announces it: http://lwn.net/daily/gcc.html\n> Is Postgres ready for EGCS, I wonder? I remember people had some troubles\n> with EGCS. Anyone using EGCS now?\n\nI've been using it for quite some time without a problem. Version: 1.1.2/2.91.66\non Linux 2.2.6.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n", "msg_date": "Wed, 28 Apr 1999 20:14:18 +0200", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] EGCS becomes GCC" } ]
[ { "msg_contents": "> Hello!\n> \n> Long awaited rumor that EGCS will become GCC someday... Finally it\n> happened. RMS announces it: http://lwn.net/daily/gcc.html\n> Is Postgres ready for EGCS, I wonder? I remember people had some troubles\n> with EGCS. Anyone using EGCS now?\n\nI'm testing EGCS 1.1.2 on HPUX 10.20 and 11.00. I'll let you know what I find \nout. (Too much to do ... too little time :()\n\n-Ryan\n", "msg_date": "Wed, 28 Apr 1999 01:36:16 -0600 (MDT)", "msg_from": "Ryan Bradetich <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] EGCS becomes GCC" }, { "msg_contents": "\nOn 28-Apr-99 Ryan Bradetich wrote:\n>> Hello!\n>> \n>> Long awaited rumor that EGCS will become GCC someday... Finally it\n>> happened. RMS announces it: http://lwn.net/daily/gcc.html\n>> Is Postgres ready for EGCS, I wonder? I remember people had some troubles\n>> with EGCS. Anyone using EGCS now?\n> \n> I'm testing EGCS 1.1.2 on HPUX 10.20 and 11.00. I'll let you know what I\n> find \n> out. (Too much to do ... too little time :()\n> \n\nI'm using egcs on FreeBSD 2.2.x, 3.x and Solaris x86.\nIMHO, It's quite better than gcc 2.[78].x\n\n\n---\nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n", "msg_date": "Wed, 28 Apr 1999 11:54:04 +0400 (MSD)", "msg_from": "Dmitry Samersoff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] EGCS becomes GCC" } ]
[ { "msg_contents": "> create rule surveys_ins as on insert to surveys\n> do instead\n> insert into survey_data (survey_date, name)\n> \tselect new.survey_date, new.name where not exists\n> \t(select * from survey_data d where d.survey_date = new.survey_date\n> and d.name = new.name);\n> \nSince this is a rewrite rule, the whole statement gets rewritten, thus \nleading to different results, when one statement inserts many rows (insert\ninto ... select)\nor one statement only inserts one row (insert ...).\n\nThe \"problem\" is visibility of data. The rows that have already been\ninserted by this \nsame statement (insert ...select) are not visible to the restricting select.\n\nAndreas \n\n", "msg_date": "Wed, 28 Apr 1999 10:04:00 +0200", "msg_from": "Zeugswetter Andreas IZ5 <[email protected]>", "msg_from_op": true, "msg_subject": "AW: [HACKERS] rules bug?" }, { "msg_contents": " > create rule surveys_ins as on insert to surveys\n > do instead\n > insert into survey_data (survey_date, name)\n > \tselect new.survey_date, new.name where not exists\n > \t(select * from survey_data d where d.survey_date = new.survey_date\n > and d.name = new.name);\n\n The \"problem\" is visibility of data. The rows that have already been\n inserted by this \n same statement (insert ...select) are not visible to the restricting select.\n\nThanks for the clear explanation; it makes sense now. But ...\n\nI really need a way to enter data into a table, then disperse it among\na bunch of others while maintaining all the correct relationships.\nRules seem perfect for this, except for this problem.\n\nIs the only way to do this to convert the input table into a bunch of\nindividual INSERT commands (one per row)?\n\nOne way to do this is to use pg_dump to dump the data from the input\ntable, use a script to change target table, and reload the data.\n\nAre there other better ways to do this? 
other workarounds?\n\nThanks again for your help.\n\nCheers,\nBrook\n", "msg_date": "Wed, 28 Apr 1999 08:19:28 -0600 (MDT)", "msg_from": "Brook Milligan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: [HACKERS] rules bug?" } ]
[ { "msg_contents": "\n\tJune 1st. Plan'd release date for v6.5.\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 28 Apr 1999 09:07:10 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "v6.5 Release Date ..." }, { "msg_contents": "On Wed, 28 Apr 1999, The Hermit Hacker wrote:\n\n> \n> \tJune 1st. Plan'd release date for v6.5.\n\nSo we have lots of time to add new stuff! :) (quit hitting me)\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Wed, 28 Apr 1999 08:11:21 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] v6.5 Release Date ..." }, { "msg_contents": "On Wed, 28 Apr 1999, Vince Vielhaber wrote:\n\n> On Wed, 28 Apr 1999, The Hermit Hacker wrote:\n> \n> > \n> > \tJune 1st. Plan'd release date for v6.5.\n> \n> So we have lots of time to add new stuff! :) (quit hitting me)\n\nWhat? There aren't enough bugs being found as it is? *searches for\nbaseball bat* *grin*\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 28 Apr 1999 09:31:50 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] v6.5 Release Date ..." }, { "msg_contents": "Marc G. Fournier wrote:\n\n>\n> June 1st. 
Plan'd release date for v6.5.\n\n The latest discussions on rules discovered some problems with\n the rewrite system.\n\n It seems that varlevelsup in rule actions subselects get lost\n somehow. And while at it I saw some out of range varno's\n generated for rules group by entries.\n\n I really hope to find the bugs until June 1st, but if not\n this is IMHO a show stopper - no?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n\n", "msg_date": "Thu, 29 Apr 1999 18:11:56 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] v6.5 Release Date ..." }, { "msg_contents": "> Marc G. Fournier wrote:\n> \n> >\n> > June 1st. Plan'd release date for v6.5.\n> \n> The latest discussions on rules discovered some problems with\n> the rewrite system.\n> \n> It seems that varlevelsup in rule actions subselects get lost\n> somehow. And while at it I saw some out of range varno's\n> generated for rules group by entries.\n> \n> I really hope to find the bugs until June 1st, but if not\n> this is IMHO a show stopper - no?\n\nNo bug reports, no show stopper. :-)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 29 Apr 1999 12:15:00 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] v6.5 Release Date ..." }, { "msg_contents": "On Thu, 29 Apr 1999, Jan Wieck wrote:\n\n> Marc G. Fournier wrote:\n> \n> >\n> > June 1st. 
Plan'd release date for v6.5.\n> \n> The latest discussions on rules discovered some problems with\n> the rewrite system.\n> \n> It seems that varlevelsup in rule actions subselects get lost\n> somehow. And while at it I saw some out of range varno's\n> generated for rules group by entries.\n> \n> I really hope to find the bugs until June 1st, but if not\n> this is IMHO a show stopper - no?\n\nAs things stand now, we have about 30 days until release...if you find\nthat, as time draws close, you need a few more days, let us know and we'll\npostpone it. I'd like to avoid postponing it *too* much, since we are\nalready doing so by about a month, but if a few extra days will make the\ndifference for you, so be it...\n\nAs long as ppl don't start throwing in *new* code instead of just fixing\n*existing* code, I'm not so 'prickish' about the release date. Its when\nwe get show-stoppers resulting from new code that gets me :)\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 29 Apr 1999 14:15:28 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] v6.5 Release Date ..." } ]
[ { "msg_contents": "Matthias Schmitt wrote...\n\n> Hello,\n> \n> this night we discovered here a strange behaviour on our servers. Somebody\n> managed to get access to the UNIX shell using the 'postgres' db\n> administrator account. He logged in some machines with a single try ! The\n> password was not part of any dictionary. He tried some other accounts,\n> without success. Under the user postgres he installed an 'eggdrop' program\n> on the machine, implementing an IRC server.\n\nYikes. Scary.\n\nThe first thing that comes to my mind is a buffer overrun\nin the FE/BE protocol.\n\nThe second thing that comes to mind is sniffed passwords.\n\nLots of questions come up:\n\n1) Is your postmaster listening on a TCP/IP socket? I.E. do you have -i\n as an argument to postmaster when it is running?\n\n2) Have you had any postmaster crashes? Has anyone out there had\n any unexpected postmaster crashes? I'd expect if someone has an\n exploit for such a bug that it would not always work due to\n differences in compilation, probably resulting in a postmaster\n crash.\n\n3) Do you do admin work over the net, i.e. from a client machine on\n another machine? Would the password go over the wire then? I'm not\n really sure.\n\n4) Do you have a separate account for postmaster, or does it run as 'daemon'\n (I think this is the default for the pgsql distributed by RedHat). If\n so the compromise may have come from a different service.\n\n5) How secure is your LAN? \n\nFor now, I'd suggest that people turn off TCP/IP connections unless they\nreally need it (remove -i). Beyond that they may want to filter port\n5432/tcp at a nearby router/firewall. But it is not 100% clear this is\nwhat happened.\n\nInterestinger and interestinger....\n\n-- cary\nCary O'Brien\[email protected]\n", "msg_date": "Wed, 28 Apr 1999 09:04:14 -0400 (EDT)", "msg_from": "\"Cary O'Brien\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Hacker found bug in Postgres ?" } ]
[ { "msg_contents": "Well, I thought the explanation (visibility of rows) was clear and\nthat I understood rules. My tests make me more confused, though,\nbecause it doesn't seem to explain everything.\n\nHere is the relevant rule:\n\n create rule surveys_ins as on insert to surveys\n do instead\n insert into survey_data (survey_date, name)\n\t select new.survey_date, new.name where not exists\n\t (select * from survey_data d where d.survey_date = new.survey_date and d.name = new.name);\n\nThe intent is to prevent insertion when the information (survey_date,\nname combination) already exists in the table. My earlier problem was\nthat if an INSERT INTO ... SELECT contained duplicates the entire\nquery would be aborted. The suggestion was that this was because rows\ninserted by the rule are invisible to the subselect within the rule\naction; as a result, the insert would proceed for a duplicate row and\nbe rejected by the underlying unique index.\n\nThis implies (to me) that if the INSERT INTO ... SELECT contains no\nduplicates but does duplicate entries already in the table (which\nshould be visible to the subselect), then the duplicate to-be-inserted\nrows should be filtered out by the rule action. \n\nThis does not happen (see script below). What is wrong here? my\nunderstanding of how subselects work in rules? 
the rule system?\nsomething else?\n\nThanks again for your help.\n\nCheers,\nBrook\n\n===========================================================================\ndrop sequence survey_data_id_seq;\ndrop table survey_data;\ncreate table survey_data\n(\n id\t\tserial,\n survey_date\tdate\t\tnot null,\n name\t\ttext\t\tnot null,\n unique (survey_date, name)\n);\n\ndrop view surveys;\ncreate view surveys as\nselect id, survey_date, name from survey_data;\n\ncreate rule surveys_ins as on insert to surveys\ndo instead\ninsert into survey_data (survey_date, name)\n\tselect new.survey_date, new.name where not exists\n\t(select * from survey_data d where d.survey_date = new.survey_date and d.name = new.name);\n\ninsert into surveys (survey_date, name) values ('1999-02-14', 'Me');\ninsert into surveys (survey_date, name) values ('1999-02-15', 'Me');\ninsert into surveys (survey_date, name) values ('1999-02-14', 'You');\ninsert into surveys (survey_date, name) values ('1999-02-14', 'You');\t-- correctly ignored by rule\ninsert into surveys (survey_date, name) values ('1999-02-15', 'You');\ninsert into surveys (survey_date, name) select '1999-02-15', 'You';\t-- correctly ignored by rule\n\nselect * from surveys order by id;\n\ndrop table X;\ncreate table X\n(\n survey_date\tdate,\n name\t\ttext,\n unique (survey_date, name)\n);\n\ninsert into X (survey_date, name) values ('1999-02-18', 'Us');\t\t-- new\ninsert into X (survey_date, name) values ('1999-02-14', 'Us');\t\t-- new\ninsert into X (survey_date, name) values ('1999-02-18', 'Me');\t\t-- new\ninsert into X (survey_date, name) values ('1999-02-14', 'Me');\t\t-- duplicates table entry\n\n-- with unique index on underlying table, this does not succeed\n-- even though the duplicates should already be visible to the rule action condition\n-- and therefore filtered out\ninsert into surveys (survey_date, name) select survey_date, name from X;\nselect * from surveys order by id;\n\n-- try again without duplicate\ndelete from 
X;\ninsert into X (survey_date, name) values ('1999-02-18', 'Us');\t\t-- new\ninsert into X (survey_date, name) values ('1999-02-14', 'Us');\t\t-- new\ninsert into X (survey_date, name) values ('1999-02-18', 'Me');\t\t-- new\n\n-- this succeeds\ninsert into surveys (survey_date, name) select survey_date, name from X;\nselect * from surveys order by id;\n\ndrop table X;\n", "msg_date": "Wed, 28 Apr 1999 09:38:52 -0600 (MDT)", "msg_from": "Brook Milligan <[email protected]>", "msg_from_op": true, "msg_subject": "rule bug again" } ]
[ { "msg_contents": "I need to debug this connection problem I am having with Access97. My\nquestion is how can I get the new Postgres backend that starts up for Access\ninto debug (gdb)? I can get the PostMaster open in gdb.\n\nThanks, Michael\n", "msg_date": "Wed, 28 Apr 1999 11:04:32 -0500", "msg_from": "Michael J Davis <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] How do I get the backend server into gdb?" }, { "msg_contents": "\nYou can attach to a running process using gdb... 'gdb -t <pid>' or\nsomething like that...its been awhile since I've used it, sorry...\n\nOn Wed, 28 Apr 1999, Michael J Davis wrote:\n\n> I need to debug this connection problem I am having with Access97. My\n> question is how can I get the new Postgres backend that starts up for Access\n> into debug (gdb)? I can get the PostMaster open in gdb.\n> \n> Thanks, Michael\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 28 Apr 1999 13:18:51 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] How do I get the backend server into gdb?" }, { "msg_contents": "> \n> I need to debug this connection problem I am having with Access97. My\n> question is how can I get the new Postgres backend that starts up for Access\n> into debug (gdb)? I can get the PostMaster open in gdb.\n> \n> Thanks, Michael\n> \n\n$ postgres -h\npostgres: illegal option -- h\nUsage: postgres [options] [dbname]\n\t...\n -W wait N seconds to allow attach from a debugger\n\nI added the -W option for this purpose. You should start the postmaster\nwith the following option:\n\n postmaster -o '-W 15' ...\n\nThe -W option is passed to the backend which sleeps 15 seconds before doing\nany work. 
In the meantime you have the time to do a ps, find the backend pid\nand attach gdb to the process.\n\nObviously you can't do that in a production environment because it adds a\nfixed delay for each connection which will make your users very angry.\n\n-- \nMassimo Dal Zotto\n\n+----------------------------------------------------------------------+\n| Massimo Dal Zotto email: [email protected] |\n| Via Marconi, 141 phone: ++39-0461534251 |\n| 38057 Pergine Valsugana (TN) www: http://www.cs.unitn.it/~dz/ |\n| Italy pgp: finger [email protected] |\n+----------------------------------------------------------------------+\n", "msg_date": "Thu, 29 Apr 1999 11:28:22 +0200 (MET DST)", "msg_from": "Massimo Dal Zotto <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] How do I get the backend server into gdb?" }, { "msg_contents": "Massimo Dal Zotto <[email protected]> writes:\n> The -W option is passed to the backend which sleeps 15 seconds before doing\n> any work. In the meantime you have the time to do a ps, find the backend pid\n> and attach gdb to the process.\n> Obviously you can't do that in a production environment because it adds a\n> fixed delay for each connection which will make your users very angry.\n\nSince it's a -o option, I see no need to force it to be used on every\nconnection. Instead start psql with environment variable\n\tPGOPTIONS=\"-W 15\"\nor whatever you need for the particular session. 
The PGOPTIONS are sent\nin the connection request and then catenated to whatever the postmaster\nmight have in its -o switch.\n\n(BTW, it might be a good idea to go through the backend command-line\nswitches carefully and see if any of them could be security holes.\nI'm feeling paranoid because of Matthias Schmitt's unresolved report...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 Apr 1999 18:33:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] How do I get the backend server into gdb? " }, { "msg_contents": "> Since it's a -o option, I see no need to force it to be used on every\n> connection. Instead start psql with environment variable\n> \tPGOPTIONS=\"-W 15\"\n> or whatever you need for the particular session. The PGOPTIONS are sent\n> in the connection request and then catenated to whatever the postmaster\n> might have in its -o switch.\n> \n> (BTW, it might be a good idea to go through the backend command-line\n> switches carefully and see if any of them could be security holes.\n> I'm feeling paranoid because of Matthias Schmitt's unresolved report...)\n\nI didn't know it did that.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 29 Apr 1999 19:56:11 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] How do I get the backend server into gdb?" } ]
[ { "msg_contents": "The problem I need to debug won't allow me to do this. A Postgres\nconnection starts and then dies very quickly with a user authentication\nerror or database non existence error before I have a chance to attach to\nthe process via gdb. I need to find out why postgres will not allow my\nAccess97 connection to succeed. Pg_hba.conf appears to be configured\ncorrectly (it was working and has not changed in past two months). I need\nto have the new postgres session start up in debug.\n\n\t-----Original Message-----\n\tFrom:\tThe Hermit Hacker [SMTP:[email protected]]\n\tSent:\tWednesday, April 28, 1999 10:19 AM\n\tTo:\tMichael J Davis\n\tCc:\[email protected]\n\tSubject:\tRE: [HACKERS] How do I get the backend server into\ngdb?\n\n\n\tYou can attach to a running process using gdb... 'gdb -t <pid>' or\n\tsomething like that...its been awhile since I've used it, sorry...\n\n\tOn Wed, 28 Apr 1999, Michael J Davis wrote:\n\n\t> I need to debug this connection problem I am having with Access97.\nMy\n\t> question is how can I get the new Postgres backend that starts up\nfor Access\n\t> into debug (gdb)? I can get the PostMaster open in gdb.\n\t> \n\t> Thanks, Michael\n\t> \n\n\tMarc G. Fournier ICQ#7615664 IRC\nNick: Scrappy\n\tSystems Administrator @ hub.org \n\tprimary: [email protected] secondary:\nscrappy@{freebsd|postgresql}.org \n\t\n", "msg_date": "Wed, 28 Apr 1999 12:57:04 -0500", "msg_from": "Michael J Davis <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] How do I get the backend server into gdb?" }, { "msg_contents": "There is some option that starts backends, and then sleeps waiting for a\ngdb connection, or something like that.\n\n\nOr you can add the sleep yourself.\n\n[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> The problem I need to debug won't allow me to do this. 
A Postgres\n> connection starts and then dies very quickly with a user authentication\n> error or database non existence error before I have a chance to attach to\n> the process via gdb. I need to find out why postgres will not allow my\n> Access97 connection to succeed. Pg_hba.conf appears to be configured\n> correctly (it was working and has not changed in past two months). I need\n> to have the new postgres session start up in debug.\n> \n> \t-----Original Message-----\n> \tFrom:\tThe Hermit Hacker [SMTP:[email protected]]\n> \tSent:\tWednesday, April 28, 1999 10:19 AM\n> \tTo:\tMichael J Davis\n> \tCc:\[email protected]\n> \tSubject:\tRE: [HACKERS] How do I get the backend server into\n> gdb?\n> \n> \n> \tYou can attach to a running process using gdb... 'gdb -t <pid>' or\n> \tsomething like that...its been awhile since I've used it, sorry...\n> \n> \tOn Wed, 28 Apr 1999, Michael J Davis wrote:\n> \n> \t> I need to debug this connection problem I am having with Access97.\n> My\n> \t> question is how can I get the new Postgres backend that starts up\n> for Access\n> \t> into debug (gdb)? I can get the PostMaster open in gdb.\n> \t> \n> \t> Thanks, Michael\n> \t> \n> \n> \tMarc G. Fournier ICQ#7615664 IRC\n> Nick: Scrappy\n> \tSystems Administrator @ hub.org \n> \tprimary: [email protected] secondary:\n> scrappy@{freebsd|postgresql}.org \n> \t\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 28 Apr 1999 15:53:34 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] How do I get the backend server into gdb?" 
}, { "msg_contents": "Bruce Momjian wrote:\n> \n> There is some option that starts backends, and then sleeps \n> waiting for a\n> gdb connection, or something like that.\n> \n\nWhy not just the following\n\ngdb postmaster\n\nSet one or more breakpoints where you think the problem \nmight be, then type 'run'.\n\nStart the client and with a bit of luck gdb will break at\nthe breakpoint set.\n\nYou may need to do this a couple of times until you\nhave found the right area in the code.\n--------\nRegards\nTheo\n", "msg_date": "Wed, 28 Apr 1999 22:07:49 +0200", "msg_from": "Theo Kramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] How do I get the backend server into gdb?" } ]
[ { "msg_contents": "Then how do you debug issues in the backend?\n\n\t-----Original Message-----\n\tFrom:\tThe Hermit Hacker [SMTP:[email protected]]\n\tSent:\tWednesday, April 28, 1999 10:19 AM\n\tTo:\tMichael J Davis\n\tCc:\[email protected]\n\tSubject:\tRE: [HACKERS] How do I get the backend server into\ngdb?\n\n\n\tYou can attach to a running process using gdb... 'gdb -t <pid>' or\n\tsomething like that...its been awhile since I've used it, sorry...\n\n\tOn Wed, 28 Apr 1999, Michael J Davis wrote:\n\n\t> I need to debug this connection problem I am having with Access97.\nMy\n\t> question is how can I get the new Postgres backend that starts up\nfor Access\n\t> into debug (gdb)? I can get the PostMaster open in gdb.\n\t> \n\t> Thanks, Michael\n\t> \n\n\tMarc G. Fournier ICQ#7615664 IRC\nNick: Scrappy\n\tSystems Administrator @ hub.org \n\tprimary: [email protected] secondary:\nscrappy@{freebsd|postgresql}.org \n", "msg_date": "Wed, 28 Apr 1999 12:58:21 -0500", "msg_from": "Michael J Davis <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] How do I get the backend server into gdb?" } ]
[ { "msg_contents": "I tried but it breaks in the Postmaster and not in the backend process that\nis created for each process.\n\n\t-----Original Message-----\n\tFrom:\tTheo Kramer [SMTP:[email protected]]\n\tSent:\tWednesday, April 28, 1999 2:08 PM\n\tTo:\[email protected]\n\tSubject:\tRe: [HACKERS] How do I get the backend server into\ngdb?\n\n\tBruce Momjian wrote:\n\t> \n\t> There is some option that starts backends, and then sleeps \n\t> waiting for a\n\t> gdb connection, or something like that.\n\t> \n\n\tWhy not just the following\n\n\tgdb postmaster\n\n\tSet one or more breakpoints where you think the problem \n\tmight be, then type 'run'.\n\n\tStart the client and with a bit of luck gdb will break at\n\tthe breakpoint set.\n\n\tYou may need to do this a couple of times until you\n\thave found the right area in the code.\n\t--------\n\tRegards\n\tTheo\n", "msg_date": "Wed, 28 Apr 1999 15:34:34 -0500", "msg_from": "Michael J Davis <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] How do I get the backend server into gdb?" }, { "msg_contents": "\nOn 28-Apr-99 Michael J Davis wrote:\n> I tried but it breaks in the Postmaster and not in the backend process that\n> is created for each process.\n\nHave you tried starting the postmaster in debug mode? From man postmaster:\n\n     -d [debug_level]\n          The optional argument debug_level determines the\n          amount of debugging output the backend servers will\n          produce.  If debug_level is one, the postmaster will\n          trace all connection traffic, and nothing else.  For\n          levels two and higher, debugging is turned on in the\n          backend process and the postmaster displays more\n          information, including the backend environment and\n          process traffic. 
Note that if no file is specified\n for backend servers to send their debugging output\n then this output will appear on the controlling tty\n of their parent postmaster.\n\nThere other debugging comments in this man page so you may wanna look\nat it first.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n", "msg_date": "Wed, 28 Apr 1999 17:05:05 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] How do I get the backend server into gdb?" }, { "msg_contents": "See tcop/postgres.c.\n\n\n> I tired but it breaks in the Postmaster and not in the backend process that\n> is created for each process.\n> \n> \t-----Original Message-----\n> \tFrom:\tTheo Kramer [SMTP:[email protected]]\n> \tSent:\tWednesday, April 28, 1999 2:08 PM\n> \tTo:\[email protected]\n> \tSubject:\tRe: [HACKERS] How do I get the backend server into\n> gdb?\n> \n> \tBruce Momjian wrote:\n> \t> \n> \t> There is some option that starts backends, and then sleeps \n> \t> waiting for a\n> \t> gdb connection, or something like that.\n> \t> \n> \n> \tWhy not just the following\n> \n> \tgdb postmaster\n> \n> \tSet one ore more breakpoints where you think the problem \n> \tmight be, then type 'run'.\n> \n> \tStart the client and with a bit of luck gdb will break at\n> \tthe breakpoint set.\n> \n> \tYou may need to do this a couple of times until you\n> \thave found the right area in the code.\n> \t--------\n> \tRegards\n> \tTheo\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be 
your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 28 Apr 1999 17:55:54 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] How do I get the backend server into gdb?" } ]
[ { "msg_contents": "And the answer is an undocumented parameter to the postmaster. The\nparameter \"-W 30\" (must be inside the \"-o \" because must it be sent to\npostgres not to the postmaster) will cause new processes to wait 30 seconds.\nThis gave me enough time to find the pid for the new process and execute\n\"gdb postgres pid\" or \"xxgdb postgres pid\". This pushed the new postgres\nprocess into the debugger where I could set my break points and debug. I\ndiscovered this by stumbling around in the code a little. I suppose if I\nhad looked at the code first, I would not have had to pose my question. The\nresponses to my question surprise me a little. On one hand, jumping in and\nsearching the code for my answer was much, much easier than I expected. On\nthe other hand, I expected others to suggest the techniques they use for\ndebugging issues. It makes me wonder how much real debugging is taking\nplace? After all, this list is for hackers right? Sorry for the\nnegativity. This last issue has me stumped and I am stuck until I can get\nit resolved. I really don't like being stuck very much.\n\nThanks, Michael\n\n\t-----Original Message-----\n\tFrom:\tBruce Momjian [SMTP:[email protected]]\n\tSent:\tWednesday, April 28, 1999 1:54 PM\n\tTo:\tMichael J Davis\n\tCc:\[email protected]; [email protected]\n\tSubject:\tRe: [HACKERS] How do I get the backend server into\ngdb?\n\n\tThere is some option that starts backends, and then sleeps waiting\nfor a\n\tgdb connection, or something like that.\n\n\n\tOr you can add the sleep yourself.\n\n\t[Charset iso-8859-1 unsupported, filtering to ASCII...]\n\t> The problem I need to debug won't allow me to do this. A Postgres\n\t> connection starts and then dies very quickly with a user\nauthentication\n\t> error or database non existence error before I have a chance to\nattach to\n\t> the process via gdb. I need to find out why postgres will not\nallow my\n\t> Access97 connection to succeed. 
Pg_hba.conf appears to be\nconfigured\n\t> correctly (it was working and has not changed in past two months).\nI need\n\t> to have the new postgres session start up in debug.\n\t> \n\t> \t-----Original Message-----\n\t> \tFrom:\tThe Hermit Hacker [SMTP:[email protected]]\n\t> \tSent:\tWednesday, April 28, 1999 10:19 AM\n\t> \tTo:\tMichael J Davis\n\t> \tCc:\[email protected]\n\t> \tSubject:\tRE: [HACKERS] How do I get the backend\nserver into\n\t> gdb?\n\t> \n\t> \n\t> \tYou can attach to a running process using gdb... 'gdb -t\n<pid>' or\n\t> \tsomething like that...its been awhile since I've used it,\nsorry...\n\t> \n\t> \tOn Wed, 28 Apr 1999, Michael J Davis wrote:\n\t> \n\t> \t> I need to debug this connection problem I am having with\nAccess97.\n\t> My\n\t> \t> question is how can I get the new Postgres backend that\nstarts up\n\t> for Access\n\t> \t> into debug (gdb)? I can get the PostMaster open in gdb.\n\t> \t> \n\t> \t> Thanks, Michael\n\t> \t> \n\t> \n\t> \tMarc G. Fournier ICQ#7615664\nIRC\n\t> Nick: Scrappy\n\t> \tSystems Administrator @ hub.org \n\t> \tprimary: [email protected] secondary:\n\t> scrappy@{freebsd|postgresql}.org \n\t> \t\n\t> \n\t> \n\n\n\t-- \n\t Bruce Momjian | http://www.op.net/~candle\n\t [email protected] | (610) 853-3000\n\t + If your life is a hard drive, | 830 Blythe Avenue\n\t + Christ can be your backup. | Drexel Hill, Pennsylvania\n19026\n", "msg_date": "Wed, 28 Apr 1999 15:46:57 -0500", "msg_from": "Michael J Davis <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] How do I get the backend server into gdb?" }, { "msg_contents": "Michael J Davis <[email protected]> writes:\n> On the other hand, I expected others to suggest the techniques they use for\n> debugging issues.\n\nWell, the attach-to-a-running-backend procedure has always sufficed for\nmy purposes in backend debugging. 
(Some other people like to just start\na backend directly from gdb, but of course that's not going to shed much\nlight on connection failures.)\n\nIf you are having an authorization problem, it's not likely the backend\nper se that's at fault --- connection authorization is done in the\npostmaster before forking a subprocess, so you should be able to step\nthrough the postmaster's checks just with gdb on the postmaster.\n\nActually, it'd be best to start with a debugger attached to the frontend\nand see exactly what series of messages go back and forth. That would\nat least give you an idea of what phase is failing. I have no idea how\nto do that in a Windows/Access97 environment, but it's easy enough if\nyou are using psql or some other Unix frontend. (Read the \"protocol\"\nchapter in the developer's guide to know what's supposed to happen,\nthen set breakpoints in connectDB() in fe-connect.c.)\n\nIt might be that just seeing what error message comes back from the\nserver will tell you what you need to know --- it looked to me like\nAccess was unhelpfully providing its own generic message instead of\nquoting what the backend or postmaster had to say. (Note: I installed\nmore verbose connection-rejection messages just last week, so make sure\nyou have up to date sources.)\n\nBTW, on at least some platforms, the most recent versions of gdb are\ncapable of dealing with forked subprocesses intelligently, so that's\nalso a possible answer if you are chasing a problem that occurs\nimmediately after the fork.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Apr 1999 17:49:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] How do I get the backend server into gdb? " } ]
[ { "msg_contents": "Yea!!!!! This fixes the problem. I will test the \\g option in a few hours\nafter I have completely repopulated my data. How about adding a comment\nabout the importance of this patch? Thanks Tom for tracking this down so\nquickly. \n\nThanks, Michael\n\n\t-----Original Message-----\n\tFrom:\tTom Lane [SMTP:[email protected]]\n\tSent:\tWednesday, April 28, 1999 4:29 PM\n\tTo:\tMichael J Davis\n\tCc:\[email protected];\[email protected]\n\tSubject:\tRe: [HACKERS] Issues with the latest 6.5 source \n\n\tMichael J Davis <[email protected]> writes:\n\t> 1)\tThe \\g option in pgsql is failing.\n\n\tFixed. I had managed to break the empty-query response protocol on\n\tSunday --- odd that none of the regression tests detected this.\nSigh.\n\n\t> 2)\tMy Access97 application using ODBC can't connect to the\ndatabase.\n\n\tI now have a strong suspicion that this is caused by the same goof.\n\tODBC might issue an empty query during startup. (That used to be a\n\tnecessary part of the startup protocol; it isn't anymore, but ODBC\n\tvery possibly hasn't been changed.)\n\n\tIf you don't have CVS access and don't want to wait for tonight's\n\tsnapshot, here is the patch:\n\n\t*** src/backend/tcop/dest.c~\tWed Apr 28 18:15:07 1999\n\t--- src/backend/tcop/dest.c\tWed Apr 28 18:15:45 1999\n\t***************\n\t*** 336,342 ****\n\t \t\t\t *\t\ttell the fe that we saw an\nempty query string\n\t \t\t\t * ----------------\n\t \t\t\t */\n\t! \t\t\tpq_putbytes(\"I\", 1);\n\t \t\t\tbreak;\n\t \n\t \t\tcase Local:\n\t--- 336,342 ----\n\t \t\t\t *\t\ttell the fe that we saw an\nempty query string\n\t \t\t\t * ----------------\n\t \t\t\t */\n\t! 
\t\t\tpq_putbytes(\"I\", 2); /* note we send I and\n\\0 */\n\t \t\t\tbreak;\n\t \n\t \t\tcase Local:\n\n\n\tLet us know if that helps...\n\n\t\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Apr 1999 20:26:13 -0500", "msg_from": "Michael J Davis <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Issues with the latest 6.5 source " } ]
[ { "msg_contents": "Could someone give me an example of a select * with LIMIT used \nby 6.5?\n [email protected]\n ICQ: 33017215\n http://www.linuxports.com\n --Power to the Penguin--\n", "msg_date": "Wed, 28 Apr 1999 22:22:58 -0700", "msg_from": "\"Mr. Poet\" <[email protected]>", "msg_from_op": true, "msg_subject": "LIMIT" }, { "msg_contents": "\"Mr. Poet\" wrote:\n> \n> Could someone give me an example of a select * with LIMIT \n> used by 6.5?\n\nSELECT * FROM foobar LIMIT 100\n", "msg_date": "Thu, 29 Apr 1999 05:47:35 +0000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] LIMIT" }, { "msg_contents": "\nBesides, I have problems with SELECT LIMIT on unions in\n6.5beta1. Anyone else?\n\nDirk\n", "msg_date": "Thu, 29 Apr 1999 10:06:18 +0200 (CEST)", "msg_from": "Dirk Lutzebaeck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] LIMIT" }, { "msg_contents": "> > Could someone give me an example of a select * with LIMIT \n> > used by 6.5?\n> \n> SELECT * FROM foobar LIMIT 100\n\nOr you could use ROWCOUNT...\n\nset rowcount = 10\nselect * from table\n\nRegards,\n\nMark.\n--\nMark Jewiss\nKnowledge Matters Limited\n\n", "msg_date": "Thu, 29 Apr 1999 10:24:46 +0100 (BST)", "msg_from": "Mark Jewiss <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] LIMIT" }, { "msg_contents": "\n\nDirk Lutzebaeck ha scritto:\n\n> Besides, I have problems with SELECT LIMIT on unions in\n> 6.5beta1. 
Anyone else?\n>\n> Dirk\n\nLIMIT doesn't work with UNION, I think this is a known bug.\n\nJosé\n\n", "msg_date": "Thu, 29 Apr 1999 15:28:16 +0200", "msg_from": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] LIMIT" }, { "msg_contents": "Mark Jewiss ha scritto:\n\n> > > Could someone give me an example of a select * with LIMIT\n> > > used by 6.5?\n> >\n> > SELECT * FROM foobar LIMIT 100\n>\n> Or you could use ROWCOUNT...\n>\n> set rowcount = 10\n> select * from table\n>\n> Regards,\n>\n> Mark.\n> --\n> Mark Jewiss\n> Knowledge Matters\n> Limited--------------------------------------------------------------\n\nI don't know nothing about ROWCOUNT.\nSET ROWCOUNT doesn't work for me.\nhygea=> set rowcount = 10;\nERROR: parser: parse error at or near \"10\"\n\nPostgreSQL still accepts SET QUERY_LIMIT but it doesn't work...\n\nhygea=> set query_limit to '1';\nSET VARIABLE\nhygea=> select * from contatori;\ntipologia|tabella |contatore|contatorebis\n---------+---------------+---------+------------\nSOTTO |Modena | 1| 2\nSOPRA |prestazioni | 20|\n(2 rows)\n--------------------------------------------------------------\nPostgreSQL 6.5.0 on i586-pc-linux-gnu, compiled by gcc 2.7.2.3\n-----------------------------------------------------------------------------------\n\nJosé\n", "msg_date": "Thu, 29 Apr 1999 15:55:34 +0200", "msg_from": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] LIMIT" }, { "msg_contents": "Mark Jewiss ha scritto:\n\n> > > Could someone give me an example of a select * with LIMIT\n> > > used by 6.5?\n> >\n> > SELECT * FROM foobar LIMIT 100\n>\n> Or you could use ROWCOUNT...\n>\n> set rowcount = 10\n> select * from table\n>\n> Regards,\n>\n> Mark.\n> --\n> Mark Jewiss\n> Knowledge Matters Limited\n\nI know nothing about SET ROWCOUNT, it doesn't work for me.\nI see v6.5 accepts still SET QUERY_LIMIT TO #\nbut it doesn't work too.\n______________________________________________________________\nPostgreSQL 6.5.0 on i586-pc-linux-gnu, compiled by gcc 2.7.2.3\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nJose'\n\n\n", "msg_date": "Thu, 29 Apr 1999 16:02:15 +0200", "msg_from": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] LIMIT" }, { "msg_contents": "> I don't know nothing about ROWCOUNT.\n> SET ROWCOUNT doesn't work for me.\n> hygea=> set rowcount = 10;\n\nSorry, this is my fault. 
Correct SQL syntax is\n\nset rowcount 10\n\nCheers,\n\nMark.\n\n", "msg_date": "Thu, 29 Apr 1999 15:57:38 +0100 (BST)", "msg_from": "Mark Jewiss <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] LIMIT" }, { "msg_contents": "hygea=> set rowcount 10;\nERROR: parser: parse error at or near \"10\"\n\nWhich version of Postgres are you using?\n\n\nMark Jewiss ha scritto:\n\n> > I don't know nothing about ROWCOUNT.\n> > SET ROWCOUNT doesn't work for me.\n> > hygea=> set rowcount = 10;\n>\n> Sorry, this is my faul. Correct SQL syntax is\n>\n> set rowcount 10\n>\n> Cheers,\n>\n> Mark.\n\n--\n______________________________________________________________\nPostgreSQL 6.5.0 on i586-pc-linux-gnu, compiled by gcc 2.7.2.3\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nJose'\n\n\n", "msg_date": "Thu, 29 Apr 1999 17:45:10 +0200", "msg_from": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] LIMIT" } ]
[ { "msg_contents": "\nI just received this. Any ideas ?\n\n-- \n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n---------- Forwarded message ----------\nDate: Wed, 28 Apr 1999 16:36:56 -0500\nFrom: Brian P Millett <[email protected]>\nTo: Peter T Mount <[email protected]>\nSubject: FYI: snapshot 4/28/1999\n\nPeter, thought that you would like to know that the snapshot for today\nwill not allow a jdbc connection. I don't know if this is a jdk\nsecurity thing, or not. The snapshot from Monday, allows connections.\n\nI'm using jdk1.2.1, solaris 2.7, WorkShop Compilers 5.0 98/12/15 C 5.0\n\n--\nBrian Millett\nEnterprise Consulting Group \"Heaven can not exist,\n(314) 205-9030 If the family is not eternal\"\[email protected] F. Ballard Washburn\n\n\n", "msg_date": "Thu, 29 Apr 1999 06:50:23 +0100 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": true, "msg_subject": "FYI: snapshot 4/28/1999 (fwd)" } ]
[ { "msg_contents": "The following patch was posted this evening by Tom Lane that could fix your\nproblem. It fixed a similar problem I was having with ODBC.\n\nIf you don't have CVS access and don't want to wait for tonight's snapshot,\nhere is the patch:\n*** src/backend/tcop/dest.c~\tWed Apr 28 18:15:07 1999\n--- src/backend/tcop/dest.c\tWed Apr 28 18:15:45 1999\n***************\n*** 336,342 ****\n*\ttell the fe that we saw an empty query string\n\t \t\t\t * ----------------\n\t \t\t\t */\n\t! \t\t\tpq_putbytes(\"I\", 1);\n\t \t\t\tbreak;\n\t \n\t \t\tcase Local:\n\t--- 336,342 ----\n*\ttell the fe that we saw an empty query string\n \t\t\t * ----------------\n \t\t\t */\n! \t\t\tpq_putbytes(\"I\", 2); /* note we send I and \\0 */\n \t\t\tbreak;\n \n \t\tcase Local:\n\n\n\t-----Original Message-----\n\tFrom:\tPeter T Mount [SMTP:[email protected]]\n\tSent:\tWednesday, April 28, 1999 11:50 PM\n\tTo:\tPostgreSQL Hackers List\n\tSubject:\t[HACKERS] FYI: snapshot 4/28/1999 (fwd)\n\n\n\tI just received this. Any ideas ?\n\n\t-- \n\t Peter T Mount [email protected]\n\t Main Homepage: http://www.retep.org.uk\n\tPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n\t Java PDF Generator: http://www.retep.org.uk/pdf\n\n\t---------- Forwarded message ----------\n\tDate: Wed, 28 Apr 1999 16:36:56 -0500\n\tFrom: Brian P Millett <[email protected]>\n\tTo: Peter T Mount <[email protected]>\n\tSubject: FYI: snapshot 4/28/1999\n\n\tPeter, thought that you would like to know that the snapshot for\ntoday\n\twill not allow a jdbc connection. I don't know if this is a jdk\n\tsecurity thing, or not. The snapshot from Monday, allows\nconnections.\n\n\tI'm using jdk1.2.1, solaris 2.7, WorkShop Compilers 5.0 98/12/15 C\n5.0\n\n\t--\n\tBrian Millett\n\tEnterprise Consulting Group \"Heaven can not exist,\n\t(314) 205-9030 If the family is not eternal\"\n\[email protected] F. 
Ballard Washburn\n\n\t\n", "msg_date": "Thu, 29 Apr 1999 01:03:53 -0500", "msg_from": "Michael J Davis <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] FYI: snapshot 4/28/1999 (fwd)" } ]
[ { "msg_contents": "The weird thing is that the JDBC driver hasn't been sending an empty\nstring since 6.2.x. Starting from 6.3, I replaced the empty string with\na query to fetch the current DATESTYLE so the date and time code would\nwork properly.\n\nIt would be interesting to see if this patch fixes it however.\n\nPeter (@work)\n\n--\nPeter T Mount, IT Section\[email protected]\nAnything I write here are my own views, and cannot be taken as the\nofficial words of Maidstone Borough Council\n\n-----Original Message-----\nFrom: Michael J Davis [mailto:[email protected]]\nSent: Thursday, April 29, 1999 7:04 AM\nTo: 'Peter T Mount'; PostgreSQL Hackers List\nSubject: RE: [HACKERS] FYI: snapshot 4/28/1999 (fwd)\n\n\nThe following patch was posted this evening by Tom Lane that could fix\nyour\nproblem. It fixed a similar problem I was having with ODBC.\n\nIf you don't have CVS access and don't want to wait for tonight's\nsnapshot,\nhere is the patch:\n*** src/backend/tcop/dest.c~\tWed Apr 28 18:15:07 1999\n--- src/backend/tcop/dest.c\tWed Apr 28 18:15:45 1999\n***************\n*** 336,342 ****\n*\ttell the fe that we saw an empty query string\n\t \t\t\t * ----------------\n\t \t\t\t */\n\t! \t\t\tpq_putbytes(\"I\", 1);\n\t \t\t\tbreak;\n\t \n\t \t\tcase Local:\n\t--- 336,342 ----\n*\ttell the fe that we saw an empty query string\n \t\t\t * ----------------\n \t\t\t */\n! \t\t\tpq_putbytes(\"I\", 2); /* note we send I and \\0 */\n \t\t\tbreak;\n \n \t\tcase Local:\n\n\n\t-----Original Message-----\n\tFrom:\tPeter T Mount [SMTP:[email protected]]\n\tSent:\tWednesday, April 28, 1999 11:50 PM\n\tTo:\tPostgreSQL Hackers List\n\tSubject:\t[HACKERS] FYI: snapshot 4/28/1999 (fwd)\n\n\n\tI just received this. 
Any ideas ?\n\n\t-- \n\t Peter T Mount [email protected]\n\t Main Homepage: http://www.retep.org.uk\n\tPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n\t Java PDF Generator: http://www.retep.org.uk/pdf\n\n\t---------- Forwarded message ----------\n\tDate: Wed, 28 Apr 1999 16:36:56 -0500\n\tFrom: Brian P Millett <[email protected]>\n\tTo: Peter T Mount <[email protected]>\n\tSubject: FYI: snapshot 4/28/1999\n\n\tPeter, thought that you would like to know that the snapshot for\ntoday\n\twill not allow a jdbc connection. I don't know if this is a jdk\n\tsecurity thing, or not. The snapshot from Monday, allows\nconnections.\n\n\tI'm using jdk1.2.1, solaris 2.7, WorkShop Compilers 5.0 98/12/15\nC\n5.0\n\n\t--\n\tBrian Millett\n\tEnterprise Consulting Group \"Heaven can not exist,\n\t(314) 205-9030 If the family is not eternal\"\n\[email protected] F. Ballard Washburn\n\n\t\n", "msg_date": "Thu, 29 Apr 1999 08:31:43 +0100", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] FYI: snapshot 4/28/1999 (fwd)" }, { "msg_contents": "Peter Mount <[email protected]> writes:\n> The weird thing is that the JDBC driver hasn't been sending an empty\n> string since 6.2.x.\n> It would be interesting to see if this patch fixes it however.\n\nI'm unsure whether to blame my empty-query goof for the JDBC problem or\nnot. You say Monday's snapshot was OK --- but I committed that booboo\non Sunday, so it should have been in Monday's snap. 
Let us know whether\nthe problem is still there...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 Apr 1999 10:19:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] FYI: snapshot 4/28/1999 (fwd) " }, { "msg_contents": "Tom Lane wrote:\n\n> Peter Mount <[email protected]> writes:\n> > The weird thing is that the JDBC driver hasn't been sending an empty\n> > string since 6.2.x.\n> > It would be interesting to see if this patch fixes it however.\n>\n> I'm unsure whether to blame my empty-query goof for the JDBC problem or\n> not. You say Monday's snapshot was OK --- but I committed that booboo\n> on Sunday, so it should have been in Monday's snap. Let us know whether\n> the problem is still there...\n\nWell Tom,\n The patch fixed it. With the snapshot from 4/28/1999, I can now\nconnect.\n\nThanks Peter, Michael & Tom.\n\n--\nBrian Millett\nEnterprise Consulting Group \"Heaven can not exist,\n(314) 205-9030 If the family is not eternal\"\[email protected] F. Ballard Washburn\n\n\n\n", "msg_date": "Thu, 29 Apr 1999 09:59:05 -0500", "msg_from": "Brian P Millett <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] FYI: snapshot 4/28/1999 (fwd)" } ]
[ { "msg_contents": "Hi all,\n\nI have two tables MOVIMENTAZIONI with 7650 rows\nCAPI with 7650 rows, when I try to join this two tables PostgreSQL\ntakes more than 107 minutes to retrieve rows,\nthe same query in:\nI have installed Oracle-8 and Informix-se in the same computer\nand the same query takes:\n- Informix about 6 seconds.\n- Oracle about 2 seconds.\n\nI tried it also in:\n- M$-Access about 3 seconds.\n\nI'm sure this is not a vaccum problem because I executed vacuum before\nI ran the query.\n\n\nCREATE TABLE capi (\n matricola CHAR(15) NOT NULL,\n specie CHAR(2) NOT NULL,\n nascita DATE,\n sesso CHAR(1) DEFAULT 'F',\n razza CHAR(3),\n madre CHAR(15),\n padre CHAR(15),\n azienda_origine CHAR(08),\n fiscale_origine CHAR(16),\n paese_origine CHAR(03),\n iscritto BOOLEAN,\n data_aggiornamento TIMESTAMP DEFAULT CURRENT_TIMESTAMP,\n PRIMARY KEY (matricola,specie)\n );\n\nCREATE TABLE movimentazioni (\n azienda CHAR(11) NOT NULL,\n specie CHAR(2) NOT NULL,\n matricola CHAR(15) NOT NULL,\n data_introduzione DATE NOT NULL,\n tipo_introduzione CHAR(2),\n azienda_provenienza CHAR(8),\n fiscale_provenienza CHAR(16),\n matricola_precedente CHAR(15),\n data_applicazione DATE,\n data_uscita DATE,\n ragione_uscita CHAR(1),\n tipo_destinazione CHAR(1),\n azienda_destinazione CHAR(8),\n fiscale_destinazione CHAR(16),\n paese_destinazione CHAR(3),\n Mattatoio CHAR(19),\n n_proprietario INTEGER,\n data_aggiornamento TIMESTAMP DEFAULT CURRENT_TIMESTAMP,\n PRIMARY KEY (azienda,matricola,specie,data_introduzione)\n );\n\n$ psql -c 'vacuum'\nVACUUM\n$ time psql -f test.sql 2>/dev/null >/dev/null\nSELECT movimentazioni.azienda\nFROM movimentazioni,capi\nwhere ((capi.matricola = movimentazioni.matricola )\nand (capi.specie = movimentazioni.specie ) );\n\nreal 107m48.354s\nuser 0m1.140s\nsys 0m0.040s\n\nInformix-se:\nreal 0m6.348s\nuser 0m2.250s\nsys 0m0.140s\n\nOracle-8:\nreal 0m2.118s\nuser 0m0.780s\nsys 0m0.120s\n\nThis is my environment:\n\n- [PostgreSQL 6.5.0 on 
i586-pc-linux-gnu, compiled by gcc 2.7.2.3]\n- Pgsql snapshot Apr 15 17:42\n- postmaster -i -o -F -B 512 -S\n- Linux 2.0.36 Debian\n- cpu: 586\n- model: Pentium MMX\n- vendor_id: GenuineIntel\n- RAM: 63112\n- Swap: 102812\n\nI tried the same query in v6.4 with best results.\nreal 3m45.968s\nuser 0m0.060s\nsys 0m0.160s\n\nI tried the same query with joins inverted as:\n$ psql -c 'vacuum'\nVACUUM\n$ time psql -f test.sql 2>/dev/null >/dev/null\nSELECT movimentazioni.azienda\nFROM movimentazioni,capi\nwhere (capi.specie = movimentazioni.specie )\nand ((capi.matricola = movimentazioni.matricola ))\nreal 0m4.312s\nuser 0m1.220s\nsys 0m0.090s\n\nPostgreSQL version 6.4:\nreal 0m0.600s\nuser 0m0.130s\nsys 0m0.030s\n\nI thought v6.5 was faster than v6.4\n\nAny ideas?\nJose'\n\n\n", "msg_date": "Thu, 29 Apr 1999 10:30:46 +0200", "msg_from": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]>", "msg_from_op": true, "msg_subject": "Pg takes at least 2 hours to retrieve 7650 rows" }, { "msg_contents": "José Soares wrote:\n> \n> $ time psql -f test.sql 2>/dev/null >/dev/null\n> SELECT movimentazioni.azienda\n> FROM movimentazioni,capi\n> where ((capi.matricola = movimentazioni.matricola )\n> and (capi.specie = movimentazioni.specie ) );\n> \nEXPLAIN ?\n\n> $ time psql -f test.sql 2>/dev/null >/dev/null\n> SELECT movimentazioni.azienda\n> FROM movimentazioni,capi\n> where (capi.specie = movimentazioni.specie )\n> and ((capi.matricola = movimentazioni.matricola ))\n\nEXPLAIN ?\n\nVadim\n", "msg_date": "Thu, 29 Apr 1999 16:47:29 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Pg takes at least 2 hours to retrieve 7650 rows" }, { "msg_contents": "Vadim Mikheev ha scritto:\n\n> José Soares wrote:\n> >\n> > $ time psql -f test.sql 2>/dev/null >/dev/null\n> > SELECT movimentazioni.azienda\n> > FROM movimentazioni,capi\n> > where ((capi.matricola = movimentazioni.matricola )\n> > and (capi.specie = movimentazioni.specie ) 
);\n> >\n> EXPLAIN ?\n>\n\nexplain SELECT movimentazioni.azienda\nFROM movimentazioni,capi\nwhere ((capi.matricola = movimentazioni.matricola )\nand (capi.specie = movimentazioni.specie ) );\nNOTICE: QUERY PLAN:\n\nHash Join (cost=1221.38 size=2 width=60)\n -> Seq Scan on movimentazioni (cost=349.64 size=7565 width=36)\n -> Hash (cost=0.00 size=0 width=0)\n -> Seq Scan on capi (cost=335.55 size=7562 width=24)\n\nEXPLAIN\n\n>\n> > $ time psql -f test.sql 2>/dev/null >/dev/null\n> > SELECT movimentazioni.azienda\n> > FROM movimentazioni,capi\n> > where (capi.specie = movimentazioni.specie )\n> > and ((capi.matricola = movimentazioni.matricola ))\n>\n> EXPLAIN ?\n>\n> Vadim\n\nexplain SELECT movimentazioni.azienda\nFROM movimentazioni,capi\nwhere (capi.specie = movimentazioni.specie )\nand ((capi.matricola = movimentazioni.matricola ));\nNOTICE: QUERY PLAN:\n\nHash Join (cost=1221.38 size=2 width=60)\n -> Seq Scan on movimentazioni (cost=349.64 size=7565 width=36)\n -> Hash (cost=0.00 size=0 width=0)\n -> Seq Scan on capi (cost=335.55 size=7562 width=24)\n\nEXPLAIN\n\n\n", "msg_date": "Thu, 29 Apr 1999 15:14:20 +0200", "msg_from": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Pg takes at least 2 hours to retrieve 7650 rows" } ]
[ { "msg_contents": "\n> real 107m48.354s\n> user 0m1.140s\n> sys 0m0.040s\n> \nPlease give us output of:\n\texplain SELECT movimentazioni.azienda\n> FROM movimentazioni,capi\n> where ((capi.matricola = movimentazioni.matricola )\n> and (capi.specie = movimentazioni.specie ) );\n> \n\tAndreas\n\n\tPS: what it should do is seq scan on movimentazioni and index path\non capi\n\twhat it could do if it was implemented is full index scan on\nmovimentazioni\n", "msg_date": "Thu, 29 Apr 1999 10:59:11 +0200", "msg_from": "Zeugswetter Andreas IZ5 <[email protected]>", "msg_from_op": true, "msg_subject": "AW: [HACKERS] Pg takes at least 2 hours to retrieve 7650 rows" } ]
[ { "msg_contents": "Folks,\n\nNo one on general or novice have had any insight into the following\nproblem.\n\nI have a very large table (10Gb, 20 million records each with 54 fields) \nwith both float, integer and text values. It's an astronomical\ndatabase for the 2MASS project (http://pegasus.phast.umass.edu). This\ndatabase will be 10 times larger in the end . . . \n\nAnyway, if I submit a query such as:\n\n\tselect * from mytable where x=3.14 and y=6.28;\n\n\nit takes about 3 minutes to return the record. Both x and y are indexed:\n\n\tcreate index xindex on mytable using btree (x);\n\tcreate index yindex on mytable using btree (y);\n\nAnd \"explain\" on the select query above says it's doing a sequential scan. \n\nHowever if I say:\n\n\tselect * from mytable where x='3.14'::float4 and y='6.28'::float4;\n\nit takes about 3 seconds! And now \"explain\" says it's doing an indexed\nscan.\n\nMy understanding is that the query optimizer should pick the index\nscan for this query based on the cost. My attempts at debugging\nhave not turned up anything obvious to me.\n\nIs there a problem with my set up or is this a known problem? Is \nthere something I can do as a work around to make this efficient? \n\nI would like PostgreSQL to succeed in this application, if possible,\nso that it can be adopted. If we can get this working, I would\nrecommend that the astronomical community consider adopting this\nPostgreSQL to \"spin\" portions of this database (if you think this\nis reasonable).\n\nBTW, this is PostgreSQL 6.4.2 on a dual Xeon running Linux 2.2.5,\nover Debian 2.1.\n\nThanks!\n\n--Martin\n\n===========================================================================\n\nProf. Martin Weinberg Phone: (413) 545-3821\nDept. 
of Physics and Astronomy FAX: (413) 545-2117/0648\n530 Graduate Research Tower\nUniversity of Massachusetts\nAmherst, MA 01003-4525\n", "msg_date": "Thu, 29 Apr 1999 11:21:41 -0400", "msg_from": "Martin Weinberg <[email protected]>", "msg_from_op": true, "msg_subject": "Help/advice/suggestions on query optimizer for a large table" }, { "msg_contents": "> Anyway, if I submit a query such as:\n> select * from mytable where x=3.14 and y=6.28;\n> it takes about 3 minutes to return the record. Both x and y are indexed:\n> And \"explain\" on the select query above says it's doing a sequential scan.\n> However if I say:\n> select * from mytable where x='3.14'::float4 and y='6.28'::float4;\n> it takes about 3 seconds! And now \"explain\" says it's doing an indexed\n> scan.\n> My understanding is that the query optimizer should pick the index\n> scan for this query based on the cost.\n\nThis is a known feature. The Postgres parser converts an unquoted 3.14\nto a float8, which is not the same as the float4 column you have\nindexed. And the optimizer is not (yet) bright enough to convert\nconstants to the column type, and then use the available indexes.\n\nIn fact, the apparently more desirable strategy is not particularly\neasy to get right. Look at this example:\n\n create table t1 (i int4);\n (insert a bunch of data)\n create index tx on t1 using btree(i);\n vacuum;\n select * from t1 where i < 3.5;\n\nIn this case, we can't convert the 3.5 to an integer (3) without\nchanging the comparison operator to \"<=\". And in your case,\n\"downconverting\" the float8 to a float4 probably would risk the same\nproblem. So Postgres *promotes* the float4s to float8s, and has to do\na sequential scan along the way.\n\nAnyway, afaik you have two options. The first is to surround the\n\"3.14\" in your example with single quotes (probably the coersion to\nfloat4 is unnecessary). 
The second is to create a function index on\nyour table, to allow your queries to use float8 arguments\ntransparently:\n\n create index mx on mytable using btree (float8(x));\n create index my on mytable using btree (float8(y));\n\nIf you are hiding all of the queries inside an app, then I'd suggest\nthe first technique. If you are going to be doing a lot of direct SQL,\nthen you might want to use the second.\n\n - Tom\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Thu, 29 Apr 1999 16:29:13 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Help/advice/suggestions on query optimizer for a large\n\ttable" }, { "msg_contents": "\nOK, so here is the answer to this question. Thanks Thomas.\n\n\n\n> > Anyway, if I submit a query such as:\n> > select * from mytable where x=3.14 and y=6.28;\n> > it takes about 3 minutes to return the record. Both x and y are indexed:\n> > And \"explain\" on the select query above says it's doing a sequential scan.\n> > However if I say:\n> > select * from mytable where x='3.14'::float4 and y='6.28'::float4;\n> > it takes about 3 seconds! And now \"explain\" says it's doing an indexed\n> > scan.\n> > My understanding is that the query optimizer should pick the index\n> > scan for this query based on the cost.\n> \n> This is a known feature. The Postgres parser converts an unquoted 3.14\n> to a float8, which is not the same as the float4 column you have\n> indexed. And the optimizer is not (yet) bright enough to convert\n> constants to the column type, and then use the available indexes.\n> \n> In fact, the apparently more desirable strategy is not particularly\n> easy to get right. 
Look at this example:\n> \n> create table t1 (i int4);\n> (insert a bunch of data)\n> create index tx on t1 using btree(i);\n> vacuum;\n> select * from t1 where i < 3.5;\n> \n> In this case, we can't convert the 3.5 to an integer (3) without\n> changing the comparison operator to \"<=\". And in your case,\n> \"downconverting\" the float8 to a float4 probably would risk the same\n> problem. So Postgres *promotes* the float4s to float8s, and has to do\n> a sequential scan along the way.\n> \n> Anyway, afaik you have two options. The first is to surround the\n> \"3.14\" in your example with single quotes (probably the coersion to\n> float4 is unnecessary). The second is to create a function index on\n> your table, to allow your queries to use float8 arguments\n> transparently:\n> \n> create index mx on mytable using btree (float8(x));\n> create index my on mytable using btree (float8(y));\n> \n> If you are hiding all of the queries inside an app, then I'd suggest\n> the first technique. If you are going to be doing a lot of direct SQL,\n> then you might want to use the second.\n> \n> - Tom\n> \n> -- \n> Thomas Lockhart\t\t\t\[email protected]\n> South Pasadena, California\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 May 1999 12:40:40 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Help/advice/suggestions on query optimizer for a large\n\ttable" } ]
[ { "msg_contents": "Peter, I hope this long and boring message can shed some light on my\ndifficulties getting jdbc & postgres6.5b1(current snapshot) to work\nwith blobs. I have NO problem with text, numeric, etc. Just blobs &\nthe LO interface.\n\nI feel that it is a 64 vs 32 bit memory management problem.\n\nMy system:\n\nuname -a\nSunOS vlad 5.7 Generic_106541-03 sun4u sparc SUNW,Ultra-5_10\n\npostgreSQL:\n[PostgreSQL 6.5.0 on sparc-sun-solaris2.7, compiled by cc ]\n\ncc -V\ncc: WorkShop Compilers 5.0 98/12/15 C 5.0\n\nThe database & tables:\n\nmini_stores=> select * from item;\nitem_num|item_picture|item_descr\n|ship_unit |unit_price|stock\n--------+------------+--------------------------------------------------------------+----------+----------+-----\n\n 1| 18602|Maximum protection for high-mileage\nrunners |pair |$75.50 | 1000\n 2| 18617|Customize your mountain bike with extra-durable\ncrankset |each |$20.00 | 500\n 3| 18634|Long drive golf balls -fluorescent\nyellow |pack of 12|$50.00 | 200\n 4| 18652|Your first season's baseball\nglove |pair |$25.00 | 250\n 5| 18668|Minimum chin contact, feather-light, maximum\nprotection helmet|each |$35.50 | 50\n(5 rows)\n\n\nNow the java code that reads the item table and tries to display the\nimage & data. 
This is \"borrowed\" from an example from informix &\ntheir store7/mini_stores jdbc examples.\n-----BEGIN---\n/***************************************************************************\n\n *\n * Title: demo4.java\n *\n * Description: To use the fetch_buf_size connection parameter to\n * optimize the query and retrieve large byte data in\nthe\n * form of an image\n *\n * An example of running the program:\n *\n * java demo4 jdbc:postgresql:mini_stores <userid> <password>\n *\n ***************************************************************************\n\n*/\n\nimport java.awt.*;\nimport java.awt.image.*;\nimport java.awt.event.*;\nimport java.sql.*;\n\npublic class demo4 extends Frame {\n static String connURL; // the URL to be used for establishing the\nJDBC\n static String usr; // the user id to connect with.\n static String pwd; // the pasword of the user id.\n\n Connection conn; // the database connection\n ResultSet queryResults; // the ResultSet containing the rows\nreturned by\n\n // The GUI controls\n Label itemNumLabel;\n TextField itemNumField;\n Label itemDescrLabel;\n TextArea itemDescrArea;\n Label shipUnitLabel;\n TextField shipUnitField;\n Label unitPriceLabel;\n TextField unitPriceField;\n Label stockLabel;\n TextField stockField;\n Label itemPictureLabel;\n ImgCanvas itemPictureCanvas;\n Button nextRowBtn;\n\n\n public static void main(String args[]) {\n if (args.length != 0) {\n connURL = args[0];\n usr = args[1];\n pwd = args[2];\n demo4 thisDemo = new demo4();\n } else {\n System.out.println(\"Missing input!! 
Please specify the connection\nURL\");\n System.out.println(\"Usage...java demo4 <jdbc_conn_url>\");\n }\n }\n\n public demo4() {\n super();\n createControls();\n connectToDBServer();\n executeQuery();\n }\n\n private void createControls() {\n setLayout(null);\n setSize(700,500);\n setVisible(true);\n setTitle(\"Item Table Data\");\n\n // Adding label & text-field for item_num column\n itemNumLabel = new Label(\"Item Num\");\n itemNumLabel.setBounds(12,24,60,24);\n add(itemNumLabel);\n itemNumField = new TextField();\n itemNumField.setBounds(84,24,50,38);\n add(itemNumField);\n\n // Adding label and text-area for item_descr column\n itemDescrLabel = new Label(\"Item Description\");\n itemDescrLabel.setBounds(12,72,96,24);\n add(itemDescrLabel);\n itemDescrArea = new TextArea();\n itemDescrArea.setBounds(12,108,194,168);\n add(itemDescrArea);\n\n // Adding label & text-field for ship_unit column\n shipUnitLabel = new Label(\"Ship Unit\");\n shipUnitLabel.setBounds(24,288,60,38);\n add(shipUnitLabel);\n shipUnitField = new TextField();\n shipUnitField.setBounds(84,288,84,38);\n add(shipUnitField);\n\n // Adding label & text-field for unit_price column\n unitPriceLabel = new Label(\"Unit Price\");\n unitPriceLabel.setBounds(24,336,60,38);\n add(unitPriceLabel);\n unitPriceField = new TextField();\n unitPriceField.setBounds(84,336,84,38);\n add(unitPriceField);\n\n // Adding label & text-field for stock column\n stockLabel = new Label(\"Stock\");\n stockLabel.setBounds(36,384,36,38);\n add(stockLabel);\n stockField = new TextField();\n stockField.setBounds(84,384,84,38);\n add(stockField);\n\n // Adding label & ImgCanvas for item_picture column\n itemPictureLabel = new Label(\"Item Picture\");\n itemPictureLabel.setBounds(216,24,72,24);\n add(itemPictureLabel);\n itemPictureCanvas = new ImgCanvas();\n itemPictureCanvas.setBounds(216,60,480,432);\n add(itemPictureCanvas);\n\n // Adding Next Row button\n nextRowBtn = new Button(\"Next Row\");\n 
nextRowBtn.setBounds(60,432,84,48);\n add(nextRowBtn);\n\n // Adding actionListener for button action events\n nextRowBtn.addActionListener((ActionListener)\n new nextRowBtnActionListener());\n\n // Adding WindowListener for the main frame to process window events\n addWindowListener((WindowListener) new demo4WindowAdapter());\n }\n\n private void connectToDBServer() {\n try {\n String psqlDriver = \"postgresql.Driver\";\n\n // Register the POSTGRESQL-JDBC driver\n Driver PgsqlDrv = (Driver) Class.forName(psqlDriver).newInstance();\n\n // Get a connection to the database server\n conn = DriverManager.getConnection(connURL, usr, pwd);\n\n System.out.println(\"Driver loaded...and connection established\");\n } catch (Exception e) {\n System.out.println(\"Could not connect to database server....\");\n System.out.println(e.getMessage());\n }\n }\n\n private void executeQuery() {\n // The select statement to be used for querying the item table\n String selectStmt = \"SELECT * FROM item\";\n\n try {\n // Create a Statement object and use it to execute the query\n Statement stmt = conn.createStatement();\n queryResults = stmt.executeQuery(selectStmt);\n\n System.out.println(\"Query executed...\");\n } catch (Exception e) {\n System.out.println(\"Could not execute query....\");\n System.out.println(e.getMessage());\n }\n }\n\n\n // Private inner class extending java.awt.Canvas\n // Will be used for displaying the item_picture image\n private class ImgCanvas extends Canvas {\n Image myImage;\n\n private ImgCanvas() {\n super();\n myImage = null;\n }\n\n private void setImage(Image img) {\n myImage = img;\n }\n\n public void paint(Graphics g) {\n if(myImage == null)\n return;\n else\n g.drawImage(myImage, 0, 0, this);\n }\n }\n\n // Private inner class implementing the ActionListener interface\n // for the 'Next Row' button\n private class nextRowBtnActionListener implements ActionListener {\n // This is the code that will be executed whenever the 'Next Row'\n // button is 
pressed\n // Here, we get the values of all the columns of the current row\n // and display them\n public void actionPerformed(ActionEvent evt) {\n try {\n if (queryResults.next()) {\n // Getting the values of the item_num, ship_unit\n // unit_price, item_descr & stock columns and displaying them\n itemNumField.setText(queryResults.getString(1));\n itemDescrArea.setText(queryResults.getString(3));\n shipUnitField.setText(queryResults.getString(4));\n unitPriceField.setText(queryResults.getString(5));\n stockField.setText(queryResults.getString(6));\n\n // Getting the value of the item_picture column and\n // displaying it\n System.err.println(\"Got oid \"+queryResults.getInt(2));\n byte itemPictureArray [] = queryResults.getBytes(2);\n if (itemPictureArray != null) {\n Image img =\nToolkit.getDefaultToolkit().createImage(itemPictureArray);\n itemPictureCanvas.setImage(img);\n itemPictureCanvas.repaint();\n }\n } else {\n System.out.println(\"No more rows!!\");\n }\n } catch (Exception e) {\n System.out.println(\"Could not display next row...\");\n System.out.println(e.getMessage());\n }\n }\n }\n\n\n // Private inner class implementing the WindowAdapter interface\n // for the main frame window\n private class demo4WindowAdapter extends WindowAdapter {\n // This is the code that will be executed whenever the frame\n // window is closed using the button in its upper-right corner\n public void windowClosing(WindowEvent evt) {\n setVisible(false); // hide the main frame\n dispose(); // free the system resources\n System.exit(0); // close the application\n }\n }\n}\n------END----\n\nNow I start it & attach to the backend with gdb. After the frame is\nup, I can select the next row (the first). It works as it should.\nWhen I select next (for the second), then I get the following back\ntrace:\n\nvlad: ps -ef | grep po\n root 272 1 0 Apr 27 ? 
0:00 /usr/lib/power/powerd\n bpm 771 1 0 10:45:48 pts/2 0:00 /opt/pgsql/bin/postmaster\n-o -F -o /opt/pgsql/logs/backend.log -S 16384 -i -p 5\n bpm 796 771 1 10:53:49 pts/2 0:00 /opt/pgsql/bin/postmaster\n-o -F -o /opt/pgsql/logs/backend.log -S 16384 -i -p 5\n bpm 798 1153 0 10:53:53 pts/10 0:00 grep po\nvlad: gdb /opt/pgsql/bin/postmaster 796\nGNU gdb 4.17\nCopyright 1998 Free Software Foundation, Inc.\nGDB is free software, covered by the GNU General Public License, and you\nare\nwelcome to change it and/or distribute copies of it under certain\nconditions.\nType \"show copying\" to see the conditions.\nThere is absolutely no warranty for GDB. Type \"show warranty\" for\ndetails.\nThis GDB was configured as \"sparc-sun-solaris2.7\"...\n(no debugging symbols found)...\n\n/home/bpm/development/perl_ext/DBD-Pg-0.91/796: No such file or\ndirectory.\nAttaching to program `/opt/pgsql/bin/postmaster', process 796\nReading symbols from /usr/lib/libgen.so.1...(no debugging symbols\nfound)...\ndone.\nReading symbols from /usr/lib/libcrypt_i.so.1...(no debugging symbols\nfound)...\ndone.\nReading symbols from /usr/lib/libnsl.so.1...(no debugging symbols\nfound)...\ndone.\nReading symbols from /usr/lib/libsocket.so.1...(no debugging symbols\nfound)...\ndone.\nReading symbols from /usr/lib/libdl.so.1...(no debugging symbols\nfound)...done.\nReading symbols from /usr/lib/libm.so.1...(no debugging symbols\nfound)...done.\nReading symbols from /usr/lib/libcurses.so.1...(no debugging symbols\nfound)...\ndone.\nReading symbols from /usr/lib/libc.so.1...(no debugging symbols\nfound)...done.\nReading symbols from /usr/lib/libmp.so.2...(no debugging symbols\nfound)...done.\nReading symbols from /usr/platform/SUNW,Ultra-5_10/lib/libc_psr.so.1...\n---Type <return> to continue, or q <return> to quit---\n(no debugging symbols found)...done.\nReading symbols from /usr/lib/nss_files.so.1...(no debugging symbols\nfound)...\ndone.\nSymbols already loaded for /usr/lib/libgen.so.1\nSymbols 
already loaded for /usr/lib/libcrypt_i.so.1\nSymbols already loaded for /usr/lib/libnsl.so.1\nSymbols already loaded for /usr/lib/libsocket.so.1\nSymbols already loaded for /usr/lib/libdl.so.1\nSymbols already loaded for /usr/lib/libm.so.1\nSymbols already loaded for /usr/lib/libcurses.so.1\nSymbols already loaded for /usr/lib/libc.so.1\nSymbols already loaded for /usr/lib/libmp.so.2\nSymbols already loaded for\n/usr/platform/SUNW,Ultra-5_10/lib/libc_psr.so.1\nSymbols already loaded for /usr/lib/nss_files.so.1\n0xff19311c in _so_recv ()\n(gdb) bt\n#0 0xff19311c in _so_recv ()\n#1 0xbb4d4 in pq_recvbuf ()\n#2 0xbb710 in pq_getbytes ()\n#3 0x13e020 in SocketBackend ()\n#4 0x13e124 in ReadCommand ()\n#5 0x140434 in PostgresMain ()\n#6 0x11161c in DoBackend ()\n#7 0x110e78 in BackendStartup ()\n#8 0x10fd98 in ServerLoop ()\n#9 0x10f6e0 in PostmasterMain ()\n#10 0xbca58 in main ()\n(gdb) c\nContinuing.\n\nProgram received signal SIGBUS, Bus error.\n0x19f180 in AllocSetAlloc ()\n(gdb) bt\n#0 0x19f180 in AllocSetAlloc ()\n#1 0x19fa2c in GlobalMemoryAlloc ()\n#2 0x19f7b8 in MemoryContextAlloc ()\n#3 0x19fc58 in pstrdup ()\n#4 0x1331c4 in inv_open ()\n#5 0xb3e18 in lo_open ()\n#6 0x199e38 in fmgr_c ()\n#7 0x19a46c in fmgr ()\n#8 0x13dc44 in HandleFunctionRequest ()\n#9 0x14048c in PostgresMain ()\n#10 0x11161c in DoBackend ()\n#11 0x110e78 in BackendStartup ()\n#12 0x10fd98 in ServerLoop ()\n#13 0x10f6e0 in PostmasterMain ()\n#14 0xbca58 in main ()\n\n######## NEXT EXAMPLE ###############\n\nI stopped/started the postmaster as:\n/opt/pgsql/bin/postmaster -o \"-d 3 -F -o ${PGSQLHOME}/logs/backend.log\n-W 30 -S 16384\" \\\n -i -d 3 -p 5432 -D ${PGDATA} >\n${PGSQLHOME}/logs/error.log 2>&1 &\n\nNow I ran the examples/blobtest, attached to the backend & got the\nfollowing:\n\nvlad: ps -ef | grep po\n root 272 1 0 Apr 27 ? 
0:00 /usr/lib/power/powerd\n bpm 815 1 0 11:00:58 pts/2 0:00 /opt/pgsql/bin/postmaster\n-o -F -o /opt/pgsql/logs/backend.log -W 30 -S 16384 -\n bpm 830 815 0 11:01:37 pts/2 0:00 /opt/pgsql/bin/postmaster\n-o -F -o /opt/pgsql/logs/backend.log -W 30 -S 16384 -\n bpm 833 1153 0 11:01:45 pts/10 0:00 grep po\nvlad: gdb /opt/pgsql/bin/postmaster 830\nGNU gdb 4.17\nCopyright 1998 Free Software Foundation, Inc.\nGDB is free software, covered by the GNU General Public License, and you\nare\nwelcome to change it and/or distribute copies of it under certain\nconditions.\nType \"show copying\" to see the conditions.\nThere is absolutely no warranty for GDB. Type \"show warranty\" for\ndetails.\nThis GDB was configured as \"sparc-sun-solaris2.7\"...\n(no debugging symbols found)...\n\n/opt/pgsql/logs/830: No such file or directory.\nAttaching to program `/opt/pgsql/bin/postmaster', process 830\nReading symbols from /usr/lib/libgen.so.1...(no debugging symbols\nfound)...\ndone.\nReading symbols from /usr/lib/libcrypt_i.so.1...(no debugging symbols\nfound)...\ndone.\nReading symbols from /usr/lib/libnsl.so.1...(no debugging symbols\nfound)...\ndone.\nReading symbols from /usr/lib/libsocket.so.1...(no debugging symbols\nfound)...\ndone.\nReading symbols from /usr/lib/libdl.so.1...(no debugging symbols\nfound)...done.\nReading symbols from /usr/lib/libm.so.1...(no debugging symbols\nfound)...done.\nReading symbols from /usr/lib/libcurses.so.1...(no debugging symbols\nfound)...\ndone.\nReading symbols from /usr/lib/libc.so.1...(no debugging symbols\nfound)...done.\nReading symbols from /usr/lib/libmp.so.2...(no debugging symbols\nfound)...done.\nReading symbols from /usr/platform/SUNW,Ultra-5_10/lib/libc_psr.so.1...\n---Type <return> to continue, or q <return> to quit---\n(no debugging symbols found)...done.\nReading symbols from /usr/lib/nss_files.so.1...(no debugging symbols\nfound)...\ndone.\nSymbols already loaded for /usr/lib/libgen.so.1\nSymbols already loaded for 
/usr/lib/libcrypt_i.so.1\nSymbols already loaded for /usr/lib/libnsl.so.1\nSymbols already loaded for /usr/lib/libsocket.so.1\nSymbols already loaded for /usr/lib/libdl.so.1\nSymbols already loaded for /usr/lib/libm.so.1\nSymbols already loaded for /usr/lib/libcurses.so.1\nSymbols already loaded for /usr/lib/libc.so.1\nSymbols already loaded for /usr/lib/libmp.so.2\nSymbols already loaded for\n/usr/platform/SUNW,Ultra-5_10/lib/libc_psr.so.1\nSymbols already loaded for /usr/lib/nss_files.so.1\n0xff195d88 in _sigsuspend ()\n(gdb) c\nContinuing.\n\nProgram received signal SIGSEGV, Segmentation fault.\n0x19f180 in AllocSetAlloc ()\n(gdb) bt\n#0 0x19f180 in AllocSetAlloc ()\n#1 0x19fa2c in GlobalMemoryAlloc ()\n#2 0x19f7b8 in MemoryContextAlloc ()\n#3 0x5b69c in btrescan ()\n#4 0x199e60 in fmgr_c ()\n#5 0x19a46c in fmgr ()\n#6 0x4d3a0 in index_rescan ()\n#7 0x4cc70 in RelationGetIndexScan ()\n#8 0x5b4d0 in btbeginscan ()\n#9 0x199e8c in fmgr_c ()\n#10 0x19a46c in fmgr ()\n#11 0x4d2e4 in index_beginscan ()\n#12 0x133624 in inv_seek ()\n#13 0xb4070 in lo_lseek ()\n#14 0x199e60 in fmgr_c ()\n#15 0x19a46c in fmgr ()\n#16 0x13dc44 in HandleFunctionRequest ()\n#17 0x14048c in PostgresMain ()\n#18 0x11161c in DoBackend ()\n#19 0x110e78 in BackendStartup ()\n#20 0x10fd98 in ServerLoop ()\n#21 0x10f6e0 in PostmasterMain ()\n#22 0xbca58 in main ()\n\n\nI looked at src/backend/utils/mmgr/aset.c to see what was going on in\nAllocSetAlloc, but I am but a mere mortal.\n\nThanks.\n\n\n--\nBrian Millett\nEnterprise Consulting Group \"Heaven can not exist,\n(314) 205-9030 If the family is not eternal\"\[email protected] F. 
Ballard Washburn\n\n\n\n", "msg_date": "Thu, 29 Apr 1999 11:40:09 -0500", "msg_from": "Brian P Millett <[email protected]>", "msg_from_op": true, "msg_subject": "SIGBUS in AllocSetAlloc & jdbc" }, { "msg_contents": "On Thu, 29 Apr 1999, Brian P Millett wrote:\n\n> Peter, I hope this long and boring message can shed some light on my\n> difficulties getting jdbc & postgres6.5b1(current snapshot) to work\n> with blobs. I have NO problem with text, numeric, etc. Just blobs &\n> the LO interface.\n> \n> I feel that it is a 64 vs 32 bit memory management problem.\n\nAt first glance, the JDBC code looks ok. In fact it is similar to the\nImageViewer example that's included with the source.\n\nDo you get the same problem when using ImageViewer?\n\n> // Getting the value of the item_picture column and\n> // displaying it\n> System.err.println(\"Got oid \"+queryResults.getInt(2));\n> byte itemPictureArray [] = queryResults.getBytes(2);\n> if (itemPictureArray != null) {\n> Image img =\n> Toolkit.getDefaultToolkit().createImage(itemPictureArray);\n> itemPictureCanvas.setImage(img);\n> itemPictureCanvas.repaint();\n> }\n\nUsing an array is perfectly valid, and the driver does support this method\nof getting an Image from a BLOB.\n\nTip: The JDBC spec says you should only read a field once. 
However, with\nour driver, this is ok.\n\nI'm not sure about the backend, but the JDBC side looks fine.\n\nPeter\n\n-- \n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n", "msg_date": "Thu, 29 Apr 1999 18:44:19 +0100 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SIGBUS in AllocSetAlloc & jdbc" }, { "msg_contents": "Peter T Mount wrote:\n\n> On Thu, 29 Apr 1999, Brian P Millett wrote:\n>\n> > Peter, I hope this long and boring message can shed some light on my\n> > difficulties getting jdbc & postgres6.5b1(current snapshot) to work\n> > with blobs. I have NO problem with text, numeric, etc. Just blobs &\n> > the LO interface.\n> >\n> > I feel that it is a 64 vs 32 bit memory management problem.\n>\n> At first glance, the JDBC code looks ok. In fact it is similar to the\n> ImageViewer example that's included with the source.\n>\n> Do you get the same problem when using ImageViewer?\n\nNo, and yes.\n\nThe ImageViewer will give an error that the backend has lost connection after\nimporting the second image. The first goes in just fine, it is the second\nthat causes the problem. If I stop & restart the ImageViewer, then I can\nimport a second image. BUT I can only view one. When I try to view the\nsecond, I get an error. The back trace of that error is at the bottom. It\nIS the same as the backtrace for the example I sent you. Again, the first\ntime I run ImageViewer, I can import only one image. When I import a second\nimage, I get the backtrace (1). After I stop & start the ImageViewer, there\nis one image in the database. I can import a new image, but when I try to\nview the first image, I get backtrace (2).\n\nBACKTRACE (1):\nhere is the bt:\nvlad: ps -ef | grep po\n root 272 1 0 Apr 27 ? 
0:00 /usr/lib/power/powerd\n bpm 815 1 0 11:00:58 pts/2 0:00 /opt/pgsql/bin/postmaster -o\n-F -o /opt/pgsql/logs/backend.log -W 30 -S 16384 -\n bpm 965 1153 0 13:41:54 pts/10 0:00 grep po\n bpm 961 815 0 13:41:49 pts/2 0:00 /opt/pgsql/bin/postmaster -o\n-F -o /opt/pgsql/logs/backend.log -W 30 -S 16384 -\nvlad: gdb /opt/pgsql/bin/postmaster 961\nGNU gdb 4.17\nCopyright 1998 Free Software Foundation, Inc.\nGDB is free software, covered by the GNU General Public License, and you are\nwelcome to change it and/or distribute copies of it under certain conditions.\n\nType \"show copying\" to see the conditions.\nThere is absolutely no warranty for GDB. Type \"show warranty\" for details.\nThis GDB was configured as \"sparc-sun-solaris2.7\"...\n(no debugging symbols found)...\n\n/home/bpm/compile_area/cvs_pgsql/pgsql.snapshot/961: No such file or\ndirectory.\nAttaching to program `/opt/pgsql/bin/postmaster', process 961\nReading symbols from /usr/lib/libgen.so.1...(no debugging symbols found)...\ndone.\nReading symbols from /usr/lib/libcrypt_i.so.1...(no debugging symbols\nfound)...\ndone.\nReading symbols from /usr/lib/libnsl.so.1...(no debugging symbols found)...\ndone.\nReading symbols from /usr/lib/libsocket.so.1...(no debugging symbols\nfound)...\ndone.\nReading symbols from /usr/lib/libdl.so.1...(no debugging symbols\nfound)...done.\nReading symbols from /usr/lib/libm.so.1...(no debugging symbols\nfound)...done.\nReading symbols from /usr/lib/libcurses.so.1...(no debugging symbols\nfound)...\ndone.\nReading symbols from /usr/lib/libc.so.1...(no debugging symbols\nfound)...done.\nReading symbols from /usr/lib/libmp.so.2...(no debugging symbols\nfound)...done.\nReading symbols from /usr/platform/SUNW,Ultra-5_10/lib/libc_psr.so.1...\n---Type <return> to continue, or q <return> to quit---\n(no debugging symbols found)...done.\nReading symbols from /usr/lib/nss_files.so.1...(no debugging symbols\nfound)...\ndone.\nSymbols already loaded for /usr/lib/libgen.so.1\nSymbols 
already loaded for /usr/lib/libcrypt_i.so.1\nSymbols already loaded for /usr/lib/libnsl.so.1\nSymbols already loaded for /usr/lib/libsocket.so.1\nSymbols already loaded for /usr/lib/libdl.so.1\nSymbols already loaded for /usr/lib/libm.so.1\nSymbols already loaded for /usr/lib/libcurses.so.1\nSymbols already loaded for /usr/lib/libc.so.1\nSymbols already loaded for /usr/lib/libmp.so.2\nSymbols already loaded for /usr/platform/SUNW,Ultra-5_10/lib/libc_psr.so.1\nSymbols already loaded for /usr/lib/nss_files.so.1\n0xff195d88 in _sigsuspend ()\n(gdb) c\nContinuing.\n\nProgram received signal SIGBUS, Bus error.\n0xff145d70 in t_delete ()\n(gdb) bt\n#0 0xff145d70 in t_delete ()\n#1 0xff145490 in _malloc_unlocked ()\n#2 0xff145314 in malloc ()\n#3 0x19f3a8 in AllocSetAlloc ()\n#4 0x19fe48 in PortalHeapMemoryAlloc ()\n#5 0x19f7b8 in MemoryContextAlloc ()\n#6 0xb28c0 in initStringInfo ()\n#7 0x13d4cc in SendFunctionResult ()\n#8 0x13dcd4 in HandleFunctionRequest ()\n#9 0x14048c in PostgresMain ()\n#10 0x11161c in DoBackend ()\n#11 0x110e78 in BackendStartup ()\n#12 0x10fd98 in ServerLoop ()\n#13 0x10f6e0 in PostmasterMain ()\n#14 0xbca58 in main ()\n\n\nBACKTRACE (2):\nvlad: ps -ef | grep po\n root 272 1 0 Apr 27 ? 0:00 /usr/lib/power/powerd\n bpm 815 1 0 11:00:58 pts/2 0:00 /opt/pgsql/bin/postmaster -o\n-F -o /opt/pgsql/logs/backend.log -W 30 -S 16384 -\n bpm 983 1153 0 14:08:13 pts/10 0:00 grep po\n bpm 981 815 0 14:08:09 pts/2 0:00 /opt/pgsql/bin/postmaster -o\n-F -o /opt/pgsql/logs/backend.log -W 30 -S 16384 -\nvlad: gdb /opt/pgsql/bin/postmaster 981\nGNU gdb 4.17\nCopyright 1998 Free Software Foundation, Inc.\nGDB is free software, covered by the GNU General Public License, and you are\nwelcome to change it and/or distribute copies of it under certain conditions.\n\nType \"show copying\" to see the conditions.\nThere is absolutely no warranty for GDB. 
Type \"show warranty\" for details.\nThis GDB was configured as \"sparc-sun-solaris2.7\"...\n(no debugging symbols found)...\n\n/home/bpm/compile_area/cvs_pgsql/pgsql.snapshot/981: No such file or\ndirectory.\nAttaching to program `/opt/pgsql/bin/postmaster', process 981\nReading symbols from /usr/lib/libgen.so.1...(no debugging symbols found)...\ndone.\nReading symbols from /usr/lib/libcrypt_i.so.1...(no debugging symbols\nfound)...\ndone.\nReading symbols from /usr/lib/libnsl.so.1...(no debugging symbols found)...\ndone.\nReading symbols from /usr/lib/libsocket.so.1...(no debugging symbols\nfound)...\ndone.\nReading symbols from /usr/lib/libdl.so.1...(no debugging symbols\nfound)...done.\nReading symbols from /usr/lib/libm.so.1...(no debugging symbols\nfound)...done.\nReading symbols from /usr/lib/libcurses.so.1...(no debugging symbols\nfound)...\ndone.\nReading symbols from /usr/lib/libc.so.1...(no debugging symbols\nfound)...done.\nReading symbols from /usr/lib/libmp.so.2...(no debugging symbols\nfound)...done.\nReading symbols from /usr/platform/SUNW,Ultra-5_10/lib/libc_psr.so.1...\n---Type <return> to continue, or q <return> to quit---\n(no debugging symbols found)...done.\nReading symbols from /usr/lib/nss_files.so.1...(no debugging symbols\nfound)...\ndone.\nSymbols already loaded for /usr/lib/libgen.so.1\nSymbols already loaded for /usr/lib/libcrypt_i.so.1\nSymbols already loaded for /usr/lib/libnsl.so.1\nSymbols already loaded for /usr/lib/libsocket.so.1\nSymbols already loaded for /usr/lib/libdl.so.1\nSymbols already loaded for /usr/lib/libm.so.1\nSymbols already loaded for /usr/lib/libcurses.so.1\nSymbols already loaded for /usr/lib/libc.so.1\nSymbols already loaded for /usr/lib/libmp.so.2\nSymbols already loaded for /usr/platform/SUNW,Ultra-5_10/lib/libc_psr.so.1\nSymbols already loaded for /usr/lib/nss_files.so.1\n0xff195d88 in _sigsuspend ()\n(gdb) c\nContinuing.\n\nProgram received signal SIGBUS, Bus error.\n0x19f180 in AllocSetAlloc ()\n(gdb) 
bt\n#0 0x19f180 in AllocSetAlloc ()\n#1 0x19fa2c in GlobalMemoryAlloc ()\n#2 0x19f7b8 in MemoryContextAlloc ()\n#3 0x4cc44 in RelationGetIndexScan ()\n#4 0x5b4d0 in btbeginscan ()\n#5 0x199e8c in fmgr_c ()\n#6 0x19a46c in fmgr ()\n#7 0x4d2e4 in index_beginscan ()\n#8 0x133624 in inv_seek ()\n#9 0x133514 in inv_seek ()\n#10 0xb4070 in lo_lseek ()\n#11 0x199e60 in fmgr_c ()\n#12 0x19a46c in fmgr ()\n#13 0x13dc44 in HandleFunctionRequest ()\n#14 0x14048c in PostgresMain ()\n#15 0x11161c in DoBackend ()\n#16 0x110e78 in BackendStartup ()\n#17 0x10fd98 in ServerLoop ()\n#18 0x10f6e0 in PostmasterMain ()\n#19 0xbca58 in main ()\n\n\n--\nBrian Millett\nEnterprise Consulting Group \"Heaven can not exist,\n(314) 205-9030 If the family is not eternal\"\[email protected] F. Ballard Washburn\n\n\n\n", "msg_date": "Thu, 29 Apr 1999 14:14:49 -0500", "msg_from": "Brian P Millett <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SIGBUS in AllocSetAlloc & jdbc" }, { "msg_contents": "> On Thu, 29 Apr 1999, Brian P Millett wrote:\n> \n> > Peter, I hope this long and boring message can shed some light on my\n> > difficulties getting jdbc & postgres6.5b1(current snapshot) to work\n> > with blobs. I have NO problem with text, numeric, etc. Just blobs &\n> > the LO interface.\n> > \n> > I feel that it is a 64 vs 32 bit memory management problem.\n> \n> At first glance, the JDBC code looks ok. 
In fact it is similar to the\n> ImageViewer example that's included with the source.\n> \n> Do you get the same problem when using ImageViewer?\n> \n> > // Getting the value of the item_picture column and\n> > // displaying it\n> > System.err.println(\"Got oid \"+queryResults.getInt(2));\n> > byte itemPictureArray [] = queryResults.getBytes(2);\n> > if (itemPictureArray != null) {\n> > Image img =\n> > Toolkit.getDefaultToolkit().createImage(itemPictureArray);\n> > itemPictureCanvas.setImage(img);\n> > itemPictureCanvas.repaint();\n> > }\n> \n> Using an array is perfectly valid, and the driver does support this method\n> of getting an Image from a BLOB.\n\nThis morning I started to look into this. First, the JDBC driver coming\nwith 6.5b did not compile. The reason was that my JDK (JDK 1.1.7 v1 on\nLinuxPPC) returns its version string as \"root:10/14/98-13:50\" and\nmakeVersion expected it to start with \"1.1\". This was easy to fix. So I\nwent on and tried the ImageViewer sample. It gave me an SQL exception:\n\njava example.ImageViewer jdbc:postgresql:test t-ishii \"\"\nConnecting to Database URL = jdbc:postgresql:test\nException caught.\njava.sql.SQLException: The postgresql.jar file does not contain the correct JDBC classes for this JVM. Try rebuilding.\nException thrown was java.lang.ClassNotFoundException: postgresql.jdbc2.Connection\njava.sql.SQLException: The postgresql.jar file does not contain the correct JDBC classes for this JVM. Try rebuilding.\nException thrown was java.lang.ClassNotFoundException: postgresql.jdbc2.Connection\n\tat postgresql.Driver.connect(Compiled Code)\n\tat java.sql.DriverManager.getConnection(Compiled Code)\n\tat java.sql.DriverManager.getConnection(Compiled Code)\n\tat example.ImageViewer.<init>(Compiled Code)\n\tat example.ImageViewer.main(Compiled Code)\n\nI had no idea how to fix this. I gave up on the 6.5 JDBC driver. I\nhad to go back to the 6.4 JDBC driver. This time ImageViewer seemed\nto work. I imported 3 images. Worked fine. 
Then I tried to take a\nglance at the first image. I got SIGSEGV on the backend! It happened in\nAllocSetAlloc () and seems in the same place as Brian P Millett\nmentioned. Next I switched the backend to 6.4.2(+ large object fixes\nbasically same as 6.5). Worked great!\n\nIn summary:\n\n(1) 6.5 ImageViewer + 6.4.2 JDBC driver + 6.5 backend failed\n(2) 6.5 ImageViewer + 6.4.2 JDBC driver + 6.4.2 backend worked\n\nSo I suspect there is something wrong with the 6.5 backend. I'll look\ninto this more.\n\nP.S. Peter, do you have any suggestion to make JDBC driver under JDK\n1.1.7?\n---\nTatsuo Ishii\n", "msg_date": "Sun, 02 May 1999 21:51:20 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: SIGBUS in AllocSetAlloc & jdbc " }, { "msg_contents": "> This morning I started to look into this. First, JDBC driver coming\n> with 6.5b did not compile. The reason was my JDK (JDK 1.1.7 v1 on\n> LinuxPPC) returns version string as \"root:10/14/98-13:50\" and\n> makeVersion expected it started with \"1.1\". This was easy to fix. So I\n> went on and tried the ImageViewer sample. It gave me SQL an exception:\n> \n> java example.ImageViewer jdbc:postgresql:test t-ishii \"\"\n> Connecting to Database URL = jdbc:postgresql:test\n> Exception caught.\n> java.sql.SQLException: The postgresql.jar file does not contain the correct JDBC classes for this JVM. Try rebuilding.\n> Exception thrown was java.lang.ClassNotFoundException: postgresql.jdbc2.Connection\n> java.sql.SQLException: The postgresql.jar file does not contain the correct JDBC classes for this JVM. 
Try rebuilding.\n> Exception thrown was java.lang.ClassNotFoundException: postgresql.jdbc2.Connection\n> \tat postgresql.Driver.connect(Compiled Code)\n> \tat java.sql.DriverManager.getConnection(Compiled Code)\n> \tat java.sql.DriverManager.getConnection(Compiled Code)\n> \tat example.ImageViewer.<init>(Compiled Code)\n> \tat example.ImageViewer.main(Compiled Code)\n> \n> I had no idea how to fix this. I gave up to use the 6.5 JDBC driver. I\n> had to get back to the 6.4 JDBC driver. This time ImageViewer seemed\n> to work. I imported 3 images. Worked fine. Then I tried to take a\n> glance at the first image. I got SIGSEGV on the backend! It happened in\n> AllocSetAlloc () and seems in the same place as Brian P Millett\n> mentioned. Next I switched the backend to 6.4.2(+ large object fixes\n> basically same as 6.5). Worked great!\n> \n> In summary:\n> \n> (1) 6.5 ImageViewer + 6.4.2 JDBC driver + 6.5 backend failed\n> (2) 6.5 ImageViewer + 6.4.2 JDBC driver + 6.4.2 backend worked\n> \n> So I suspect there is something wrong with the 6.5 backend. I'll look\n> into this more.\n> \n> P.S. Peter, do you have any suggestion to make JDBC driver under JDK\n> 1.1.7?\n\nSo far I haven't found anything special in the backend. Going\nback to the ImageViewer, I think I found a possible problem with it. In\nmy understanding, every lo call should be in a single transaction block.\nBut ImageViewer does not seem to issue any \"begin\" or \"end\" SQL commands.\nI made a small modification (see patch below) to the ImageViewer, and\nnow it starts to work again with the 6.5 backend! 
I suspect that\ndifference of palloc() code between 6.4.2 and 6.5 made the problem\nopened up.\n---\nTatsuo Ishii\n\n*** ImageViewer.java~\tMon Oct 12 11:45:45 1998\n--- ImageViewer.java\tSun May 2 23:16:27 1999\n***************\n*** 390,395 ****\n--- 390,396 ----\n {\n try {\n System.out.println(\"Selecting oid for \"+name);\n+ stat.executeUpdate(\"begin\");\n ResultSet rs = stat.executeQuery(\"select imgoid from images where imgname='\"+name+\"'\");\n if(rs!=null) {\n \t// Even though there should only be one image, we still have to\n***************\n*** 402,407 ****\n--- 403,409 ----\n \t}\n }\n rs.close();\n+ stat.executeUpdate(\"end\");\n } catch(SQLException ex) {\n label.setText(ex.toString());\n }\n", "msg_date": "Sun, 02 May 1999 23:52:54 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: SIGBUS in AllocSetAlloc & jdbc " }, { "msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> So far I couldn't find nothing special with the backend by now. Going\n> back to the ImageViewer, I think I found possible problem with it. In\n> my understanding, every lo call should be in single transaction block. \n> But ImageViwer seems does not give any \"begin\" or \"end\" SQL commands.\n> I made a small modifications(see below patches) to the ImageViewer and\n> now it starts to work again with 6.5 backend!\n\nHmm. The documentation does say somewhere that LO object handles are\nonly good within a transaction ... 
so it's amazing this worked reliably\nunder 6.4.x.\n\nIs there any way we could improve the backend's LO functions to defend\nagainst this sort of misuse, rather than blindly accepting a stale\nfilehandle?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 02 May 1999 13:12:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: SIGBUS in AllocSetAlloc & jdbc " }, { "msg_contents": "[ I'm cc'ing this to java-linux as this seems to be a problem with the\nLinux PPC port - peter ]\n\nOn Sun, 2 May 1999, Tatsuo Ishii wrote:\n\n[snip]\n\n> This morning I started to look into this. First, JDBC driver coming\n> with 6.5b did not compile. The reason was my JDK (JDK 1.1.7 v1 on\n> LinuxPPC) returns version string as \"root:10/14/98-13:50\" and\n> makeVersion expected it started with \"1.1\". This was easy to fix. So I\n> went on and tried the ImageViewer sample. It gave me SQL an exception:\n\n[snip]\n\n> P.S. Peter, do you have any suggestion to make JDBC driver under JDK\n> 1.1.7?\n\nAh, the first problem I've seen with the JVM version detection. The\npostgresql.Driver class does the same thing as makeVersion: it checks the\nversion string, and when it sees that it starts with 1.1 it sets the base\npackage to postgresql.j1, otherwise it sets it to postgresql.j2.\n\nThe exceptions you are seeing are the JVM complaining that it cannot find the\nJDK1.2 classes.\n\nAs for how to fix this, it is tricky. It seems that the version string isn't\nthat helpful. The JDK documentation says it returns the version of the\nJVM, but there seems to be no set format for this. ie, with your version,\nit seems to give the date and time that the VM was built.\n\nJava-Linux: Is there a way to ensure that the version string is similar to\nthe ones that Sun produces? At least having the JVM version first, then\nreform after that?\n\nThe PostgreSQL JDBC driver is developed and tested under Linux (intel)\nusing 1.1.6 and 1.2b1 JVM's (both blackdown). 
I use Sun's Win32 1.2 JVM\nfor testing. The current driver works fine on all three JVM's, so it seems\nto be the PPC port that has this problem.\n\nPeter\n\n-- \n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n", "msg_date": "Mon, 3 May 1999 11:57:12 +0100 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: SIGBUS in AllocSetAlloc & jdbc " }, { "msg_contents": "On Sun, 2 May 1999, Tatsuo Ishii wrote:\n\n> So far I couldn't find nothing special with the backend by now. Going\n> back to the ImageViewer, I think I found possible problem with it. In\n> my understanding, every lo call should be in single transaction block. \n> But ImageViwer seems does not give any \"begin\" or \"end\" SQL commands.\n> I made a small modifications(see below patches) to the ImageViewer and\n> now it starts to work again with 6.5 backend! I suspect that\n> difference of palloc() code between 6.4.2 and 6.5 made the problem\n> opened up.\n\nThis is true, and is probably why it was opened up.\n\nHowever, with JDBC, you should never issue begin and end statements, as\nthis can cause the driver to become confused.\n\nThe reason is autoCommit. In jdbc, to start a transaction, you should call\nthe setAutoCommit() method in Connection to say if you want autocommit\n(true), or transactions (false). 
The standard default is true.\n\n> ---\n> Tatsuo Ishii\n> \n> *** ImageViewer.java~\tMon Oct 12 11:45:45 1998\n> --- ImageViewer.java\tSun May 2 23:16:27 1999\n> ***************\n> *** 390,395 ****\n> --- 390,396 ----\n> {\n> try {\n> System.out.println(\"Selecting oid for \"+name);\n> + stat.executeUpdate(\"begin\");\n> ResultSet rs = stat.executeQuery(\"select imgoid from images where imgname='\"+name+\"'\");\n> if(rs!=null) {\n> \t// Even though there should only be one image, we still have to\n> ***************\n> *** 402,407 ****\n> --- 403,409 ----\n> \t}\n> }\n> rs.close();\n> + stat.executeUpdate(\"end\");\n> } catch(SQLException ex) {\n> label.setText(ex.toString());\n> }\n> \n\n-- \n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n", "msg_date": "Mon, 3 May 1999 12:01:40 +0100 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: SIGBUS in AllocSetAlloc & jdbc " }, { "msg_contents": "> Hmm. The documentation does say somewhere that LO object handles are\n> only good within a transaction ... so it's amazing this worked reliably\n> under 6.4.x.\n> \n> Is there any way we could improve the backend's LO functions to defend\n> against this sort of misuse, rather than blindly accepting a stale\n> filehandle?\n\nIt should not be very difficult. We could explicitly close LO\nfilehandles on commits.\n\nBut I'm now not confident on this. 
From comments in be-fsstubs.c:\n\n>Builtin functions for open/close/read/write operations on large objects.\n>These functions operate in the current portal variable context, which\n>means the large object descriptors hang around between transactions and\n>are not deallocated until explicitly closed, or until the portal is\n>closed.\n\nIf above is true, LO filehandles should be able to survive between\ntransactions.\n\nFollowing data are included in them. My question is: Can these data\nsurvive between transactions? I guess not.\n\ntypedef struct LargeObjectDesc\n{\n\tRelation\theap_r;\t\t/* heap relation */\n\tRelation\tindex_r;\t/* index relation on seqno attribute */\n\tIndexScanDesc iscan;\t\t/* index scan we're using */\n\tTupleDesc\thdesc;\t\t/* heap relation tuple desc */\n\tTupleDesc\tidesc;\t\t/* index relation tuple desc */\n\n\t[snip]\n", "msg_date": "Mon, 03 May 1999 23:08:54 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: SIGBUS in AllocSetAlloc & jdbc " }, { "msg_contents": "> > Hmm. The documentation does say somewhere that LO object handles are\n> > only good within a transaction ... so it's amazing this worked reliably\n> > under 6.4.x.\n> > \n> > Is there any way we could improve the backend's LO functions to defend\n> > against this sort of misuse, rather than blindly accepting a stale\n> > filehandle?\n> \n> It should not be very difficult. We could explicitly close LO\n> filehandles on commits.\n> \n> But I'm now not confident on this. 
From comments in be-fsstubs.c:\n> \n> >Builtin functions for open/close/read/write operations on large objects.\n> >These functions operate in the current portal variable context, which\n> >means the large object descriptors hang around between transactions and\n> >are not deallocated until explicitly closed, or until the portal is\n> >closed.\n> \n> If above is true, LO filehandles should be able to survive between\n> transactions.\n> \n> Following data are included in them. My question is: Can these data\n> survive between transactions? I guess not.\n> \n> typedef struct LargeObjectDesc\n> {\n> \tRelation\theap_r;\t\t/* heap relation */\n> \tRelation\tindex_r;\t/* index relation on seqno attribute */\n> \tIndexScanDesc iscan;\t\t/* index scan we're using */\n> \tTupleDesc\thdesc;\t\t/* heap relation tuple desc */\n> \tTupleDesc\tidesc;\t\t/* index relation tuple desc */\n> \n> \t[snip]\n\nThe answer was yes. Since these data are allocated in\nGlobalMemoryContext, they could survive after commit. So next question\nis why the backend crashes if LO calls are not in a transaction?\n\nAt the commit time _lo_commit() is called under GlobalMemoryContext to\ndestroy IndexScanDesc. So it seems the IndexScanDesc is supposed to be\ncreated under GlobalMemoryContext. But actually lo_read/lo_write,\nthey might create IndexScanDesc also, may not be called under the\ncontext since no context switching is made with them(I don't know why\nsince other LO calls make context switching). As a result it's\npossible that IndexScanDesc might be freed under a wrong context. Too\nbad.\n\nThis would not happen if lo_seek is done (it's executed under\nGlobalMemoryContext) then lo_read/lo_write gets called(they reuse\nIndexScanDesc created by inv_seek) *AND* no committing is done before\nlo_read/lo_write. 
This is why we do not observe the backend crash with\nImageViewer running in a transaction.\n\nBut I must say other apps may not be as lucky as ImageViewer since\nit's not always the case that lo_seek is called prior to\nlo_read/lo_write.\n\n[ BTW, ImageViewer seems to make calls to following set of LOs *twice*\nto display an image. Why?\n\nlo_open\nlo_tell\nlo_lseek\nlo_lseek\nlo_read\nlo_close\n]\n\nPossible solutions might be:\n\n(1) do a context switching in lo_read/lo_write\n\n(2) ask apps not to make LO calls between transactions\n\n(3) close LOs fd at commit\n\n(2) is the current situation but not very confortable for us. Also for\na certain app this is not a solution as I mentioned above. (3) seems\nreasonable but people might be surprised to find their existing apps\nwon't run any more. Moreover, changings might not be trivial and it\nmake me nervous since we don't have enough time before 6.5 is\nout. With (1) modifications would be minimum, and we can keep the\nbackward compatibility for apps. So my conclusion is that (1) is the\nbest. If there's no objection, I will commit the change for (1).\n---\nTatsuo Ishii\n\n", "msg_date": "Sun, 09 May 1999 12:12:18 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: SIGBUS in AllocSetAlloc & jdbc " }, { "msg_contents": "> The answer was yes. Since these data are allocated in\n> GlobalMemoryContext, they could survive after commit. So next question\n> is why the backend crashes if LO calls are not in a transaction?\n> \n> At the commit time _lo_commit() is called under GlobalMemoryContext to\n> destroy IndexScanDesc. So it seems the IndexScanDesc is supposed to be\n> created under GlobalMemoryContext. But actually lo_read/lo_write,\n> they might create IndexScanDesc also, may not be called under the\n> context since no context switching is made with them(I don't know why\n> since other LO calls make context switching). 
As a result it's\n> possible that IndexScanDesc might be freed under a wrong context. Too\n> bad.\n> \n> This would not happen if lo_seek is done (it's executed under\n> GlobalMemoryContext) then lo_read/lo_write gets called(they reuse\n> IndexScanDesc created by inv_seek) *AND* no committing is done before\n> lo_read/lo_write. This is why we do not observe the backend crash with\n> ImageViewer running in a transaction.\n> \n> But I must say other apps may not be as lucky as ImageViewer since\n> it's not always the case that lo_seek is called prior to\n> lo_read/lo_write.\n> \n> [ BTW, ImageViewer seems to make calls to following set of LOs *twice*\n> to display an image. Why?\n> \n> lo_open\n> lo_tell\n> lo_lseek\n> lo_lseek\n> lo_read\n> lo_close\n> ]\n> \n> Possible solutions might be:\n> \n> (1) do a context switching in lo_read/lo_write\n> \n> (2) ask apps not to make LO calls between transactions\n> \n> (3) close LOs fd at commit\n> \n> (2) is the current situation but not very confortable for us. Also for\n> a certain app this is not a solution as I mentioned above. (3) seems\n> reasonable but people might be surprised to find their existing apps\n> won't run any more. Moreover, changings might not be trivial and it\n> make me nervous since we don't have enough time before 6.5 is\n> out. With (1) modifications would be minimum, and we can keep the\n> backward compatibility for apps. So my conclusion is that (1) is the\n> best. If there's no objection, I will commit the change for (1).\n\nThis is all clearly part of the large object mess that we were in before\nyou fixed large objects.\n\nDo whatever you think is best for the long term. Large objects were so\nterribly fragile in previous releases, we might as well advertise them\nagain as now working 100% of the time. If the new behaviour changes\nexisting apps, that is OK if the new behaviour is superior. 
They will\nbe thrilled to change their apps if the large object system now works\nbetter.\n\nIt would be different if the old system worked properly.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 9 May 1999 00:04:17 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: SIGBUS in AllocSetAlloc & jdbc" }, { "msg_contents": "> This is all clearly part of the large object mess that we were in before\n> you fixed large objects.\n> \n> Do whatever you think is best for the long term. Large objects were so\n> terribly fragile in previous releases, we might as well advertise them\n> again as now working 100% of the time. If the new behaviour changes\n> existing apps, that is OK if the new behaviour is superior. They will\n> be thrilled to change their apps if the large object system now works\n> better.\n> \n> It would be different if the old system worked properly.\n\nI have changed lo_read/lo_write so that they run under same memory\ncontext as other LO calls.\n---\nTatsuo Ishii\n", "msg_date": "Sun, 09 May 1999 23:56:26 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: SIGBUS in AllocSetAlloc & jdbc " }, { "msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> At the commit time _lo_commit() is called under GlobalMemoryContext to\n> destroy IndexScanDesc. So it seems the IndexScanDesc is supposed to be\n> created under GlobalMemoryContext. 
But actually lo_read/lo_write,\n> they might create IndexScanDesc also, may not be called under the\n> context since no context switching is made with them(I don't know why\n> since other LO calls make context switching).\n\nI was noticing that yesterday (I modified be-fsstubs.c to use properly\nallocated filehandles in lo_import and lo_export, and added extra\nchecking for a valid LO handle --- the old code would coredump if\nhanded a -1 handle, which is not too cool considering that's what it\nwould get if an app didn't bother to check for lo_open failure...).\nIt seemed odd that lo_read and lo_write didn't switch contexts like the\nother entry points did. But I didn't know enough to risk changing it.\n\n> Possible solutions might be:\n> (1) do a context switching in lo_read/lo_write\n\nI agree with this, but I worry a little bit about memory leakage,\nbecause anything allocated in GlobalMemoryContext is not going to get\ncleaned up automatically. If lo_read/lo_write call any code that is\nsloppy about pfree'ing everything it palloc's, then you'd have a\nlong-term leakage that would eventually make the backend run out of\nmemory. But it'd be easy enough to test for that, if you have a test\napp that can run the backend through a lot of LO calls.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 09 May 1999 11:53:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: SIGBUS in AllocSetAlloc & jdbc " }, { "msg_contents": "> > Possible solutions might be:\n> > (1) do a context switching in lo_read/lo_write\n> \n> I agree with this, but I worry a little bit about memory leakage,\n> because anything allocated in GlobalMemoryContext is not going to get\n> cleaned up automatically. If lo_read/lo_write call any code that is\n> sloppy about pfree'ing everything it palloc's, then you'd have a\n> long-term leakage that would eventually make the backend run out of\n> memory. 
But it'd be easy enough to test for that, if you have a test\n> app that can run the backend through a lot of LO calls.\n\nYes, I thought about that too. Maybe we could destroy the\nGlobalMemoryContext in lo_close() if no LO descriptor exists any more.\nOf course this would not prevent the memory leakage if a user forget\nto call lo_close, but it's of the user's responsibility anyway.\n---\nTatsuo Ishii\n", "msg_date": "Mon, 10 May 1999 09:58:06 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: SIGBUS in AllocSetAlloc & jdbc " }, { "msg_contents": "Tatsuo Ishii wrote:\n\n> > This is all clearly part of the large object mess that we were in before\n> > you fixed large objects.\n> >\n> > Do whatever you think is best for the long term. Large objects were so\n> > terribly fragile in previous releases, we might as well advertise them\n> > again as now working 100% of the time. If the new behaviour changes\n> > existing apps, that is OK if the new behaviour is superior. They will\n> > be thrilled to change their apps if the large object system now works\n> > better.\n> >\n> > It would be different if the old system worked properly.\n>\n> I have changed lo_read/lo_write so that they run under same memory\n> context as other LO calls.\n\nWell, for the first time (for me) the blobtest.java now works correctly.\n\nThat would be Solaris 2.7, cvs checkout 19990511, jdk1.2.1_02.\n\nThank you all for the great work.\n\n--\nBrian Millett\nEnterprise Consulting Group \"Heaven can not exist,\n(314) 205-9030 If the family is not eternal\"\[email protected] F. Ballard Washburn\n\n\n\n", "msg_date": "Tue, 11 May 1999 13:03:11 -0500", "msg_from": "Brian P Millett <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: SIGBUS in AllocSetAlloc & jdbc" } ]
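The failure mode worked out in this thread, a descriptor allocated in the wrong memory context and then freed at commit, can be sketched outside the backend. The following is illustrative Python only: `MemoryContext`, `alloc`, and `commit` are toy stand-ins for the palloc machinery, not PostgreSQL's actual API.

```python
class MemoryContext:
    """Toy palloc arena: objects live until the whole context is reset."""
    def __init__(self, name):
        self.name = name
        self.live = set()
    def alloc(self):
        obj = object()
        self.live.add(obj)
        return obj
    def reset(self):
        self.live.clear()
    def valid(self, obj):
        return obj in self.live

global_ctx = MemoryContext("GlobalMemoryContext")  # survives commit
query_ctx = MemoryContext("per-query context")     # wiped at commit

def commit():
    query_ctx.reset()   # everything palloc'd per-query is released

# Buggy path: lo_read/lo_write allocated the IndexScanDesc without
# switching contexts, so it lived in per-query memory...
scan = query_ctx.alloc()
commit()
assert not query_ctx.valid(scan)   # ...and is stale after commit;
                                   # _lo_commit touching it crashes

# Fixed path (solution 1): switch to GlobalMemoryContext first.
scan = global_ctx.alloc()
commit()
assert global_ctx.valid(scan)      # descriptor survives the transaction
```

Solution (1), the change Tatsuo committed, corresponds to the second half of the sketch: lo_read/lo_write run under the same context as the other LO calls, so _lo_commit later frees the scan descriptor in the context that actually owns it.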
[ { "msg_contents": "The following web site contains a set mathematical functions that looks\ninteresting:\n\nhttp://www.boic.com/numintro.htm <http://www.boic.com/numintro.htm> \n\nMaybe you should consider using this to implement the numeric and decimal\ndata types rather than writing or re-writing from scratch. The license is\nonly $99. If it works it could save Jan a lot of time. I would be more than\nwilling to purchase a license for PostgreSQL if this will help. Well, I\nthought it was interesting.\n\nThanks, Michael\n\n\t-----Original Message-----\n\tFrom:\[email protected] [SMTP:[email protected]]\n\tSent:\tThursday, April 29, 1999 10:12 AM\n\tTo:\[email protected]\n\tCc:\[email protected]\n\tSubject:\tRe: [HACKERS] v6.5 Release Date ...\n\n\tMarc G. Fournier wrote:\n\n\t>\n\t> June 1st. Plan'd release date for v6.5.\n\n\t The latest discussions on rules discovered some problems with\n\t the rewrite system.\n\n\t It seems that varlevelsup in rule actions subselects get lost\n\t somehow. And while at it I saw some out of range varno's\n\t generated for rules group by entries.\n\n\t I really hope to find the bugs until June 1st, but if not\n\t this is IMHO a show stopper - no?\n\n\n\tJan\n\n\t--\n\n\t\n#======================================================================#\n\t# It's easier to get forgiveness for being wrong than for being\nright. #\n\t# Let's break this rule - forgive me.\n#\n\t#======================================== [email protected] (Jan\nWieck) #\n\n\n\t\n", "msg_date": "Thu, 29 Apr 1999 11:50:00 -0500", "msg_from": "Michael J Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Numeric data types (formerly \"v6.5 Release Date ...\")" } ]
[ { "msg_contents": "I posted the following message in [GENERAL] and still haven't been able to\nresolve the problem.\n\nThe file permissions are set to postgres and postgres is authorized to\naccess /usr/local/pgsql/lib.\n\nWhat I am trying to do is insert a row into a DB but, I want to be able to\ncheck each row for a specified column prior to insert to see if there is a\nduplicate column.\nIf there is I want to be able to abort or simply skip that insert. If\nthere are any easier ways to do this please let me know.\n\nI'm running Postgres 6.4.2 on Linux.\n\nAny ideas would be greatly appreciated.\n\nPlease email any resonses to the above email address as I don't belong to\n[HACKERS].\n\nThanks in advance.\n\nAndy\n\n\n\n> I got the following error after trying to copy the example at:\n> http://www.postgresql.org/mhonarc/pgsql-sql/1999-04/msg00076.html\n> \n> ----------------------------------------------------------\n> test=> select a(pin,first_name) from ibs_subscriber ;\n> ERROR: stat failed on file ${exec_prefix}/lib/plpgsql.so\n> ----------------------------------------------------------\n> \n> The file plpgsql.so does exist in /usr/local/pgsql/lib.\n> Any ideas?\n\n\n", "msg_date": "Thu, 29 Apr 1999 13:27:31 -0500 (CDT)", "msg_from": "Andy Lewis <[email protected]>", "msg_from_op": true, "msg_subject": "PLpgSQL Stat Problem" }, { "msg_contents": "Andy Lewis <[email protected]> writes:\n>> I got the following error after trying to copy the example at:\n>> http://www.postgresql.org/mhonarc/pgsql-sql/1999-04/msg00076.html\n>>\n>> test=> select a(pin,first_name) from ibs_subscriber ;\n>> ERROR: stat failed on file ${exec_prefix}/lib/plpgsql.so\n\nLooks like the system is trying to use the literal filename\n\"${exec_prefix}/lib/plpgsql.so\", which of course is not right.\n\nI'm guessing that when you copied the CREATE FUNCTION out of the\nexample, you just cut and pasted without doing the shell variable\nexpansion that was supposed to happen to replace 
${exec_prefix}\nwith /usr/local/pgsql ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 Apr 1999 18:39:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PLpgSQL Stat Problem " }, { "msg_contents": "Solved! Thanks.\n\nI know this is dumb but, I ran mklang.sql on a test DB and in that file it\nreferences: ${exec_prefix}\n\nWithout thinking to remove the DB and start from scratch.\n\nAnyway thanks to all.\n\nAndy\n\nOn Thu, 29 Apr 1999, Tom Lane wrote:\n\n> Andy Lewis <[email protected]> writes:\n> >> I got the following error after trying to copy the example at:\n> >> http://www.postgresql.org/mhonarc/pgsql-sql/1999-04/msg00076.html\n> >>\n> >> test=> select a(pin,first_name) from ibs_subscriber ;\n> >> ERROR: stat failed on file ${exec_prefix}/lib/plpgsql.so\n> \n> Looks like the system is trying to use the literal filename\n> \"${exec_prefix}/lib/plpgsql.so\", which of course is not right.\n> \n> I'm guessing that when you copied the CREATE FUNCTION out of the\n> example, you just cut and pasted without doing the shell variable\n> expansion that was supposed to happen to replace ${exec_prefix}\n> with /usr/local/pgsql ...\n> \n> \t\t\tregards, tom lane\n> \n\n", "msg_date": "Thu, 29 Apr 1999 18:23:35 -0500 (CDT)", "msg_from": "Andy Lewis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] PLpgSQL Stat Problem " } ]
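The fix in this thread amounts to letting the shell expand the variable before the SQL ever reaches the backend. A hedged sketch of that substitution follows; the CREATE FUNCTION text is paraphrased (the real mklang.sql wording may differ slightly), and the install path is the one from the thread.

```python
from string import Template

# The statement roughly as it sits in mklang.sql before shell expansion:
create_fn = ("CREATE FUNCTION plpgsql_call_handler () RETURNS opaque "
             "AS '${exec_prefix}/lib/plpgsql.so' LANGUAGE 'C';")

# What the shell was supposed to do before the SQL was fed to psql:
expanded = Template(create_fn).substitute(exec_prefix="/usr/local/pgsql")

assert "'/usr/local/pgsql/lib/plpgsql.so'" in expanded
assert "${exec_prefix}" not in expanded
```

Feeding the unexpanded text straight to psql stores the literal `${exec_prefix}` path in pg_proc, which is exactly the filename the "stat failed" error reported.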
[ { "msg_contents": "The following worked with version 6.5 before 4/5/99 but now fails (I pulled\nnew 6.5 source last night):\n\ninsert into si_tmpVerifyAccountBalances select invoiceid+3, memberid, 1,\nTotShippingHandling from InvoiceLineDetails where TotShippingHandling <> 0\nand InvoiceLinesID <= 100 group by invoiceid+3, memberid,\nTotShippingHandling;\nERROR: INSERT has more expressions than target columns\n\nThe following works even though the select list does not match the table\nbeing inserted into (I eliminated a column, the literal 1):\n\ninsert into si_tmpVerifyAccountBalances select invoiceid+3, memberid,\nTotShippingHandling from InvoiceLineDetails where TotShippingHandling <> 0\nand InvoiceLinesID <= 100 group by invoiceid+3, memberid,\nTotShippingHandling;\nINSERT 0 0\n\nThe above statement should have inserted a few thousand records.\n\nThe following works (this has an aggregation function while the other insert\nstatements don't):\n\ninsert into si_tmpVerifyAccountBalances select 2, memberid, categoriesid,\n1::numeric * sum(InvAmount) from InvoiceLineDetails group by memberid,\ncategoriesid;\n\nHere is a description of the table:\n\n\\\\d si_tmpVerifyAccountBalances\nTable = si_tmpverifyaccountbalances\n+--------------+---------------+--------+\n| Field        | Type          | Length |\n+--------------+---------------+--------+\n| type         | int4 not null |      4 |\n| memberid     | int4 not null |      4 |\n| categoriesid | int4 not null |      4 |\n| amount       | numeric       |    var |\n+--------------+---------------+--------+\nIndex: si_tmpverifyaccountbalances_pke\n\nInvoiceLineDetails is a view but I have also seen similar problems when\nusing a physical table. Is a hidden column finding its way into the select\nlist? If I use a group by, do I need to have an aggregation function? Has\nanyone worked on portions of the code recently (last 2-3 weeks) that could be\ncausing this condition? Any help would be greatly appreciated.\n\nThanks, Michael\n\n\n", "msg_date": "Thu, 29 Apr 1999 15:46:11 -0500", "msg_from": "Michael J Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Problem with insert into select from using aggregation" } ]
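The "hidden column" guess in the message above can be illustrated with a toy count. This is only a model of that hypothesis (a junk copy of the `invoiceid+3` grouping expression being appended to the target list and then counted by the INSERT length check); it is not the actual parser or planner code, and the real cause would have to be confirmed in the source.

```python
# Toy model of the hypothesis only -- not PostgreSQL's actual code.
select_exprs = ["invoiceid+3", "memberid", "1", "TotShippingHandling"]

# Guess: grouping on the expression "invoiceid+3" appends a junk
# (resjunk) copy to the target list instead of reusing the visible entry.
resjunk = ["invoiceid+3"]

target_columns = ["type", "memberid", "categoriesid", "amount"]

naive_count = len(select_exprs) + len(resjunk)   # junk entries counted
assert naive_count > len(target_columns)         # -> spurious ERROR

correct_count = len(select_exprs)                # junk filtered out
assert correct_count == len(target_columns)      # the INSERT is legal
```

Under this model the fix is simply to skip junk entries when comparing the select list against the target columns.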
[ { "msg_contents": "How does this work when using Access97 or some other ODBC client instead of\npsql? Psql worked great, the problem existed only when trying to access my\ndatabase with an ODBC client.\n\n\t-----Original Message-----\n\tFrom:\tTom Lane [SMTP:[email protected]]\n\tSent:\tThursday, April 29, 1999 4:34 PM\n\tTo:\tMassimo Dal Zotto\n\tCc:\[email protected]\n\tSubject:\tRe: [HACKERS] How do I get the backend server into\ngdb? \n\n\tMassimo Dal Zotto <[email protected]> writes:\n\t> The -W option is passed to the backend which sleeps 15 seconds\nbefore doing\n\t> any work. In the meantime you have the time to do a ps, find the\nbackend pid\n\t> and attach gdb to the process.\n\t> Obviously you can't do that in a production environment because it\nadda a\n\t> fixed delay for each connection which will make your users very\nangry.\n\n\tSince it's a -o option, I see no need to force it to be used on\nevery\n\tconnection. Instead start psql with environment variable\n\t\tPGOPTIONS=\"-W 15\"\n\tor whatever you need for the particular session. The PGOPTIONS are\nsent\n\tin the connection request and then catenated to whatever the\npostmaster\n\tmight have in its -o switch.\n\n\t(BTW, it might be a good idea to go through the backend command-line\n\tswitches carefully and see if any of them could be security holes.\n\tI'm feeling paranoid because of Matthias Schmitt's unresolved\nreport...)\n\n\t\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 Apr 1999 18:06:49 -0500", "msg_from": "Michael J Davis <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] How do I get the backend server into gdb? " } ]
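Tom's quoted reply describes the mechanism for libpq clients such as psql: PGOPTIONS travels in the connection request, and the postmaster catenates it onto whatever its own -o switch supplies. Whether the ODBC driver forwards PGOPTIONS the same way is not settled in this thread; the sketch below only models the documented concatenation, with example option values.

```python
import os

# The postmaster's own per-backend options (its -o switch); example value:
postmaster_opts = "-F"

# A libpq client session can add its own, sent in the connection request:
os.environ["PGOPTIONS"] = "-W 15"

# Per the quoted reply, the postmaster catenates the two for the backend:
backend_opts = " ".join(
    s for s in (postmaster_opts, os.environ.get("PGOPTIONS", "")) if s)
assert backend_opts == "-F -W 15"
```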
[ { "msg_contents": "It seems that postgres does not accept -T option. Checking the source,\nI found that T is not given to getopt() while there is a code to parse\nthe -T option. Is there any reason for this?\n---\nTatsuo Ishii\n\n", "msg_date": "Fri, 30 Apr 1999 10:03:35 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "-T option ignored?" }, { "msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> It seems that postgres does not accept -T option. Checking the source,\n> I found that T is not given to getopt() while there is a code to parse\n> the -T option. Is there any reason for this?\n\nLooks like a garden-variety oversight from here. While you're fixing\nit, I suggest making the order of the getopt() list match the order of\nthe entries in the big switch statement; the postmaster does it that\nway, and that made it a whole lot easier to see that things matched...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Apr 1999 09:56:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] -T option ignored? " } ]
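The oversight is easy to reproduce with any getopt implementation: an option letter missing from the option string is rejected before the switch statement that would handle it ever runs. Python's getopt port (shown here purely as an illustration, not the backend's C code) behaves the same way.

```python
import getopt

argv = ["-T", "pg_options"]

# Option string as in the buggy postgres.c: parsing code for -T exists,
# but 'T:' is absent from the getopt() string, so -T never gets that far.
try:
    getopt.getopt(argv, "B:d:s")
    rejected = False
except getopt.GetoptError:
    rejected = True
assert rejected

# With 'T:' added (the fix), the option and its argument come through.
opts, rest = getopt.getopt(argv, "B:d:sT:")
assert opts == [("-T", "pg_options")]
assert rest == []
```

With `T:` present the option and its argument reach the caller, which is all the one-character fix to postgres.c does.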
[ { "msg_contents": "gcc -I../../../include -I../../../backend -O2 -m486 -pipe -Wall -Wmissing-prototypes -ggdb3 -I../.. -c lock.c -o lock.o\nlock.c: In function `LockResolveConflicts':\nlock.c:832: `lockQueue' undeclared (first use this function)\nlock.c:832: (Each undeclared identifier is reported only once\nlock.c:832: for each function it appears in.)\ngmake: *** [lock.o] Error 1\n\nVadim\n", "msg_date": "Fri, 30 Apr 1999 13:43:07 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": true, "msg_subject": "can't compile" }, { "msg_contents": "> gcc -I../../../include -I../../../backend -O2 -m486 -pipe -Wall -Wmissing-prototypes -ggdb3 -I../.. -c lock.c -o lock.o\n> lock.c: In function `LockResolveConflicts':\n> lock.c:832: `lockQueue' undeclared (first use this function)\n> lock.c:832: (Each undeclared identifier is reported only once\n> lock.c:832: for each function it appears in.)\n> gmake: *** [lock.o] Error 1\n> \n> Vadim\n> \n> \n\nFixed. Sorry.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 30 Apr 1999 12:22:19 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] can't compile" } ]
[ { "msg_contents": "\nI like the logo. Every open source project needs a mascot.\n\nIf the elephant is yet nameless, might I suggest:\n\n\n\t\t Pasquale the Pachyderm?\n\n\nHe might have been descended from one that survived Hannibal's drive over\nthe Pyrenees :-).\n\nccb\n", "msg_date": "Fri, 30 Apr 1999 08:32:57 -0400", "msg_from": "\"Charles C. Bennett, Jr.\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] PostgreSQL Webpage " }, { "msg_contents": "On Fri, 30 Apr 1999, Charles C. Bennett, Jr. wrote:\n\n> \n> I like the logo. Every open source project needs a mascot.\n> \n> If the elephant is yet nameless, might I suggest:\n> \n> \n> \t\t Pasquale the Pachyderm?\n\nI like it...others? Much better than 'Morris the Mammoth' :)\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 30 Apr 1999 13:12:47 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL Webpage " } ]
[ { "msg_contents": "\nMorning all...\n\n\tI'm trying to see if I can improve mail propogation for the\nmailing lists, and one way of doing it is to setup 'relay'\nsites...basically, using a software package called 'tlb', I can set up the\nmailing lists so that all mail destined for a certain 'regex' expression\n(ie. *.de) can be routed through a seperate host.\n\n\tThe idea is that, with, for instance, a 'relay site' in Germany,\nif we route all .de mail through it, there is one message that has to go\nacross the pond, and then it gets fan'd out over there...\n\n\tRequirements are simple. Enough disk space in your mail spool to\nbe able to handle this...currently, for all the email that goes through\nthe server, we've currently got ~3.7Meg spool'd, so not much space...and\npermission on your server to relay through you (for those using anti-spam\nfilters, such as we are)...\n\n\tIf you are willing to do this, please contact me directly with the\nhost name to transfer through...\n\n\tThe other thing that we have going is a mail<->news\ngateway...there are a few sites that are pulling news from us so that they\nhave it local, and a couple are even running \"public\" sites for that\nparticular hierarchy...\n\n\tIf anyone is interested in cross-feeding *just* that hierarchy,\nplease also contact me...\n\n\tVince, can you add points about both of these to the web site?\nI'll send you a list of those news servers that we are talking to soon, so\nthat they can be listed/acknowledged the same as the WWW/FTP mirrors...\n\n\tFinally, WWW/FTP mirror sites...recently, we implemented\n'controls' on ftp access to the files under ftp.postgresql.org, such that\nit is not possible to use 'mirror' to mirror them. All mirror sites now\nmust use rsync (http://rsync.samba.org) to do the mirroring, which allows\nus tighter control over what goes out. (ie. 
OS specific db files) We're\ngoing to be starting to clean up the mirror lists based on the rsync logs\nthat are maintained, to ensure that there isn't as much lag between sites\nas there currently is (in some cases)...\n\n\tIf you are mirroring anything, please make sure that you are\nregistered on the [email protected] mailing list...after this\nemail, I won't be sending out another one similar to anything but that\nlist...\n\nThanks...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 30 Apr 1999 12:00:03 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Mailing list 'relay' sites ... News servers ... WWW/FTP Mirrors" } ]
[ { "msg_contents": "Hi,\n\nI have some patches for the current snapshot (1999-04-28):\n\n\n1)\tan old patch I had already posted and which has never applied.\n\n*** src/backend/tcop/postgres.c.orig\tTue Mar 23 09:03:08 1999\n--- src/backend/tcop/postgres.c\tThu Apr 1 14:42:46 1999\n***************\n*** 925,930 ****\n--- 925,931 ----\n \tfprintf(stderr, \"\\t-e \\t\\tturn on European date format\\n\");\n \tfprintf(stderr, \"\\t-o file\\t\\tsend stdout and stderr to given filename \\n\");\n \tfprintf(stderr, \"\\t-s \\t\\tshow stats after each query\\n\");\n+ \tfprintf(stderr, \"\\t-T options\\tspecify pg_options\\n\");\n \tfprintf(stderr, \"\\t-v version\\tset protocol version being used by frontend\\n\");\n \tfprintf(stderr, \"\\t-W \\t\\twait N seconds to allow attach from a debugger\\n\");\n }\n***************\n*** 1018,1024 ****\n \toptind = 1;\t\t\t\t\t/* reset after postmaster usage */\n \n \twhile ((flag = getopt(argc, argv,\n! \t\t\t\t\t\t \"A:B:CD:d:Eef:iK:Lm:MNOo:P:pQS:st:v:x:FW:\"))\n \t\t != EOF)\n \t\tswitch (flag)\n \t\t{\n--- 1019,1025 ----\n \toptind = 1;\t\t\t\t\t/* reset after postmaster usage */\n \n \twhile ((flag = getopt(argc, argv,\n! \t\t\t\t\t\t \"A:B:CD:d:EeFf:iK:Lm:MNOo:P:pQS:sT:t:v:x:W:\"))\n \t\t != EOF)\n \t\tswitch (flag)\n \t\t{\n\n\n2)\ta compilation error in dt.c, probably a wrong cut-and-paste.\n\n*** src/backend/utils/adt/dt.c.orig\tMon Apr 26 09:00:43 1999\n--- src/backend/utils/adt/dt.c\tThu Apr 29 14:04:16 1999\n***************\n*** 3069,3079 ****\n \t\t\t\t*tzp = -(tm->tm_gmtoff);\t/* tm_gmtoff is Sun/DEC-ism */\n #elif defined(HAVE_INT_TIMEZONE)\n #ifdef __CYGWIN__\n! *tzp = ((tm->tm_isdst > 0) ? (_timezone - 3600) : _timez\n! one);\n #else\n! *tzp = ((tm->tm_isdst > 0) ? (timezone - 3600) : timezon\n! 
e);\n #endif\n #else\n #error USE_POSIX_TIME is defined but neither HAVE_TM_ZONE or HAVE_INT_TIMEZONE are defined\n--- 3069,3077 ----\n \t\t\t\t*tzp = -(tm->tm_gmtoff);\t/* tm_gmtoff is Sun/DEC-ism */\n #elif defined(HAVE_INT_TIMEZONE)\n #ifdef __CYGWIN__\n! *tzp = ((tm->tm_isdst > 0) ? (_timezone - 3600) : _timezone);\n #else\n! *tzp = ((tm->tm_isdst > 0) ? (timezone - 3600) : timezone);\n #endif\n #else\n #error USE_POSIX_TIME is defined but neither HAVE_TM_ZONE or HAVE_INT_TIMEZONE are defined\n\n\n\n3)\tlibpgtcl has problems with notify. It seems a bug in the be/fe\n\tprotocol. Libpgtcl prints the following message and the program\n\thangs forever:\n\n\t\tunexpected character Z following 'I'\n\n\ttcpdump shows that the backend sends a packet with 'IZ' instead of\n\t'I\\0' and then 'Z' as expected:\n\n\t11:04:02.368669 localhost.5432 > localhost.1169: P 95281:95283(2) ack 985 win 32736 (DF)\n\t4500 002a 1166 4000 4006 2b66 7f00 0001 *E..*.f@.@.+f....*\n\t7f00 0001 1538 0491 d11a c949 0d6b 538b *.....8.....I.kS.*\n\t5018 7fe0 d369 0000 495a *P....i..IZ *\n\n\tMy patch fixes the problem but maybe there is a better way.\n\n*** src/backend/tcop/dest.c.orig\tMon Apr 26 09:00:42 1999\n--- src/backend/tcop/dest.c\tFri Apr 30 11:29:39 1999\n***************\n*** 336,342 ****\n \t\t\t *\t\ttell the fe that we saw an empty query string\n \t\t\t * ----------------\n \t\t\t */\n! \t\t\tpq_putbytes(\"I\", 1);\n \t\t\tbreak;\n \n \t\tcase Local:\n--- 336,342 ----\n \t\t\t *\t\ttell the fe that we saw an empty query string\n \t\t\t * ----------------\n \t\t\t */\n! 
\t\t\tpq_putbytes(\"I\\0\", 2);\n \t\t\tbreak;\n \n \t\tcase Local:\n\n\n-- \nMassimo Dal Zotto\n\n+----------------------------------------------------------------------+\n| Massimo Dal Zotto email: [email protected] |\n| Via Marconi, 141 phone: ++39-0461534251 |\n| 38057 Pergine Valsugana (TN) www: http://www.cs.unitn.it/~dz/ |\n| Italy pgp: finger [email protected] |\n+----------------------------------------------------------------------+\n", "msg_date": "Fri, 30 Apr 1999 17:52:28 +0200 (MET DST)", "msg_from": "Massimo Dal Zotto <[email protected]>", "msg_from_op": true, "msg_subject": "patches for 6.5.0" }, { "msg_contents": "Massimo Dal Zotto <[email protected]> writes:\n> I have some patches for the current snapshot (1999-04-28):\n\n(1) and (2) applied. (3) was fixed several days ago.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 01 May 1999 13:19:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] patches for 6.5.0 " } ]
[ { "msg_contents": "Last night I was looking into optimizer misbehavior on the sample query\n\nexplain select * from pg_class, pg_description\nwhere pg_class.oid = pg_description.objoid;\n\nAs of yesterday the system was generating\n\nHash Join (cost=86.59 rows=1007 width=101)\n -> Seq Scan on pg_description (cost=41.23 rows=1007 width=16)\n -> Hash (cost=0.00 rows=0 width=0)\n -> Index Scan using pg_class_oid_index on pg_class (cost=5.57 rows=138 width=85)\n\nwhich was pretty stupid; why use an index scan to load the hashtable?\nThe reason was that the optimizer was actually estimating the index scan\nto be cheaper than a sequential scan (cost of sequential scan was\nfigured at 6.55). When I poked into this, I found that costsize.c\nwas being fed a size of zero for pg_class_oid_index, and was generating\na bogus cost for the index scan because of it.\n\nI changed costsize.c to ensure that cost_index with a selectivity of 1\nwill always return a larger value than cost_seqscan does with the same\nrelation-size stats, regardless of what it's told about the index size.\nThis fixes the immediate problem, but it's still bad that costsize is\ngetting a bogus index size value; the cost estimates won't be very\naccurate. And considering that there are reasonable stats for \npg_class_oid_index in pg_class, you'd sort of expect those numbers to\nget passed to the optimizer.\n\nAs near as I can tell, the bogus data is the fault of the relation\ncache. Info about pg_class_oid_index and a couple of other indexes on\nsystem relations is preloaded into the relcache and locked there on\nstartup --- and it is *not* coming from pg_class, but from an\ninitialization file that evidently was made when these system tables\nwere empty.\n\nBottom line is that optimization estimates that involve these critical\nsystem indexes will be wrong. 
That's not a show-stopper, but it seems\nto me that it must be costing us performance somewhere along the line.\nI'd like to see if it can be fixed.\n\nDoes anyone understand:\n\n(a) why does the relcache need an initialization file for the system\nindex cache entries in the first place? If I'm reading the code\ncorrectly, it is able to build the initialization file from the info\nin pg_class, so one would think it'd be better to just do that during\nevery startup and forget the initialization file.\n\n(b) if we can't just get rid of the init file, how about dropping and\nrebuilding it at the end of the initdb process (after template1 has\nbeen vacuumed)? Then at least it'd show a size of a few hundred for\npg_class, instead of zero.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Apr 1999 11:54:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Optimizer fed bad data about some system-table indexes" }, { "msg_contents": "> As near as I can tell, the bogus data is the fault of the relation\n> cache. Info about pg_class_oid_index and a couple of other indexes on\n> system relations is preloaded into the relcache and locked there on\n> startup --- and it is *not* coming from pg_class, but from an\n> initialization file that evidently was made when these system tables\n> were empty.\n> \n> Bottom line is that optimization estimates that involve these critical\n> system indexes will be wrong. That's not a show-stopper, but it seems\n> to me that it must be costing us performance somewhere along the line.\n> I'd like to see if it can be fixed.\n> \n> Does anyone understand:\n> \n> (a) why does the relcache need an initialization file for the system\n> index cache entries in the first place? 
If I'm reading the code\n> correctly, it is able to build the initialization file from the info\n> in pg_class, so one would think it'd be better to just do that during\n> every startup and forget the initialization file.\n\nThe problem is circular too. Without those entries in the cache, the\nsystem can't do the lookups of the real tables.\n\n> (b) if we can't just get rid of the init file, how about dropping and\n> rebuilding it at the end of the initdb process (after template1 has\n> been vacuumed)? Then at least it'd show a size of a few hundred for\n> pg_class, instead of zero.\n\nYou can't drop them or you could never recreate them. Why does the\nvacuum analyze at the end of initdb not fix this? Is this because the\ncache bypasses pg_class and returns the hardcoded rows?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 30 Apr 1999 13:49:57 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Optimizer fed bad data about some system-table indexes" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> (a) why does the relcache need an initialization file for the system\n>> index cache entries in the first place?\n\n> The problem is circular too. Without those entries in the cache, the\n> system can't do the lookups of the real tables.\n\nBut the init file is built on-the-fly the first time it is needed;\nso it seems it can't be as circular as all that. If we *really* needed\nhardcoded data then it would have to be done more like the way the\nstandard entries in pg_class and other sys tables are made. I think.\n\n>> (b) if we can't just get rid of the init file, how about dropping and\n>> rebuilding it at the end of the initdb process (after template1 has\n>> been vacuumed)? 
Then at least it'd show a size of a few hundred for\n>> pg_class, instead of zero.\n\n> You can't drop them or you could never recreate them. Why does the\n> vacuum analyze at the end of initdb not fix this? Is this because the\n> cache bypasses pg_class and returns the hardcoded rows?\n\nThe vacuum analyze *does* fix the data that's in the pg_class entry\nfor the index. Trouble is that the relcache entry for the index is\nnever read from pg_class; it's loaded from this never-updated init file.\n\nOne possible answer is to rewrite the init file as the final step of\na vacuum, using the just-updated pg_class data. But I'm still not\nconvinced that we really need the init file at all...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Apr 1999 15:06:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Optimizer fed bad data about some system-table indexes " }, { "msg_contents": "> The vacuum analyze *does* fix the data that's in the pg_class entry\n> for the index. Trouble is that the relcache entry for the index is\n> never read from pg_class; it's loaded from this never-updated init file.\n> \n> One possible answer is to rewrite the init file as the final step of\n> a vacuum, using the just-updated pg_class data. But I'm still not\n> convinced that we really need the init file at all...\n\nCan you point me to that file?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 30 Apr 1999 15:26:59 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Optimizer fed bad data about some system-table indexes" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Can you point me to that file?\n\nThe code that does this stuff is init_irels() and write_irels() at the\nend of backend/utils/cache/relcache.c. The init file itself is\n\"pg_internal.init\" in the database directory (looks like there is one\nfor each database...).\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Apr 1999 15:52:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Optimizer fed bad data about some system-table indexes " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Can you point me to that file?\n> \n> The code that does this stuff is init_irels() and write_irels() at the\n> end of backend/utils/cache/relcache.c. The init file itself is\n> \"pg_internal.init\" in the database directory (looks like there is one\n> for each database...).\n> \n> \t\t\tregards, tom lane\n> \n\nIf you delete the file at the end of initdb, is it recreated with the\nproper values?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 30 Apr 1999 16:11:11 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Optimizer fed bad data about some system-table indexes" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> If you delete the file at the end of initdb, is it recreated with the\n> proper values?\n\nOK, let's try it ...\n\nSure enough, if I delete the file and then start a new backend,\nit's rebuilt. 
Not only that, it's rebuilt with the *correct* index-\nsize values read from pg_class! And cost_index then gets that data\npassed to it.\n\nSo this code actually is able to go out and read the database, it just\ndoesn't want to ;-)\n\nI'd say this whole mechanism is unnecessary; we should just build\nthe data on-the-fly the way it's done in write_irels(), and eliminate\nall the file reading and writing code in init_irels and write_irels.\nThe only thing it could possibly be doing for us is saving some backend\nstartup time, but I'm not able to measure any difference when I delete\nthe init file.\n\nI'll work on that tomorrow, unless I hear squawks of outrage from\nsomeone who remembers what this code was for.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Apr 1999 16:46:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Optimizer fed bad data about some system-table indexes " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > If you delete the file at the end of initdb, is it recreated with the\n> > proper values?\n> \n> OK, let's try it ...\n> \n> Sure enough, if I delete the file and then start a new backend,\n> it's rebuilt. Not only that, it's rebuilt with the *correct* index-\n> size values read from pg_class! And cost_index then gets that data\n> passed to it.\n> \n> So this code actually is able to go out and read the database, it just\n> doesn't want to ;-)\n> \n> I'd say this whole mechanism is unnecessary; we should just build\n> the data on-the-fly the way it's done in write_irels(), and eliminate\n> all the file reading and writing code in init_irels and write_irels.\n> The only thing it could possibly be doing for us is saving some backend\n> startup time, but I'm not able to measure any difference when I delete\n> the init file.\n> \n> I'll work on that tomorrow, unless I hear squawks of outrage from\n> someone who remembers what this code was for.\n\nHmm. 
If you can get it to work without the file, great, or you could\njust delete the file when vacuum is performed, so the next backend\nrecreates the file. That would work too.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 30 Apr 1999 16:50:58 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Optimizer fed bad data about some system-table indexes" }, { "msg_contents": "> > I'd say this whole mechanism is unnecessary; we should just build\n> > the data on-the-fly the way it's done in write_irels(), and eliminate\n> > all the file reading and writing code in init_irels and write_irels.\n> > The only thing it could possibly be doing for us is saving some backend\n> > startup time, but I'm not able to measure any difference when I delete\n> > the init file.\n> > \n> > I'll work on that tomorrow, unless I hear squawks of outrage from\n> > someone who remembers what this code was for.\n> \n> Hmm. If you can get it to work without the file, great, or you could\n> just delete the file when vacuum is performed, so the next backend\n> recreates the file. That would work too.\n\nOne other cache thing I want to do is enable index lookups on cache\nfailure for tables like pg_operator, that didn't use index lookups in\nthe cache because you didn't have multi-key system tables.\n\nIf you are looking for a single row, does anyone know when it is faster\nto do a sequential scan, and when it is faster to do an index lookup\ninto the table.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 30 Apr 1999 16:56:59 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Optimizer fed bad data about some system-table indexes" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Hmm. If you can get it to work without the file, great, or you could\n> just delete the file when vacuum is performed, so the next backend\n> recreates the file. That would work too.\n\nThat's a good idea. I made a test database with a couple thousand\ntables in it, and found that when pg_class gets that big it does take\na measurable amount of time to rebuild the index info if the relcache\ninit file is not there. (Looked like about a third of a second on my\nmachine.) Since backend startup time is a hotbutton item for some\nfolks, I'm not going to take out the init file code. I'll just make\nVACUUM remove the file, and then the first backend start after a VACUUM\nwill rebuild the file with up-to-date statistics for the system indexes.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 01 May 1999 15:06:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Optimizer fed bad data about some system-table indexes " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Hmm. If you can get it to work without the file, great, or you could\n> > just delete the file when vacuum is performed, so the next backend\n> > recreates the file. That would work too.\n> \n> That's a good idea. I made a test database with a couple thousand\n> tables in it, and found that when pg_class gets that big it does take\n> a measurable amount of time to rebuild the index info if the relcache\n> init file is not there. (Looked like about a third of a second on my\n> machine.) Since backend startup time is a hotbutton item for some\n> folks, I'm not going to take out the init file code. 
I'll just make\n> VACUUM remove the file, and then the first backend start after a VACUUM\n> will rebuild the file with up-to-date statistics for the system indexes.\n\nThat sounds like a big win. 1/3 second is large. If they vacuum a\nsingle table, and it is not a system table, can the removal be skipped?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 1 May 1999 15:17:34 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Optimizer fed bad data about some system-table indexes" } ]
[ { "msg_contents": "The following select fails:\n\n> select invoiceid + 3 as type, memberid, 1, max(TotShippingHandling) \n> from InvoiceLineDetails \n> where TotShippingHandling <> 0 \n> group by type, memberid limit 10;\nERROR: replace_agg_clause: variable not in target list\n\nThe following select works (the + 3 has been eliminated):\n\n> select invoiceid as type, memberid, 1, max(TotShippingHandling) \n> from InvoiceLineDetails \n> where TotShippingHandling <> 0 \n> group by type, memberid limit 10;\n type|memberid|?column?| max\n-----+--------+--------+-----\n15499| 1626| 1| 6.00\n15524| 138| 1| 3.00\n15647| 1083| 1|20.00\n15653| 1230| 1| 4.00\n15659| 1600| 1| 3.00\n15671| 1276| 1| 3.50\n15672| 1494| 1| 3.00\n15673| 1653| 1| 4.50\n15674| 1624| 1| 6.00\n15675| 1406| 1| 7.00\n(10 rows)\n\nHere is a description of the view InvoiceLineDetails:\n\n> \\d InvoiceLineDetails\nView = invoicelinedetails\nQuery = Not a view\n+----------------------------------+----------------------------------+-----\n--+\n| Field | Type |\nLength|\n+----------------------------------+----------------------------------+-----\n--+\n| invoicelinesid | int4 |\n4 |\n| invoiceid | int4 |\n4 |\n| dateprinted | datetime |\n8 |\n| ordersid | int4 |\n4 |\n| ordertypeid | int4 |\n4 |\n| totshippinghandling | numeric |\nvar |\n| shippeddate | datetime |\n8 |\n| memberid | int4 |\n4 |\n| gift | numeric |\nvar |\n| shippinghandling | numeric |\nvar |\n| unitcost | numeric |\nvar |\n| unitprice | numeric |\nvar |\n| quantity | int4 |\n4 |\n| invamount | numeric |\nvar |\n| inventoryid | int4 |\n4 |\n| inventoryname | varchar() |\n0 |\n| inventorytypeid | int4 |\n4 |\n| inventorytypename | varchar() |\n32 |\n| categoriesid | int4 |\n4 |\n| tapenum | int4 |\n4 |\n+----------------------------------+----------------------------------+-----\n--+\n\n", "msg_date": "Fri, 30 Apr 1999 20:20:18 -0500", "msg_from": "Michael J Davis <[email protected]>", "msg_from_op": true, "msg_subject": "A select with 
aggregation is failing, still subtle problems with\n\taggregation" }, { "msg_contents": "Michael J Davis <[email protected]> writes:\n> The following select fails:\n>> select invoiceid + 3 as type, memberid, 1, max(TotShippingHandling) \n>> from InvoiceLineDetails \n>> where TotShippingHandling <> 0 \n>> group by type, memberid limit 10;\n> ERROR: replace_agg_clause: variable not in target list\n\nYeah, \"GROUP BY\" on anything but a primitive column is still pretty\nhosed. I'm going to try to work on it this weekend.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 01 May 1999 11:39:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] A select with aggregation is failing,\n\tstill subtle problems with aggregation" }, { "msg_contents": "Just one more question. If you remove the cache file so the next\nbackend creates it, could there be problems if another backend starts\nwhile the file is being created by another backend?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 1 May 1999 15:21:47 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "cache startup file" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> That sounds like a big win. 1/3 second is large. If they vacuum a\n> single table, and it is not a system table, can the removal be\n> skipped?\n\nI didn't do that; I just put an unconditional remove into vac_shutdown.\nIf you want to improve on that, be my guest ;-).\n\n> Just one more question. 
If you remove the cache file so the next\n> backend creates it, could there be problems if another backend starts\n> while the file is being created by another backend?\n\nThe code in relcache.c looks to be fairly robust --- if the file seems\nto be broken (ie, ends early) it will go off and rebuild the file.\nSo I suppose you could get an extra rebuild in that scenario.\n\nIf you wanted to be really paranoid you could have the writing code\ncreate the file under a temporary name (using the backend's PID) and\nrename it into place when done; that'd prevent any kind of worry about\nthe wrong things happening if two backends write the file at the same\ntime. But really, it shouldn't matter.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 01 May 1999 16:39:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cache startup file " }, { "msg_contents": "\nIs this done? I have added it to the list.\n\n\n\n> Michael J Davis <[email protected]> writes:\n> > The following select fails:\n> >> select invoiceid + 3 as type, memberid, 1, max(TotShippingHandling) \n> >> from InvoiceLineDetails \n> >> where TotShippingHandling <> 0 \n> >> group by type, memberid limit 10;\n> > ERROR: replace_agg_clause: variable not in target list\n> \n> Yeah, \"GROUP BY\" on anything but a primitive column is still pretty\n> hosed. I'm going to try to work on it this weekend.\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 May 1999 12:47:36 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] A select with aggregation is failing,\n\tstill subtle problems with aggregation" } ]
[ { "msg_contents": "session-1:\nvac=> begin;\nBEGIN\nvac=> lock table t in row share mode;\nLOCK TABLE\nvac=> \n\nsession-2:\nvac=> begin;\nBEGIN\nvac=> lock table t in exclusive mode;\n-- waiting\n\nsession-3:\nvac=> begin;\nBEGIN\nvac=> lock table t in row share mode;\nLOCK TABLE\nvac=> \n\n???\n\nFirst, I agree to give some transaction extra priority\nwhile locking table A but _only_ if transaction already\nhas any kind of lock on _this_ table. I object to do this \nif transaction has only locks on _other_ tables - this\nwill result in starving.\n\nSecond, session-3 didn't hold any kind of locks but\nMyProc->lockQueue wasn't empty! Why?\n\nAnd last:\n\ntry to press Ctrl-D in session-3 and you'll get\nNOTICE: Message from PostgreSQL backend:\n The Postmaster has informed me that some other backend died\nabnormally and possibly corrupted shared memory.\n\nin session-2. Core made for session-3:\n\n#5 0x8114fea in ExceptionalCondition (\n conditionName=0x814f9d8 \"!((((unsigned long)queue) > ShmemBase))\", \n exceptionP=0x816490c, detail=0x0, fileName=0x814f9cd \"shmqueue.c\", \n lineNumber=248) at assert.c:74\n#6 0x80defc3 in SHMQueueEmpty (queue=0x48) at shmqueue.c:248\n#7 0x80e2be4 in LockResolveConflicts (lockmethod=1, lock=0x30252d68, \n lockmode=6, xid=34309, xidentP=0x0) at lock.c:832\n#8 0x80e4690 in ProcLockWakeup (queue=0x30252d7c, lockmethod=1, \n lock=0x30252d68) at proc.c:683\n#9 0x80e3ade in LockReleaseAll (lockmethod=1, lockQueue=0x30252ce8)\n at lock.c:1456\n#10 0x80e438d in ProcKill (exitStatus=0, pid=24636) at proc.c:410\n#11 0x80dd92c in shmem_exit (code=0) at ipc.c:193\n#12 0x80dd861 in proc_exit (code=0) at ipc.c:139\n#13 0x80e8a8c in PostgresMain (argc=11, argv=0xefbfccfc, real_argc=6, \n real_argv=0xefbfd774) at postgres.c:1672\n\nVadim\n", "msg_date": "Sat, 01 May 1999 16:23:22 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": true, "msg_subject": "locking..." } ]
[ { "msg_contents": "\nCould there be a bug in the 'cluster' command in postgreSQL 6.4? If I\ntry to cluster a table with a name with more than 8 characters, psql\nsays:\n\n xxx=> create table x123456789 (xxx text);\n CREATE\n xxx=> create index x123 on x123456789 (xxx);\n CREATE\n xxx=> cluster x123 on x123456789;\n ERROR: Relation x1234567 Does Not Exist!\n\nIf I try this repeatedly, I get:\n\n xxx=> cluster x123 on x123456789;\n ERROR: temp_66c31 relation already exists \n\nI'm running PostgreSQL 6.4(.0) on i386 Linux (2.2.6). Please excuse if\nI overlooked something in the docs!\n\nBye, Ulf\n\n-- \n======================================================================\nUlf Mehlig <[email protected]>\n Center for Tropical Marine Ecology/ZMT, Bremen, Germany\n----------------------------------------------------------------------\n", "msg_date": "Sat, 1 May 1999 13:16:11 +0200", "msg_from": "Ulf Mehlig <[email protected]>", "msg_from_op": true, "msg_subject": "cluster truncates table name?" }, { "msg_contents": "Bruce Momjian <[email protected]>:\n> Sorry, I can't reproduce this here on 6.5beta.\n\nWhy sorry? I'm happy if it doesn't appear in 6.5 ;-) But then, could\nthere be a misconfiguration or something like that in my 6.4(.0)\ndatabase, or could it be that the problem silently disappeared on the\nway between 6.4 and 6.5? (I'm not moving to 6.5 beta because I have to\nfinish my theses, and nearly all the data is under postgreSQL ...)\n\nThank you anyway for your answer (and for postgreSQL ;-)\n\nregards,\nUlf\n\n\n----------------------------------------------------------------------\n> Could there be a bug in the 'cluster' command in postgreSQL 6.4? 
If I\n> try to cluster a table with a name with more than 8 characters, psql\n> says:\n> \n> xxx=> create table x123456789 (xxx text);\n> CREATE\n> xxx=> create index x123 on x123456789 (xxx);\n> CREATE\n> xxx=> cluster x123 on x123456789;\n> ERROR: Relation x1234567 Does Not Exist!\n\n-- \n======================================================================\nUlf Mehlig <[email protected]>\n Center for Tropical Marine Ecology/ZMT, Bremen, Germany\n----------------------------------------------------------------------\n", "msg_date": "Mon, 10 May 1999 18:38:57 +0200", "msg_from": "Ulf Mehlig <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [GENERAL] cluster truncates table name?" }, { "msg_contents": "\n\nSorry, I can't reproduce this here on 6.5beta.\n\n\n> \n> Could there be a bug in the 'cluster' command in postgreSQL 6.4? If I\n> try to cluster a table with a name with more than 8 characters, psql\n> says:\n> \n> xxx=> create table x123456789 (xxx text);\n> CREATE\n> xxx=> create index x123 on x123456789 (xxx);\n> CREATE\n> xxx=> cluster x123 on x123456789;\n> ERROR: Relation x1234567 Does Not Exist!\n> \n> If I try this repeatedly, I get:\n> \n> xxx=> cluster x123 on x123456789;\n> ERROR: temp_66c31 relation already exists \n> \n> I'm running PostgreSQL 6.4(.0) on i386 Linux (2.2.6). Please excuse if\n> I overlooked something in the docs!\n> \n> Bye, Ulf\n> \n> -- \n> ======================================================================\n> Ulf Mehlig <[email protected]>\n> Center for Tropical Marine Ecology/ZMT, Bremen, Germany\n> ----------------------------------------------------------------------\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 May 1999 12:46:27 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] cluster truncates table name?" } ]
[ { "msg_contents": "session-1:\nvac=> begin;\nBEGIN\nvac=> lock table t in row share mode;\nLOCK TABLE\n\nsession-2:\nvac=> begin;\nBEGIN\nvac=> lock table t in row share mode;\nLOCK TABLE\n\nsession-3:\nvac=> begin;\nBEGIN\nvac=> lock table t in row exclusive mode;\nLOCK TABLE\n\nsession-1:\nvac=> lock table t in share row exclusive mode;\n--waiting (conflicts with session-3 lock)\n\nsession-2:\nvac=> lock table t in share row exclusive mode;\nNOTICE: Deadlock detected -- See the lock(l) manual page for a possible cause.\nERROR: WaitOnLock: error on wakeup - Aborting this transaction\n\n???\n\nShareRowExclusive mode doesn't conflict with RowShare mode \n(though it is self-conflicting mode).\n\nComments in DeadLockCheck() say:\n /*\n * For findlock's wait queue, we are interested in\n * procs who are blocked waiting for a write-lock on\n * the table we are waiting on, and already hold a\n * lock on it. We first check to see if there is an\n * escalation deadlock, where we hold a readlock and\n * want a writelock, and someone else holds readlock\n * on the same table, and wants a writelock.\n *\n * Basically, the test is, \"Do we both hold some lock on\n * findlock, and we are both waiting in the lock\n * queue?\" bjm\n */\n\nUnfortunately, this is not the right test any more -:(.\n\nVadim\n", "msg_date": "Sat, 01 May 1999 19:57:00 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": true, "msg_subject": "DeadLockCheck..." } ]
[ { "msg_contents": "but have no time to do them today and tomorrow -:(.\n\n1. Add int waitMask to LOCK to speedup checking in LockResolveConflicts:\n if lock requested conflicts with lock requested by any waiter \n (and we haven't any lock on this object) -> sleep\n\n2. Add int holdLock (or use prio) to PROC to let other know\n what locks we hold on object (described by PROC->waitLock)\n while we're waiting for lock of PROC->token type on\n this object.\n\n I assume that holdLock & token will let us properly \n and efficiently order waiters in LOCK->waitProcs queue\n (if we don't hold any lock on object -> go after\n all waiters with holdLock > 0, etc etc etc).\n\nComments?\n\nVadim\n", "msg_date": "Sat, 01 May 1999 23:23:28 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": true, "msg_subject": "I'm planning some changes in lmgr..." }, { "msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Vadim Mikheev\n> Sent: Sunday, May 02, 1999 12:23 AM\n> To: PostgreSQL Developers List\n> Subject: [HACKERS] I'm planning some changes in lmgr...\n> \n> \n> but have no time to do them today and tomorrow -:(.\n> \n> 1. Add int waitMask to LOCK to speedup checking in LockResolveConflicts:\n> if lock requested conflicts with lock requested by any waiter \n> (and we haven't any lock on this object) -> sleep\n> \n> 2. 
Add int holdLock (or use prio) to PROC to let other know\n> what locks we hold on object (described by PROC->waitLock)\n> while we're waiting for lock of PROC->token type on\n> this object.\n> \n> I assume that holdLock & token will let us properly \n> and efficiently order waiters in LOCK->waitProcs queue\n> (if we don't hold any lock on object -> go after\n> all waiters with holdLock > 0, etc etc etc).\n> \n> Comments?\n>\n\nFirst, I agree to check conflicts for ( total - own ) holding lock of \nthe target object if transaction has already hold some lock on the \nobject and when some conflicts are detected, the transaction \nshould be queued with higher priority than transactions which hold \nno lock on the object.\n\nSecondly, if a transaction holds no lock on the object, we should \ncheck conflicts for ( holding + waiting ) lock of the object.\n\nAnd I have a question as to the priority of queueing.\nDoes the current definition of priority mean the urgency \nof lock ?\n\nIt may prevent lock escalation in some cases.\nBut is it effective to avoid deadlocks ? \nIt's difficult for me to find such a case.\n\nThanks.\n\nHiroshi Inoue\[email protected]\n\n", "msg_date": "Wed, 5 May 1999 11:17:05 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] I'm planning some changes in lmgr..." } ]
[ { "msg_contents": "Today, I tried cvs version of 6.5 and got following:\n\nc_flats=> explain select station_id from metro_stations except select metro_id from work_flats;\nERROR: copyObject: don't know how to copy 604\nc_flats=> select station_id from metro_stations except select metro_id from work_flats;\nstation_id\n----------\n 27\n 41\n 60\n 81\n 92\n 102\n 133\n 134\n 142\n 149\n 150\n(11 rows)\n\nExplain fails while select works (didn't check the result though)\n\n\tRegards,\n\n\t\tOleg\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Sat, 1 May 1999 20:31:55 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "6.5 cvs ERROR: copyObject: don't know how to copy 604" }, { "msg_contents": "Oleg Bartunov <[email protected]> writes:\n> [ EXPLAIN fails on a SELECT ... EXCEPT ... ]\n\nThis looks to be another artifact of the fact that EXPLAIN\ndoesn't invoke the rewriter.\n\nIt looks to me like the rewrite-invocation section of\npg_parse_and_plan() needs to be pulled out as a subroutine\nso that explain.c can invoke it. Comments? Where should\nthe new subroutine go? (Probably not in postgres.c...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 01 May 1999 16:26:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5 cvs ERROR: copyObject: don't know how to copy 604 " }, { "msg_contents": "\nIs this a valid item?\n\n\n\n> Oleg Bartunov <[email protected]> writes:\n> > [ EXPLAIN fails on a SELECT ... EXCEPT ... 
]\n> \n> This looks to be another artifact of the fact that EXPLAIN\n> doesn't invoke the rewriter.\n> \n> It looks to me like the rewrite-invocation section of\n> pg_parse_and_plan() needs to be pulled out as a subroutine\n> so that explain.c can invoke it. Comments? Where should\n> the new subroutine go? (Probably not in postgres.c...)\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 May 1999 12:48:15 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5 cvs ERROR: copyObject: don't know how to copy 604" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Is this a valid item?\n\nIt was; but I fixed it yesterday. At least it seems to work now.\n\n\t\t\tregards, tom lane\n\n\n>> Oleg Bartunov <[email protected]> writes:\n>>>> [ EXPLAIN fails on a SELECT ... EXCEPT ... ]\n>> \n>> This looks to be another artifact of the fact that EXPLAIN\n>> doesn't invoke the rewriter.\n>> \n>> It looks to me like the rewrite-invocation section of\n>> pg_parse_and_plan() needs to be pulled out as a subroutine\n>> so that explain.c can invoke it. Comments? Where should\n>> the new subroutine go? 
(Probably not in postgres.c...)\n", "msg_date": "Mon, 10 May 1999 13:24:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5 cvs ERROR: copyObject: don't know how to copy 604 " }, { "msg_contents": "On Mon, 10 May 1999, Bruce Momjian wrote:\n\n> Date: Mon, 10 May 1999 12:48:15 -0400 (EDT)\n> From: Bruce Momjian <[email protected]>\n> To: Tom Lane <[email protected]>\n> Cc: Oleg Bartunov <[email protected]>, [email protected]\n> Subject: Re: [HACKERS] 6.5 cvs ERROR: copyObject: don't know how to copy 604\n> \n> \n> Is this a valid item?\n\nCan't check this, because of problem to compile cvs:\n\nmake[2]: Entering directory `/home/postgres/cvs/pgsql/src/lextest'\nflex scan.l\ngcc -c lex.yy.c\nlex.yy.c: In function `input':\nlex.yy.c:1046: parse error before `)'\nlex.yy.c:243: warning: `yy_flex_realloc' declared `static' but never defined\nlex.yy.c:273: warning: `yy_fatal_error' declared `static' but never defined\nmake[2]: *** [lextest] Error 1\nmake[2]: Leaving directory `/home/postgres/cvs/pgsql/src/lextest'\n\nWill test after somebody fix cvs :-) \nmira:~/cvs/pgsql/src$ flex --version\nflex version 2.5.4\n\n\tRegards,\n\n\t\tOleg\n\n\n> \n> \n> \n> > Oleg Bartunov <[email protected]> writes:\n> > > [ EXPLAIN fails on a SELECT ... EXCEPT ... ]\n> > \n> > This looks to be another artifact of the fact that EXPLAIN\n> > doesn't invoke the rewriter.\n> > \n> > It looks to me like the rewrite-invocation section of\n> > pg_parse_and_plan() needs to be pulled out as a subroutine\n> > so that explain.c can invoke it. Comments? Where should\n> > the new subroutine go? (Probably not in postgres.c...)\n> > \n> > \t\t\tregards, tom lane\n> > \n> > \n> \n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Mon, 10 May 1999 21:56:14 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] 6.5 cvs ERROR: copyObject: don't know how to copy 604" }, { "msg_contents": "> On Mon, 10 May 1999, Bruce Momjian wrote:\n> \n> > Date: Mon, 10 May 1999 12:48:15 -0400 (EDT)\n> > From: Bruce Momjian <[email protected]>\n> > To: Tom Lane <[email protected]>\n> > Cc: Oleg Bartunov <[email protected]>, [email protected]\n> > Subject: Re: [HACKERS] 6.5 cvs ERROR: copyObject: don't know how to copy 604\n> > \n> > \n> > Is this a valid item?\n> \n> Can't check this, because of problem to compile cvs:\n> \n> make[2]: Entering directory `/home/postgres/cvs/pgsql/src/lextest'\n> flex scan.l\n> gcc -c lex.yy.c\n> lex.yy.c: In function `input':\n> lex.yy.c:1046: parse error before `)'\n> lex.yy.c:243: warning: `yy_flex_realloc' declared `static' but never defined\n> lex.yy.c:273: warning: `yy_fatal_error' declared `static' but never defined\n> make[2]: *** [lextest] Error 1\n> make[2]: Leaving directory `/home/postgres/cvs/pgsql/src/lextest'\n> \n\nNo one is playing with lextest, I hope. Perhaps it is something there?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 May 1999 14:37:59 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5 cvs ERROR: copyObject: don't know how to copy 604" }, { "msg_contents": "On Mon, 10 May 1999, Bruce Momjian wrote:\n\n> Date: Mon, 10 May 1999 12:48:15 -0400 (EDT)\n> From: Bruce Momjian <[email protected]>\n> To: Tom Lane <[email protected]>\n> Cc: Oleg Bartunov <[email protected]>, [email protected]\n> Subject: Re: [HACKERS] 6.5 cvs ERROR: copyObject: don't know how to copy 604\n> \n> \n> Is this a valid item?\n> \n\nAfter compilation of current cvs at home I test this and it works.\nBut explain shows that select will not use indices, which exists !\nselect station_id from metro_stations except select metro_id from work_flats;\n\n\tOleg\n\n> \n> \n> > Oleg Bartunov <[email protected]> writes:\n> > > [ EXPLAIN fails on a SELECT ... EXCEPT ... ]\n> > \n> > This looks to be another artifact of the fact that EXPLAIN\n> > doesn't invoke the rewriter.\n> > \n> > It looks to me like the rewrite-invocation section of\n> > pg_parse_and_plan() needs to be pulled out as a subroutine\n> > so that explain.c can invoke it. Comments? Where should\n> > the new subroutine go? (Probably not in postgres.c...)\n> > \n> > \t\t\tregards, tom lane\n> > \n> > \n> \n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Tue, 11 May 1999 19:44:58 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] 6.5 cvs ERROR: copyObject: don't know how to copy 604" }, { "msg_contents": "> On Mon, 10 May 1999, Bruce Momjian wrote:\n> \n> > Date: Mon, 10 May 1999 12:48:15 -0400 (EDT)\n> > From: Bruce Momjian <[email protected]>\n> > To: Tom Lane <[email protected]>\n> > Cc: Oleg Bartunov <[email protected]>, [email protected]\n> > Subject: Re: [HACKERS] 6.5 cvs ERROR: copyObject: don't know how to copy 604\n> > \n> > \n> > Is this a valid item?\n> > \n> \n> After compilation of current cvs at home I test this and it works.\n> But explain shows that select will not use indices, which exists !\n> select station_id from metro_stations except select metro_id from work_flats;\n\nIndexes aren't much use if you don't have a WHERE restriction, no?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 11 May 1999 12:04:32 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5 cvs ERROR: copyObject: don't know how to copy 604" }, { "msg_contents": "On Tue, 11 May 1999, Bruce Momjian wrote:\n\n> Date: Tue, 11 May 1999 12:04:32 -0400 (EDT)\n> From: Bruce Momjian <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: Tom Lane <[email protected]>, [email protected]\n> Subject: Re: [HACKERS] 6.5 cvs ERROR: copyObject: don't know how to copy 604\n> \n> > On Mon, 10 May 1999, Bruce Momjian wrote:\n> > \n> > > Date: Mon, 10 May 1999 12:48:15 -0400 (EDT)\n> > > From: Bruce Momjian <[email protected]>\n> > > To: Tom Lane <[email protected]>\n> > > Cc: Oleg Bartunov <[email protected]>, [email protected]\n> > > Subject: Re: [HACKERS] 6.5 cvs ERROR: copyObject: don't know how to copy 604\n> > > \n> > > \n> > > Is this a valid item?\n> > > \n> > \n> > After compilation of current cvs at home I test this and it works.\n> > But explain shows that select will not use indices, which exists !\n> > select station_id from metro_stations except select metro_id from work_flats;\n> \n> Indexes aren't much use if you don't have a WHERE restriction, no?\n> \nOh, yes. \n\n\tOleg\n\n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Tue, 11 May 1999 20:59:15 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] 6.5 cvs ERROR: copyObject: don't know how to copy 604" } ]
[ { "msg_contents": "Mail Delivery Subsystem wrote:\n> \n> The original message was received at Sun, 2 May 1999 00:35:36 +0800 (KRSS)\n> from dune.krs.ru [195.161.16.38]\n...ops...\n\n> Subject: Re: [INTERFACES] Some needed features in postgresql\n> Date: Sun, 02 May 1999 00:35:35 +0800\n> From: Vadim Mikheev <[email protected]>\n> Organization: OJSC Rostelecom (Krasnoyarsk)\n> To: Tom Lane <[email protected]>\n> CC: Peter Garner <[email protected]>, [email protected]\n> References: <[email protected]>\n> \n> Tom Lane wrote:\n> >\n> > > 2. Is there any work being done on the fact that\n> > > transactions are aborted when an error occurs? It\n> > > really lessens the usefulness of the product when,\n> > > after adding 20,000 records, the transaction aborts\n> > > because the 20,001th record is a dup key! :-)\n> >\n> > I've griped about that myself in the past, but it's not real\n> > clear how else it ought to work. Maybe you should be using\n> > smaller transactions ;-)\n> \n> Oracle sets implicit savepoint before processing any\n> user' statement.\n> \n> We still haven't savepoints but live with hope to implement\n> them in future -:) Actualy, having MVCC I would like to see\n> savepoints implemented as sooner as possible: there is\n> high possibility of aborts in serializable mode...\n> \n> Vadim\n", "msg_date": "Sun, 02 May 1999 00:38:03 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Returned mail: User unknown" } ]
[ { "msg_contents": "\n\nignore this...just making sure I haven't broken anything ...\n\n", "msg_date": "Sat, 1 May 1999 15:08:14 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy>", "msg_from_op": true, "msg_subject": "Just a test" } ]
[ { "msg_contents": "\nIgnore it...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 1 May 1999 16:59:34 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "One test..." } ]
[ { "msg_contents": "\nYou probably didnt' see the previous one though :)\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 1 May 1999 17:03:19 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Last test..." } ]
[ { "msg_contents": "I notice that both 6.4.2 and 6.5-current make no complaint if I do\n\ncreate table t1 (a int4, b int4);\n\nselect a+1,b from t1 group by b;\n\nShouldn't this draw \"Illegal use of aggregates or non-group column in\ntarget list\"? It does if there's an aggregate nearby:\n\nselect sum(a),a+1,b from t1 group by b;\nERROR: Illegal use of aggregates or non-group column in target list\nselect sum(a),b from t1 group by b;\n[ no problem ]\n\nI think the system is simply failing to apply the check in the case\nwhere there's a GROUP BY but no aggregate functions. Comments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 01 May 1999 20:09:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Shouldn't this be invalid?" } ]
[ { "msg_contents": "Here it is. I think Tom mentioned this problem.\n\n---------------------------------------------------------------------------\n\n\nselect date_trunc('month',transdate), sum(quantity) from wolf group by\n1 order by 1\\g\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nWe have lost the connection to the backend, so further processing is\nimpossible. Terminating.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 2 May 1999 00:29:18 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "query dumping core" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Here it is. I think Tom mentioned this problem.\n\n> select date_trunc('month',transdate), sum(quantity) from wolf group by\n> 1 order by 1\n> pqReadData() -- backend closed the channel unexpectedly.\n\nYeah, this is another one of the GROUP-BY-nontrivial-expression\nproblems. I'm on it. It looks like the HAVING patch broke the\ninteraction between union_planner() and make_groupPlan().\nI'm trying to restructure that code in a way that's more\nintelligible, as well as correct...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 02 May 1999 01:42:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] query dumping core " } ]
[ { "msg_contents": "Duno, try program called: Sam Spade at : http://www.download.com maybe\nit will help/maybe not. It will sometimes determine the IP time info.\nI'm one of those damn newbies! Ya know.\n\nGood luck.\n\nOn 30 Mar 1999 19:02:45 -0500, [email protected] (\"Jackson,\nDeJuan\") wrote:\n\n>Could one of you kinds soul point me to the PostgreSQL code for determining\n>Timezones and Daylight Savings. If I can assess the OS's database that\n>would be best. Thanks\n>\t-DEJ\n\n", "msg_date": "Sun, 02 May 1999 22:43:20 GMT", "msg_from": "[email protected] (KaTMiX)", "msg_from_op": true, "msg_subject": "Re: [OT] Timezones and Daylight savings." } ]
[ { "msg_contents": "I just committed a rewrite of union_planner, make_groupPlan, and\nrelated routines that corrects several of the bugs we've been\nseeing lately. In particular, cases involving nontrivial GROUP BY\nexpressions work again. The core of the problem was that the\nEXCEPT/HAVING patch broke some extremely delicate (and quite\nundocumented) interactions between these routines. I decided\nrewrite was better than (another layer of) patch, especially\nsince I could document along the way.\n\nThere are closely associated bugs in the rewriter and parser that\nI have not gone after. Jan's example still fails:\n\nCREATE TABLE t1 (a int4, b int4);\nCREATE VIEW v1 AS SELECT b, count(b) FROM t1 GROUP BY b;\n\nSELECT count FROM v1;\n\nbecause the rewriter is mislabeling both the target column 'count'\nand the group-by column 'b' with resno 1. More interestingly,\ngiven the same view\n\nSELECT b FROM v1;\n\nalso fails, even though there is no resno conflict. The problem in\nthis case is that the query is marked hasAggs, even though all the\naggregates have been optimized out. By the time the planner realizes\nthat there are not in fact any aggregates, it's too late to recover\neasily, so for now I just made it report an error. Jan, how hard would\nit be to make the rewriter tell the truth in this case?\n\nAlso, the problem Michael Davis reported on 4/29 seems to be in the\nparser:\n\ninsert into si_tmpVerifyAccountBalances select invoiceid+3, memberid, 1,\nTotShippingHandling from InvoiceLineDetails where TotShippingHandling <> 0\nand InvoiceLinesID <= 100 group by invoiceid+3, memberid,\nTotShippingHandling;\nERROR: INSERT has more expressions than target columns\n\nsince that error message appears only in the parser. Thomas, did you\nchange anything recently in parsing of INSERT ... 
SELECT?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 02 May 1999 20:54:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "GROUP BY fixes committed" }, { "msg_contents": "> Also, the problem Michael Davis reported on 4/29 seems to be in the\n> parser:\n> insert into si_tmpVerifyAccountBalances select invoiceid+3, memberid, 1,\n> TotShippingHandling from InvoiceLineDetails where TotShippingHandling <> 0\n> and InvoiceLinesID <= 100 group by invoiceid+3, memberid,\n> TotShippingHandling;\n> ERROR: INSERT has more expressions than target columns\n> since that error message appears only in the parser. Thomas, did you\n> change anything recently in parsing of INSERT ... SELECT?\n\nNot that I know of. And I'm not sure what you mean by \"recently\". I've\nlooked at the CVS logs but those don't help much because, especially\nfor patches submitted by others, there is a generic description\nentered into the log which can't possibly describe the fixes in the\nparticular file. *sigh*\n\nAnyway, I'd lost the thread. Did Michael say that this is a\nrecently-introduced problem?\n\n - Tom\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Mon, 03 May 1999 17:23:06 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] GROUP BY fixes committed" }, { "msg_contents": "\n\nNice summary. Where are we on the last item.\n \n\n> I just committed a rewrite of union_planner, make_groupPlan, and\n> related routines that corrects several of the bugs we've been\n> seeing lately. In particular, cases involving nontrivial GROUP BY\n> expressions work again. The core of the problem was that the\n> EXCEPT/HAVING patch broke some extremely delicate (and quite\n> undocumented) interactions between these routines. 
I decided\n> rewrite was better than (another layer of) patch, especially\n> since I could document along the way.\n> \n> There are closely associated bugs in the rewriter and parser that\n> I have not gone after. Jan's example still fails:\n> \n> CREATE TABLE t1 (a int4, b int4);\n> CREATE VIEW v1 AS SELECT b, count(b) FROM t1 GROUP BY b;\n> \n> SELECT count FROM v1;\n> \n> because the rewriter is mislabeling both the target column 'count'\n> and the group-by column 'b' with resno 1. More interestingly,\n> given the same view\n> \n> SELECT b FROM v1;\n> \n> also fails, even though there is no resno conflict. The problem in\n> this case is that the query is marked hasAggs, even though all the\n> aggregates have been optimized out. By the time the planner realizes\n> that there are not in fact any aggregates, it's too late to recover\n> easily, so for now I just made it report an error. Jan, how hard would\n> it be to make the rewriter tell the truth in this case?\n> \n> Also, the problem Michael Davis reported on 4/29 seems to be in the\n> parser:\n> \n> insert into si_tmpVerifyAccountBalances select invoiceid+3, memberid, 1,\n> TotShippingHandling from InvoiceLineDetails where TotShippingHandling <> 0\n> and InvoiceLinesID <= 100 group by invoiceid+3, memberid,\n> TotShippingHandling;\n> ERROR: INSERT has more expressions than target columns\n> \n> since that error message appears only in the parser. Thomas, did you\n> change anything recently in parsing of INSERT ... SELECT?\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 May 1999 12:54:47 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] GROUP BY fixes committed" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Nice summary. Where are we on the last item.\n\nIt was still broken as of a day or two ago.\n\nI poked at it some more, and concluded that INSERT ... SELECT is pretty\nbroken when the SELECT includes GROUP BY. I didn't try to delve into\nthe code though, just experimented with different commands. Here are my\nnotes:\n\nTEST CONDITION:\nCREATE TABLE \"si_tmpverifyaccountbalances\" (\n \"type\" int4 NOT NULL,\n \"memberid\" int4 NOT NULL,\n \"categoriesid\" int4 NOT NULL,\n \"amount\" numeric);\nCREATE TABLE \"invoicelinedetails\" (\n \"invoiceid\" int4,\n \"memberid\" int4,\n \"totshippinghandling\" numeric,\n \"invoicelinesid\" int4);\n\nACCEPTED:\ninsert into si_tmpVerifyAccountBalances select invoiceid+3,\nmemberid, 1, TotShippingHandling from InvoiceLineDetails\ngroup by invoiceid+3, memberid;\n\nNOT ACCEPTED:\ninsert into si_tmpVerifyAccountBalances select invoiceid+3,\nmemberid, 1, TotShippingHandling from InvoiceLineDetails\ngroup by invoiceid+3, memberid, TotShippingHandling;\n\nProbably error check is including GROUP BY targets in its count of\nthings-to-be-inserted :-(. 
The behavior is quite inconsistent though.\nAlso, why doesn't the first example get rejected, since\nTotShippingHandling is neither GROUP BY nor an aggregate??\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 May 1999 13:40:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] GROUP BY fixes committed " }, { "msg_contents": "> ACCEPTED:\n> insert into si_tmpVerifyAccountBalances select invoiceid+3,\n> memberid, 1, TotShippingHandling from InvoiceLineDetails\n> group by invoiceid+3, memberid;\n> \n> NOT ACCEPTED:\n> insert into si_tmpVerifyAccountBalances select invoiceid+3,\n> memberid, 1, TotShippingHandling from InvoiceLineDetails\n> group by invoiceid+3, memberid, TotShippingHandling;\n> \n> Probably error check is including GROUP BY targets in its count of\n> things-to-be-inserted :-(. The behavior is quite inconsistent though.\n> Also, why doesn't the first example get rejected, since\n> TotShippingHandling is neither GROUP BY nor an aggregate??\n\nYikes. We check to make sure all non-agg columns are referenced in\nGROUP BY, but not that all GROUP BY's are in target list, perhaps?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 May 1999 14:33:54 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] GROUP BY fixes committed" } ]
[ { "msg_contents": "subscribe pgsql-hackers [email protected]\n\n\n\n\n\n\n", "msg_date": "Mon, 03 May 1999 10:53:54 +0200", "msg_from": "Thomas Malkus <[email protected]>", "msg_from_op": true, "msg_subject": "(no subject)" } ]
[ { "msg_contents": "Hello,\n\nI´ve just downloaded the snapshot-version april, 28th. Doing a join between two tables, I got the message: \nERROR: hash table out of memory. Use -B parameter to increase buffers.\n\nWell, increasing the buffers to -B256 results to:\npqReadData() -- backend closed the channel unexpectedly.\n\nThe log says:\nFATAL: s_lock(4015e1c5) at bufmgr.c:1949, stuck spinlock. Aborting.\n\nFATAL: s_lock(4015e1c5) at bufmgr.c:1949, stuck spinlock. Aborting.\n/usr/local/pgsql/bin/postmaster: reaping dead processes...\n/usr/local/pgsql/bin/postmaster: CleanupProc: pid 12803 exited with status 6\n\nThe exact query looks like:\n\nCREATE TABLE \"egal1\" (\n \"lfnr\" character varying,\n \"artnr\" character varying);\nCREATE TABLE \"egal2\" (\n \"lfnr\" character varying,\n \"name1\" character varying(21),\n \"eknr\" int2);\n\n\nCOPY \"egal1\" FROM stdin;\n-- a lot of data (130000 records)\nCOPY \"egal2\" FROM stdin;\n-- a lot of data (12000 records)\n\nSELECT b.lfnr, b.eknr, b.name1 FROM egal1 a, egal2 b WHERE a.lfnr = b.lfnr;\n\nAn EXPLAIN of this query\nHash Join (cost=11167.46 rows=138130 width=38)\n -> Seq Scan on egal1 a (cost=5644.26 rows=138129 width=12)\n -> Hash (cost=507.47 rows=11984 width=26)\n -> Seq Scan on egal2 b (cost=507.47 rows=11984 width=26)\n\nEXPLAIN\nI think to remember some similar error on an older version, right? \n\nKind regards,\n\n\nMichael Contzen\[email protected]\nDohle Systemberatung GmbH\nGermany", "msg_date": "Mon, 3 May 1999 13:53:40 +0100", "msg_from": "Michael Contzen <[email protected]>", "msg_from_op": true, "msg_subject": "an older problem? hash table out of memory" }, { "msg_contents": "Michael Contzen <[email protected]> writes:\n> I´ve just downloaded the snapshot-version april, 28th. Doing a join\n> between two tables, I got the message:\n> ERROR: hash table out of memory. Use -B parameter to increase buffers.\n\nI saw this too over the weekend, but didn't have time to look into it.\n\nAfter a quick eyeballing of nodeHash.c, I have a question for anyone\nwho's worked on the hashjoin code before: why is the sizing of the\nhash table driven off -B in the first place? It looks like the table\nwas once allocated in shared buffer memory, but it ain't anymore; it's\njust palloc'd. 
Offhand it seems like the -S (sort space) parameter\nmight be a better thing to use as the hashtable size control.\n\nThat specific error message comes out if the hashtable \"overflow area\"\nfills up because too many tuples landed in the same hashbucket. So you\ncan provoke it easily with a test case where a table contains a few\nthousand identical rows (which is in fact what my test data looked like;\ndunno about Michael's). In real life it'd have a small but definitely\nnot zero probability of happening. I'm surprised that we have not seen\nthis complaint more before. It's possible that the recent work on the\noptimizer has made it more likely to choose hashjoin than it used to be.\nAnyway, I think we'd better invest the work to make the overflow area\nexpansible.\n\n> Well, increasing the buffers to -B256 results to:\n> pqReadData() -- backend closed the channel unexpectedly.\n\nHmm, I didn't try that. There must be some other problem as well.\nWill look into it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 May 1999 10:29:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] an older problem? hash table out of memory " }, { "msg_contents": "Michael Contzen <[email protected]> writes:\n> Well, increasing the buffers to -B256 results to:\n> pqReadData() -- backend closed the channel unexpectedly.\n> The log says:\n> FATAL: s_lock(4015e1c5) at bufmgr.c:1949, stuck spinlock. Aborting.\n\nI can't reproduce that here... anyone else?\n\nThe \"hashtable out of memory\" problem is reproducible, however.\nI'm on it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 May 1999 20:21:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] an older problem? 
hash table out of memory " }, { "msg_contents": "> Michael Contzen <[email protected]> writes:\n> > Well, increasing the buffers to -B256 results to:\n> > pqReadData() -- backend closed the channel unexpectedly.\n> > The log says:\n> > FATAL: s_lock(4015e1c5) at bufmgr.c:1949, stuck spinlock. Aborting.\n> \n> I can't reproduce that here... anyone else?\n> \n> The \"hashtable out of memory\" problem is reproducible, however.\n> I'm on it.\n\nHistorically, no one knows much about the hash routines.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 4 May 1999 00:38:49 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] an older problem? hash table out of memory" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> The \"hashtable out of memory\" problem is reproducible, however.\n>> I'm on it.\n\n> Historically, no one knows much about the hash routines.\n\nWell, I've been learning some unpleasant truths :-(. Hope to have\nsome fixes to commit in the next few days.\n\nThe immediate cause of one coredump I saw was that someone who was\noverenthusiastically replacing sprintf's with snprintf's had written\n\n\tsnprintf(tempname, strlen(tempname), ...);\n\nwhere tempname points to just-allocated, quite uninitialized\nmemory. Exercise for the student: how many different ways can\nthis go wrong? 
Unsettling question: how many other places did\nthat someone make the same mistake??\n\nI don't have time for this right now, but it'd be a real good idea\nto grep the source for strlen near snprintf to see if this same\nproblem appears anywhere else...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 04 May 1999 10:04:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] an older problem? hash table out of memory " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> The \"hashtable out of memory\" problem is reproducible, however.\n> >> I'm on it.\n> \n> > Historically, no one knows much about the hash routines.\n> \n> Well, I've been learning some unpleasant truths :-(. Hope to have\n> some fixes to commit in the next few days.\n> \n> The immediate cause of one coredump I saw was that someone who was\n> overenthusiastically replacing sprintf's with snprintf's had written\n> \n> \tsnprintf(tempname, strlen(tempname), ...);\n\nHere they are. Can you properly fix them? Looks like good news that I\nfound one of the ones you found. The others may be OK:\n\n./backend/commands/view.c:\tsnprintf(buf, strlen(viewName) + 5, \"_RET%s\", viewName);\n./backend/executor/nodeHash.c:\tsnprintf(tempname, strlen(tempname), \"HJ%d.%d\", (int) MyProcPid, hjtmpcnt);\n./backend/libpq/pqcomm.c:\t\t\tsnprintf(PQerrormsg + strlen(PQerrormsg), ERROR_MSG_LENGTH,\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 4 May 1999 13:21:52 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] an older problem? hash table out of memory" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Here they are. Can you properly fix them? 
Looks like good news that I\n> found one of the ones you found. The others may be OK:\n\nIs that all? Great, I was afraid we had some major problems lurking.\n\n> ./backend/commands/view.c:\tsnprintf(buf, strlen(viewName) + 5, \"_RET%s\", viewName);\n\nThis one is OK since viewName is passed in (and is valid, we hope).\n\n> ./backend/executor/nodeHash.c:\tsnprintf(tempname, strlen(tempname), \"HJ%d.%d\", (int) MyProcPid, hjtmpcnt);\n\nThis is the one I found. I'm still working on nodeHash but hope to\ncommit fixes in the next day or so.\n\n> ./backend/libpq/pqcomm.c:\t\t\tsnprintf(PQerrormsg + strlen(PQerrormsg), ERROR_MSG_LENGTH,\n\nThis is a bit bogus --- ERROR_MSG_LENGTH is the overall size of\nPQerrormsg, but we're concatenating to what's already in the buffer, so\nsnprintf's limit should really be ERROR_MSG_LENGTH - strlen(PQerrormsg).\nI applied a patch for consistency's sake, although I doubt this\nstatement could ever overrun the buffer in practice.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 04 May 1999 19:45:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] an older problem? hash table out of memory " } ]
[ { "msg_contents": ">> I have tried many combinations of things to speed this\n>> up as you all have suggested. I have had no success\n>> using \"copy\" at all because of problems with quotes\n>> and other punctuation in the data.\n>\n>I must tell you, this doesn't sound reasonable to me. It's usually very\n>easy, if you already have a program that writes down the fields, to make\n>sure it scans the contents thereof and adds a backslash before each tab,\n>newline and backslash in every one of the fields.\n\nIs there a doc somewhere about what characters are treated\nin some special way when doing a copy?\n\n\n\n\n", "msg_date": "Mon, 3 May 1999 09:10:06 -0500", "msg_from": "\"Frank Morton\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [SQL] Slow Inserts Again" }, { "msg_contents": "At 17:10 +0300 on 03/05/1999, Frank Morton wrote:\n\n\n> Is there a doc somewhere about what characters are treated\n> in some special way when doing a copy?\n\nYes, the documentation of the COPY command. The essence is that if you use\nthe default delimiter (tab), you need to put a backslash before each tab,\nnewline and backslash in each of the text fields. Oh, and null fields\nshould be converted to \\N.\n\nIt's all in the docs. Why don't you try to copy some of the rows into a\ntemporary table, and dump that table using pg_dump -a -t table_name dbname?\nIt may give you a clue.\n\nHerouth\n\n--\nHerouth Maoz, Internet developer.\nOpen University of Israel - Telem project\nhttp://telem.openu.ac.il/~herutma\n\n\n", "msg_date": "Mon, 3 May 1999 17:27:25 +0300", "msg_from": "Herouth Maoz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] Slow Inserts Again" }, { "msg_contents": "Herouth Maoz wrote: \n> It's all in the docs. 
Why don't you try to copy some of the rows into a\n> temporary table, and dump that table using pg_dump -a -t table_name dbname?\n> It may give you a clue.\n\nthis is unrelated to the slow insert bug, but Herouth's suggestion\nreminded me of something that needs to be looked at before 6.5 is out of\nbeta: pg_dump seems to have problems with mixed case tablenames. There\ndoesn't seem to be a way to send a quoted tablename into pg_dump as the\nvalue for a -t option, in 6.4.2 (example below). Can someone try this on\n6.5beta? I know some issues with quoting output of pg_dump (i.e. COPY)\nwas addressed, I'm wondering if input parsing got touched.\n\nActually, groveling through the source, it looks like even 6.4.2 should\n\"do the right thing\": the query is built with the class (table) name\nwrapped with fmtId(), which should do exactly this quoting. Anyone else\nsee this?\n\nP.S. shouldn't the non existence of the table handed to pg_dump raise a\nuser visible error?\n\nP.P.S. How does one go about setting up a second version of PG to test\non the same machine, without interference with the production (older)\nversion? I've only got the one machine to test on.\n\n\ntest=> create table TestTable (a int, b text);\nCREATE\ntest=> create table \"TestTable\" (c int, d text);\nCREATE\ntest=> \\q\n$ pg_dump -t TestTable test\nCREATE TABLE \"testtable\" (\n \"a\" int4,\n \"b\" text);\nCOPY \"testtable\" FROM stdin;\n\\.\n$ pg_dump -t \"TestTable\" test\nCREATE TABLE \"testtable\" (\n \"a\" int4,\n \"b\" text);\nCOPY \"testtable\" FROM stdin;\n\\.\n$ pg_dump -t \\\"TestTable\\\" test\n$ pg_dump test\nCREATE TABLE \"testtable\" (\n \"a\" int4,\n \"b\" text);\nCREATE TABLE \"TestTable\" (\n \"c\" int4,\n \"d\" text);\nCOPY \"testtable\" FROM stdin;\n\\.\nCOPY \"TestTable\" FROM stdin;\n\\.\n\n\n\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. 
Main St., Houston, TX 77005\n", "msg_date": "Mon, 03 May 1999 12:01:31 -0500", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "pg_dump bug (was Re: [SQL] Slow Inserts Again)" }, { "msg_contents": "\"Ross J. Reedstrom\" <[email protected]> writes:\n> P.S. shouldn't the non existence of the table handed to pg_dump raise a\n> user visible error?\n\nProbably.\n\n> P.P.S. How does one go about setting up a second version of PG to test\n> on the same machine, without interference with the production (older)\n> version? I've only got the one machine to test on.\n\nNo problem, I do that all the time. When running \"configure\", override\nthe default install location *and* the default port number. For\nexample, I build test versions using\n\n\t--with-pgport=5440 --prefix=/users/postgres/testversion\n\n(You might also want --enable-cassert when working with beta code.)\n\nBuild, and make install as per normal --- it will go into the directory\nyou specified as the prefix. Before doing initdb or starting the test\npostmaster, make sure you have environment set up to reference the test\nlocation; the critical stuff is PATH, PGLIB, PGDATA, and USER (initdb\nuses USER to decide who'll be superuser).\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 May 1999 18:57:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_dump bug (was Re: [SQL] Slow Inserts Again) " }, { "msg_contents": "> ... pg_dump seems to have problems with mixed case tablenames. There\n> doesn't seem to be a way to send a quoted tablename into pg_dump as the\n> value for a -t option, in 6.4.2 (example below). Can someone try this on\n> 6.5beta? I know some issues with quoting output of pg_dump (i.e. COPY)\n> was addressed, I'm wondering if input parsing got touched.\n\npg_dump explicitly converts all table names to lowercase. 
I've got a\npatch which looks for a table name which starts with a double quote,\nand suppresses the case conversion if so:\n\n[postgres@golem pg_dump]$ pg_dump -t '\"MixedCase\"' postgres\nCREATE TABLE \"MixedCase\" (\n \"i\" int4);\nCOPY \"MixedCase\" FROM stdin;\n1\n2\n\\.\n\nPatch enclosed for you to try. Bruce, any reason not to apply this to\nthe tree?\n\n - Tom\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California", "msg_date": "Tue, 04 May 1999 13:54:33 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_dump bug (was Re: [SQL] Slow Inserts Again)" }, { "msg_contents": "Apply, please. It is a bug fix.\n\n> > ... pg_dump seems to have problems with mixed case tablenames. There\n> > doesn't seem to be a way to send a quoted tablename into pg_dump as the\n> > value for a -t option, in 6.4.2 (example below). Can someone try this on\n> > 6.5beta? I know some issues with quoting output of pg_dump (i.e. COPY)\n> > was addressed, I'm wondering if input parsing got touched.\n> \n> pg_dump explicitly converts all table names to lowercase. I've got a\n> patch which looks for a table name which starts with a double quote,\n> and suppresses the case conversion if so:\n> \n> [postgres@golem pg_dump]$ pg_dump -t '\"MixedCase\"' postgres\n> CREATE TABLE \"MixedCase\" (\n> \"i\" int4);\n> COPY \"MixedCase\" FROM stdin;\n> 1\n> 2\n> \\.\n> \n> Patch enclosed for you to try. Bruce, any reason not to apply this to\n> the tree?\n> \n> - Tom\n> \n> -- \n> Thomas Lockhart\t\t\t\[email protected]\n> South Pasadena, California\n\n> *** pg_dump.c.orig\tThu Apr 15 05:08:53 1999\n> --- pg_dump.c\tTue May 4 13:47:01 1999\n> ***************\n> *** 606,615 ****\n> \t\t\t\t\tint\t\t\ti;\n> \n> \t\t\t\t\ttablename = strdup(optarg);\n> ! \t\t\t\t\tfor (i = 0; tablename[i]; i++)\n> ! \t\t\t\t\t\tif (isascii((unsigned char) tablename[i]) &&\n> ! \t\t\t\t\t\t\tisupper(tablename[i]))\n> ! 
\t\t\t\t\t\t\ttablename[i] = tolower(tablename[i]);\n> \t\t\t\t}\n> \t\t\t\tbreak;\n> \t\t\tcase 'v':\t\t\t/* verbose */\n> --- 606,626 ----\n> \t\t\t\t\tint\t\t\ti;\n> \n> \t\t\t\t\ttablename = strdup(optarg);\n> ! \t\t\t\t\t/* quoted string? Then strip quotes and preserve case... */\n> ! \t\t\t\t\tif (tablename[0] == '\"')\n> ! \t\t\t\t\t{\n> ! \t\t\t\t\t\tstrcpy(tablename, &tablename[1]);\n> ! \t\t\t\t\t\tif (*(tablename+strlen(tablename)-1) == '\"')\n> ! \t\t\t\t\t\t\t*(tablename+strlen(tablename)-1) = '\\0';\n> ! \t\t\t\t\t}\n> ! \t\t\t\t\t/* otherwise, convert table name to lowercase... */\n> ! \t\t\t\t\telse\n> ! \t\t\t\t\t{\n> ! \t\t\t\t\t\tfor (i = 0; tablename[i]; i++)\n> ! \t\t\t\t\t\t\tif (isascii((unsigned char) tablename[i]) &&\n> ! \t\t\t\t\t\t\t\tisupper(tablename[i]))\n> ! \t\t\t\t\t\t\t\ttablename[i] = tolower(tablename[i]);\n> ! \t\t\t\t\t}\n> \t\t\t\t}\n> \t\t\t\tbreak;\n> \t\t\tcase 'v':\t\t\t/* verbose */\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 4 May 1999 12:55:40 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_dump bug (was Re: [SQL] Slow Inserts Again)" }, { "msg_contents": "> Apply, please. It is a bug fix.\n> > > ... pg_dump seems to have problems with mixed case tablenames.\n> > pg_dump explicitly converts all table names to lowercase. I've got a\n> > patch which looks for a table name which starts with a double quote,\n> > and suppresses the case conversion if so:\n\nAlready done :)\n\n - Tom\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 05 May 1999 06:38:27 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_dump bug (was Re: [SQL] Slow Inserts Again)" } ]
[ { "msg_contents": "Hi,\n\nI have some problems with the parser.\n\n1)\tOf the following queries, submitted with libpgtcl, the first two are\n\tparsed correctly while the third gives a parse error:\n\n\t1.\tselect 1\n\t2.\tselect 1; select 2;\n\t3.\tselect 1; select 2\n\tERROR: parser: parse error at or near \"\"\n\n\tIt seems that when a query consists of two or more statements the\n\tlast one must be terminated by ';'. In my opinion this is not\n\tcorrect because it is not consistent with case 1) and because it\n\tbreaks many existing programs compatible with previous versions\n\tof pgsql where the syntax of point 2) was considered valid.\n\n\tThe same problem applies also to the body of sql functions, while it\n\tdoesn't apply to queries submitted by psql because they are split\n\tinto separate statements submitted one by one.\n\n2)\tThe following query doesn't work:\n\n\tcreate operator *= (\n \t\tleftarg=_varchar, \n \t\trightarg=varchar, \n \t\tprocedure=array_varchareq);\n\tERROR: parser: parse error at or near \"varchar\"\n\n\tThe query should work because it is consistent with the documented\n\tsyntax of the create operator:\n\n\tCommand: create operator\n\tDescription: create a user-defined operator\n\tSyntax:\n \tCREATE OPERATOR operator_name (\n \t[LEFTARG = type1][,RIGHTARG = type2]\n \t,PROCEDURE = func_name,\n \t[,COMMUTATOR = com_op][,NEGATOR = neg_op]\n \t[,RESTRICT = res_proc][,JOIN = join_proc][,HASHES]\n \t[,SORT1 = left_sort_op][,SORT2 = right_sort_op]);\n\n\tand varchar is a valid type name (it is in pg_type).\n\tAfter a little experimenting it turned out that varchar is also a\n\treserved word and therefore not acceptable as a type name. To have\n\tthe above statement work you must quote the word \"varchar\".\n\n\tThis is somewhat inconsistent with the syntax of create operator\n\tand may confuse the user.\n\n3)\tThe above query introduces another problem. How can the user know\n\twhat is wrong in the input? 
In the example \"parse error at or near\"\n\tis not a very explicative message. If I had read \"reserved keyword\"\n\tinstead I would not have spent time trying to figure out what's\n\twrong in my query.\n\n\tThe parser should be made more verbose and helpful about errors.\n\n4)\tAnd another related question: if the casual user can be confused\n\tby obscure parser messages how can the postgres hacker debug the\n\tparser grammar? I tried with gdb but it is completely useless given\n\tthe way the parser work.\n\tIs there any tool or trick to debug the grammar?\n\n\n-- \nMassimo Dal Zotto\n\n+----------------------------------------------------------------------+\n| Massimo Dal Zotto email: [email protected] |\n| Via Marconi, 141 phone: ++39-0461534251 |\n| 38057 Pergine Valsugana (TN) www: http://www.cs.unitn.it/~dz/ |\n| Italy pgp: finger [email protected] |\n+----------------------------------------------------------------------+\n", "msg_date": "Mon, 3 May 1999 16:30:16 +0200 (MET DST)", "msg_from": "Massimo Dal Zotto <[email protected]>", "msg_from_op": true, "msg_subject": "problems with parser" }, { "msg_contents": "Massimo Dal Zotto ha scritto:\n\n> Hi,\n>\n> I have some problems with the parser.\n>\n> 1) Of the following queries, submitted with libpgtcl, the first two are\n> parsed correctly while the third gives a parse error:\n>\n> 1. select 1\n> 2. select 1; select 2;\n> 3. select 1; select 2\n> ERROR: parser: parse error at or near \"\"\n>\n> It seems that when a query consiste of two or more statements the\n> last one must be terminated by ';'. 
In my opinion this is not\n> correct because it is not consistent with case 1) and because it\n> breaks many existing programs compatible with previous versions\n> of pgsql where the syntax of point 2) was considered valid.\n>\n> The same problem applies also to the body of sql functions, while it\n> doesn't apply to query submitted by psql because they are splitted\n> in separate statements submitted one by one.\n>\n> 2) The following query does't work:\n>\n> create operator *= (\n> leftarg=_varchar,\n> rightarg=varchar,\n> procedure=array_varchareq);\n> ERROR: parser: parse error at or near \"varchar\"\n>\n> The query should work because it is consistent with the documented\n> syntax of the create operator:\n>\n> Command: create operator\n> Description: create a user-defined operator\n> Syntax:\n> CREATE OPERATOR operator_name (\n> [LEFTARG = type1][,RIGHTARG = type2]\n> ,PROCEDURE = func_name,\n> [,COMMUTATOR = com_op][,NEGATOR = neg_op]\n> [,RESTRICT = res_proc][,JOIN = join_proc][,HASHES]\n> [,SORT1 = left_sort_op][,SORT2 = right_sort_op]);\n>\n> and varchar is a valid type name (it is in pg_type).\n> After a litte experimenting it turned out that varchar is also a\n> reserved word and therefore not acceptable as a type name. 
To have\n> the above statement work you must quote the word \"varchar\".\n\nYes but some times the parser understands the keyword varchar without \"\" as in:\n\ncreate function \"varchar\"(float) returns varchar as\n ^^^^^^ ^^^^^^\n'begin\n return $1;\nend;\n' language 'plpgsql';\nCREATE\n\ndrop table a;\nDROP\ncreate table a (a float8);\nCREATE\ninsert into a values (1.2);\nINSERT 128074 1\n\nor in the CAST()...\n\nselect cast(a as varchar) from a;\nvarchar\n-------\n 1.2\n(1 row)\n\nselect varchar(a) from a;\nERROR: parser: parse error at or near \"a\"\nselect \"varchar\"(a) from a;\nvarchar\n-------\n 1.2\n(1 row)\n\n\n\n>\n> This is somewhat inconsistent with the syntax of create operator\n> and may confuse the user.\n>\n> 3) The above query introduces another problem. How can the user know\n> what is wrong in the input. In the example \"parse error at or near\"\n> is not a very explicative message. If I had read \"reserved keyword\"\n> instead I would not have spent time trying to figure out what's\n> wrong in my query.\n>\n> The parser should be made more verbose and helpful about errors.\n>\n> 4) And another related question: if the casual user can be confused\n> by obscure parser messages how can the postgres hacker debug the\n> parser grammar? 
I tried with gdb but it is completely useless given\n> the way the parser work.\n> Is there any tool or trick to debug the grammar?\n>\n> --\n> Massimo Dal Zotto\n>\n> +----------------------------------------------------------------------+\n> | Massimo Dal Zotto email: [email protected] |\n> | Via Marconi, 141 phone: ++39-0461534251 |\n> | 38057 Pergine Valsugana (TN) www: http://www.cs.unitn.it/~dz/ |\n> | Italy pgp: finger [email protected] |\n> +----------------------------------------------------------------------\n\nPostgreSQL 6.5.0 on i586-pc-linux-gnu, compiled by gcc 2.7.2.3\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nJose'\n\n\n", "msg_date": "Tue, 04 May 1999 15:23:52 +0200", "msg_from": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] problems with parser" }, { "msg_contents": "[Charset ISO-8859-1 unsupported, filtering to ASCII...]\n> Hi,\n> \n> I have some problems with the parser.\n> \n> 1)\tOf the following queries, submitted with libpgtcl, the first two are\n> \tparsed correctly while the third gives a parse error:\n> \n> \t1.\tselect 1\n> \t2.\tselect 1; select 2;\n> \t3.\tselect 1; select 2\n> \tERROR: parser: parse error at or near \"\"\n> \n> \tIt seems that when a query consiste of two or more statements the\n> \tlast one must be terminated by ';'. 
In my opinion this is not\n> \tcorrect because it is not consistent with case 1) and because it\n> \tbreaks many existing programs compatible with previous versions\n> \tof pgsql where the syntax of point 2) was considered valid.\n> \n> \tThe same problem applies also to the body of sql functions, while it\n> \tdoesn't apply to query submitted by psql because they are splitted\n> \tin separate statements submitted one by one.\n\n\nAdded to TODO list.\n\n> \n> 2)\tThe following query does't work:\n> \n> \tcreate operator *= (\n> \t\tleftarg=_varchar, \n> \t\trightarg=varchar, \n> \t\tprocedure=array_varchareq);\n> \tERROR: parser: parse error at or near \"varchar\"\n> \n> \tThe query should work because it is consistent with the documented\n> \tsyntax of the create operator:\n> \n> \tCommand: create operator\n> \tDescription: create a user-defined operator\n> \tSyntax:\n> \tCREATE OPERATOR operator_name (\n> \t[LEFTARG = type1][,RIGHTARG = type2]\n> \t,PROCEDURE = func_name,\n> \t[,COMMUTATOR = com_op][,NEGATOR = neg_op]\n> \t[,RESTRICT = res_proc][,JOIN = join_proc][,HASHES]\n> \t[,SORT1 = left_sort_op][,SORT2 = right_sort_op]);\n> \n> \tand varchar is a valid type name (it is in pg_type).\n> \tAfter a litte experimenting it turned out that varchar is also a\n> \treserved word and therefore not acceptable as a type name. To have\n> \tthe above statement work you must quote the word \"varchar\".\n> \n> \tThis is somewhat inconsistent with the syntax of create operator\n> \tand may confuse the user.\n\n\nAlso added to TODO list.\n\n> \n> 3)\tThe above query introduces another problem. How can the user know\n> \twhat is wrong in the input. In the example \"parse error at or near\"\n> \tis not a very explicative message. 
If I had read \"reserved keyword\"\n> \tinstead I would not have spent time trying to figure out what's\n> \twrong in my query.\n> \n> \tThe parser should be made more verbose and helpful about errors.\n> \n> 4)\tAnd another related question: if the casual user can be confused\n> \tby obscure parser messages how can the postgres hacker debug the\n> \tparser grammar? I tried with gdb but it is completely useless given\n> \tthe way the parser work.\n> \tIs there any tool or trick to debug the grammar?\n\nI have not looked at this particular problem, but usually the errror\ngenerated by the parser are poor. Unfortunately, I don't know of any\nway to insert our own error messaged based on the type of parser\nfailure. This is locked up in yacc/bison.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 May 1999 13:06:36 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] problems with parser" }, { "msg_contents": "> > I have some problems with the parser.\n> > 1) Of the following queries, submitted with libpgtcl,\n\nMassimo, what version of Postgres are you running? Is this a new\nproblem in the v6.5 beta (which includes a few changes from Stefan\nwhich might have adversely affected the behavior)?\n\n - Tom\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Mon, 10 May 1999 17:35:57 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] problems with parser" }, { "msg_contents": "> \n> > > I have some problems with the parser.\n> > > 1) Of the following queries, submitted with libpgtcl,\n> \n> Massimo, what version of Postgres are you running? 
Is this a new\n> problem in the v6.5 beta (which includes a few changes from Stefan\n> which might have adversely affected the behavior)?\n\nIt was a snapshot of 10-15 days ago. I have seen the problem also in\nprevious snapshots. The problem is in the grammar in the definition\nof multiple queries but unfortunately I don't know yacc enough to fix\nthe bug.\n\n-- \nMassimo Dal Zotto\n\n+----------------------------------------------------------------------+\n| Massimo Dal Zotto email: [email protected] |\n| Via Marconi, 141 phone: ++39-0461534251 |\n| 38057 Pergine Valsugana (TN) www: http://www.cs.unitn.it/~dz/ |\n| Italy pgp: finger [email protected] |\n+----------------------------------------------------------------------+\n", "msg_date": "Tue, 11 May 1999 20:41:28 +0200 (MET DST)", "msg_from": "Massimo Dal Zotto <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] problems with parser" }, { "msg_contents": "[Charset ISO-8859-1 unsupported, filtering to ASCII...]\n> > \n> > > > I have some problems with the parser.\n> > > > 1) Of the following queries, submitted with libpgtcl,\n> > \n> > Massimo, what version of Postgres are you running? Is this a new\n> > problem in the v6.5 beta (which includes a few changes from Stefan\n> > which might have adversely affected the behavior)?\n> \n> It was a snapshot of 10-15 days ago. I have seen the problem also in\n> previous snapshots. The problem is in the grammar in the definition\n> of multiple queries but unfortunately I don't know yacc enough to fix\n> the bug.\n\nThe bug still exists. 
Just start 'postgres' manually without the\npostmaster, and type in a query:\n\n\n\t#$ aspg gdb /u/pg/bin/postgres\n\tGNU gdb \n\tCopyright 1998 Free Software Foundation, Inc.\n\tGDB is free software, covered by the GNU General Public License, and you\n\tare\n\twelcome to change it and/or distribute copies of it under certain\n\tconditions.\n\tType \"show copying\" to see the conditions.\n\tThere is absolutely no warranty for GDB. Type \"show warranty\" for\n\tdetails.\n\tThis GDB was configured as \"i386-unknown-bsdi4.0\"...run -\n\t(gdb) run -D /u/pg/data test\n\tStarting program: /u/pg/bin/postgres -D /u/pg/data test\n\t\n\tPOSTGRES backend interactive interface \n\t$Revision: 1.111 $ $Date: 1999/05/09 23:31:47 $\n\t\n\t> select 1; select 2\n\tERROR: parser: parse error at or near \"\"\n\tERROR: parser: parse error at or near \"\"\n\n\n\t> select 1;select 2;\n\tblank\n\t 1: ?column? (typeid = 23, len = 4, typmod = -1, byval = t)\n\t ----\n\t 1: ?column? = \"1\" (typeid = 23, len = 4, typmod = -1, byval = t)\n\t ----\n\tblank\n\t 1: ?column? (typeid = 23, len = 4, typmod = -1, byval = t)\n\t ----\n\t 1: ?column? = \"2\" (typeid = 23, len = 4, typmod = -1, byval = t)\n\t ----\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 11 May 1999 15:03:22 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] problems with parser" } ]
[ { "msg_contents": "\nignore...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 3 May 1999 11:51:29 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Jsut a test of mail relaying ..." } ]
[ { "msg_contents": "Hi,\n\nhere are some patches for 6.5.0 which I already submitted but have never\nbeen applied. The patches are in the .tar.gz attachment at the end:\n\nvarchar-array.patch this patch adds support for arrays of bpchar() and\n varchar(), which were always missing from postgres.\n\n\t\t\tThese datatypes can be used to replace the _char4,\n\t\t\t_char8, etc., which were dropped some time ago.\n\nblock-size.patch this patch fixes many errors in the parser and other\n programs which happen with very large query statements\n (> 8K) when using a page size larger than 8192.\n\n\t\t\tThis patch is needed if you want to submit queries\n\t\t\tlarger than 8K. Postgres supports tuples up to 32K\n\t\t\tbut you can't insert them because you can't submit\n\t\t\tqueries larger than 8K. My patch fixes this problem.\n\n The patch also replaces all the occurrences of `8192'\n and `1<<13' in the sources with the proper constants\n defined in include files. You should now never find\n\t\t\t8192 hardwired in C code, just to make code clearer.\n\n\n-- \nMassimo Dal Zotto\n\n+----------------------------------------------------------------------+\n| Massimo Dal Zotto email: [email protected] |\n| Via Marconi, 141 phone: ++39-0461534251 |\n| 38057 Pergine Valsugana (TN) www: http://www.cs.unitn.it/~dz/ |\n| Italy pgp: finger [email protected] |\n+----------------------------------------------------------------------+", "msg_date": "Mon, 3 May 1999 18:06:21 +0200 (MET DST)", "msg_from": "Massimo Dal Zotto <[email protected]>", "msg_from_op": true, "msg_subject": "new patches for 6.5.0" }, { "msg_contents": "[Charset ISO-8859-1 unsupported, filtering to ASCII...]\n> Hi,\n> \n> here are some patches for 6.5.0 which I already submitted but have never\n> been applied. 
The patches are in the .tar.gz attachment at the end:\n> \n> varchar-array.patch this patch adds support for arrays of bpchar() and\n> varchar(), which where always missing from postgres.\n> \n> \t\t\tThese datatypes can be used to replace the _char4,\n> \t\t\t_char8, etc., which were dropped some time ago.\n> \n> block-size.patch this patch fixes many errors in the parser and other\n> program which happen with very large query statements\n> (> 8K) when using a page size larger than 8192.\n> \n> \t\t\tThis patch is needed if you want to submit queries\n> \t\t\tlarger than 8K. Postgres supports tuples up to 32K\n> \t\t\tbut you can't insert them because you can't submit\n> \t\t\tqueries larger than 8K. My patch fixes this problem.\n> \n> The patch also replaces all the occurrences of `8192'\n> and `1<<13' in the sources with the proper constants\n> defined in include files. You should now never find\n> \t\t\t8192 hardwired in C code, just to make code clearer.\n\nApplied. This was still in my mailbox, waiting. Sorry.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 3 May 1999 15:06:53 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] new patches for 6.5.0" } ]
[ { "msg_contents": "Why both\n\n int pid;\n TransactionId xid;\n\nare used in XIDTAG?\n\nlock.c:\n * normal lock user lock\n *\n * lockmethod 1 2\n * tag.relId rel oid 0\n ^^^^^^^^^^^^^^^^^\nDue to this, user-lock LOCKTAG is always different from\nnormal-lock tag and so XIDTAG.lock is different also.\n\n * tag.ItemPointerData.ip_blkid block id lock id2\n * tag.ItemPointerData.ip_posid tuple offset lock id1\n * xid.pid 0 backend pid\n * xid.xid xid or 0 0\n\nWhy not get rid of XIDTAG.xid and use XIDTAG.pid equal\nto backend pid for both lock methods?\n\nComments?\n\nVadim\n", "msg_date": "Tue, 04 May 1999 00:18:00 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": true, "msg_subject": "XIDTAG ???" }, { "msg_contents": "> Why both\n> \n> int pid;\n> TransactionId xid;\n> \n> are used in XIDTAG?\n> \n> lock.c:\n> * normal lock user lock\n> *\n> * lockmethod 1 2\n> * tag.relId rel oid 0\n> ^^^^^^^^^^^^^^^^^\n> Due to this, user-lock LOCKTAG is always different from\n> normal-lock tag and so XIDTAG.lock is different also.\n> \n> * tag.ItemPointerData.ip_blkid block id lock id2\n> * tag.ItemPointerData.ip_posid tuple offset lock id1\n> * xid.pid 0 backend pid\n> * xid.xid xid or 0 0\n> \n> Why not get rid of XIDTAG.xid and use XIDTAG.pid equal\n> to backend pid for both lock methods?\n\nProbably no reason for the transaction id. I don't remember that being\nused at all.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 3 May 1999 12:27:28 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] XIDTAG ???" }, { "msg_contents": "On Mon, 3 May 1999, Bruce Momjian wrote:\n\n> Probably no reason for the transaction id. 
I don't remember that being\n> used at all.\n\nDo you mean that there is no reason for the xid to exist, as it is not\nused? If so, then may I humbly request that it be left in for another\nsix months in the hopes of using a transaction processing monitor to\ndistribute postgres across multiple machines safely? I'll need the xid\nif and when I start that project, which will be after I finish the\nTPM. 8^)\n\n--\nTodd Graham Lewis Postmaster, MindSpring Enterprises\[email protected] (800) 719-4664, x22804\n\n\"A pint of sweat will save a gallon of blood.\" -- George S. Patton\n\n", "msg_date": "Mon, 3 May 1999 22:32:49 -0400 (EDT)", "msg_from": "Todd Graham Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] XIDTAG ???" }, { "msg_contents": "> On Mon, 3 May 1999, Bruce Momjian wrote:\n> \n> > Probably no reason for the transaction id. I don't remember that being\n> > used at all.\n> \n> Do you mean that there is no reason for the xid to exist, as it is not\n> used? If so, then may I humbly request that it be left in for another\n> six months in the hopes of using a transaction processing monitor to\n> distribute postgres across multiple machines safely? I'll need the xid\n> if and when I start that project, which will be after I finish the\n> TPM. 8^)\n\nNo, I don't recommend removing it, but just not storing it in the lock\nsystem. There is no need for it there.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 4 May 1999 00:41:30 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] XIDTAG ???" }, { "msg_contents": "> \n> > On Mon, 3 May 1999, Bruce Momjian wrote:\n> > \n> > > Probably no reason for the transaction id. 
I don't remember that being\n> > > used at all.\n> > \n> > Do you mean that there is no reason for the xid to exist, as it is not\n> > used? If so, then may I humbly request that it be left in for another\n> > six months in the hopes of using a transaction processing monitor to\n> > distribute postgres across multiple machines safely? I'll need the xid\n> > if and when I start that project, which will be after I finish the\n> > TPM. 8^)\n> \n> No, I don't recommend removing it, but just not storing it in the lock\n> system. There is no need for it there.\n\nI don't see any urgent reason for removing it. For the moment I would leave\nthe code as is. A distributed postgres sounds interesting.\n\n-- \nMassimo Dal Zotto\n\n+----------------------------------------------------------------------+\n| Massimo Dal Zotto email: [email protected] |\n| Via Marconi, 141 phone: ++39-0461534251 |\n| 38057 Pergine Valsugana (TN) www: http://www.cs.unitn.it/~dz/ |\n| Italy pgp: finger [email protected] |\n+----------------------------------------------------------------------+\n", "msg_date": "Tue, 4 May 1999 09:31:35 +0200 (MET DST)", "msg_from": "Massimo Dal Zotto <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] XIDTAG ???" }, { "msg_contents": "Massimo Dal Zotto wrote:\n\n> >\n> > > On Mon, 3 May 1999, Bruce Momjian wrote:\n> > >\n> > > > Probably no reason for the transaction id. I don't remember that being\n> > > > used at all.\n> > >\n> > > Do you mean that there is no reason for the xid to exist, as it is not\n> > > used? If so, then may I humbly request that it be left in for another\n\nIf I understand correctly, you are talking about removing the xid type; if so,\nI want to warn you that xmin is of type xid, and we are using it as\na row-versioning field in psqlodbc.\n\n> > > six months in the hopes of using a transaction processing monitor to\n> > > distribute postgres across multiple machines safely? 
I'll need the xid\n> > > if and when I start that project, which will be after I finish the\n> > > TPM. 8^)\n> >\n> > No, I don't recommend removing it, but just not storing it in the lock\n> > system. There is no need for it there.\n>\n> I don't see any urgent reason for removing it. For the moment I would leave\n> the code as is. A distributed postgres sounds interesting.\n>\n> --\n> Massimo Dal Zotto\n>\n> +----------------------------------------------------------------------+\n> | Massimo Dal Zotto email: [email protected] |\n> | Via Marconi, 141 phone: ++39-0461534251 |\n> | 38057 Pergine Valsugana (TN) www: http://www.cs.unitn.it/~dz/ |\n> | Italy pgp: finger [email protected] |\n> +----------------------------------------------------------------------+\n\n> ______________________________________________________________\n\nPostgreSQL 6.5.0 on i586-pc-linux-gnu, compiled by gcc 2.7.2.3\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nJose'\n\n\n", "msg_date": "Tue, 04 May 1999 14:05:39 +0200", "msg_from": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] XIDTAG ???" }, { "msg_contents": "On Tue, 4 May 1999, Bruce Momjian wrote:\n\n> No, I don't recommend removing it, but just not storing it in the lock\n> system. There is no need for it there.\n\nAhh, sorry I misinterpreted you. Carry on!\n\n--\nTodd Graham Lewis Postmaster, MindSpring Enterprises\[email protected] (800) 719-4664, x22804\n\n\"A pint of sweat will save a gallon of blood.\" -- George S. Patton\n\n", "msg_date": "Tue, 4 May 1999 10:53:39 -0400 (EDT)", "msg_from": "Todd Graham Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] XIDTAG ???" } ]
[ { "msg_contents": "Here is a problem I got while experimenting with the date type:\nwhen I use adate = 'today'::Date explain shows me postgres will use the\nindex, but adate::Date = 'today'::Date (which is the same expression) doesn't\nuse the index. This happens for 6.4.2 and 6.5 cvs.\n\n\tRegards,\n\n\t\tOleg\n\nPS.\n btw, how can I find 'something' older than a month, i.e.\n select * from titles where adate::date > 'today'::Date - '1 month'::timespan;\n\n\n\napod=> vacuum analyze;\nVACUUM\napod=> explain select count(*) from titles where adate::Date = 'today'::Date;\nNOTICE: QUERY PLAN:\n\nAggregate (cost=64.10 size=0 width=0)\n -> Seq Scan on titles (cost=64.10 size=699 width=12)\n\nEXPLAIN\napod=> explain select count(*) from titles where adate = 'today'::Date;\nNOTICE: QUERY PLAN:\n\nAggregate (cost=2.04 size=0 width=0)\n -> Index Scan using idx_adate on titles (cost=2.04 size=1 width=12)\n\nEXPLAIN\napod=> select version();\nversion \n------------------------------------------------------------------\nPostgreSQL 6.4.2 on i586-pc-linux-gnu, compiled by gcc egcs-2.91.5\n(1 row)\n\napod=> \\\\d titles\n\nTable = titles\n+----------------------------------+----------------------------------+-------+\n| Field | Type | Length|\n+----------------------------------+----------------------------------+-------+\n| url | char() | 13 |\n| adate | date | 4 |\n| atitle | text | var |\n+----------------------------------+----------------------------------+-------+\nIndices: idx_adate\n idx_atitle\n idx_url\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Mon, 3 May 1999 21:35:55 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "adate::Date is equiv. 
to adate if adate is type of Date ? " }, { "msg_contents": "> btw, how I can find 'something' older than a month, i.e.\n> select * from titles\n> where adate::date > 'today'::Date - '1 month'::timespan;\n\n select * from titles\n where adate::date < 'today'::Date - '1 month'::timespan;\n\n??\n\n - Tom\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 04 May 1999 02:06:04 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] adate::Date is equiv. to adate if adate is type of Date\n\t?" } ]
[ { "msg_contents": "I believe the insert statement below worked in 6.5 as of early April (before\n4/5). When I pulled new code toward the end of April it stopped working.\n\n> -----Original Message-----\n> From:\tThomas Lockhart [SMTP:[email protected]]\n> Sent:\tMonday, May 03, 1999 11:23 AM\n> To:\tTom Lane\n> Cc:\[email protected]\n> Subject:\tRe: [HACKERS] GROUP BY fixes committed\n> \n> > Also, the problem Michael Davis reported on 4/29 seems to be in the\n> > parser:\n> > insert into si_tmpVerifyAccountBalances select invoiceid+3, memberid, 1,\n> > TotShippingHandling from InvoiceLineDetails where TotShippingHandling <>\n> 0\n> > and InvoiceLinesID <= 100 group by invoiceid+3, memberid,\n> > TotShippingHandling;\n> > ERROR: INSERT has more expressions than target columns\n> > since that error message appears only in the parser. Thomas, did you\n> > change anything recently in parsing of INSERT ... SELECT?\n> \n> Not that I know of. And I'm not sure what you mean by \"recently\". I've\n> looked at the CVS logs but those don't help much because, especially\n> for patches submitted by others, there is a generic description\n> entered into the log which can't possibly describe the fixes in the\n> particular file. *sigh*\n> \n> Anyway, I'd lost the thread. Did Michael say that this is a\n> recently-introduced problem?\n> \n> - Tom\n> \n> -- \n> Thomas Lockhart\t\t\t\[email protected]\n> South Pasadena, California\n", "msg_date": "Mon, 3 May 1999 12:57:01 -0500 ", "msg_from": "Michael J Davis <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] GROUP BY fixes committed" } ]
[ { "msg_contents": "hi, i'm using access97 linked to postgres(6.4.2) tables through the new \nv.6.4 odbc. i can open a form, it shows me data for an initial record, and \nthen bombs. here is the message in the log file -- i can't figure out why \nit is bombing. does anyone have a clue?? do those \"-\" or \"/\" in various \n\"vinvnum\" fields cause problems?? it shows valid data first, waits for a \nsecond, and then bombs!\n\nTIA, jt\n\nconn=50745568, query='declare SQL_CUR0308FC60 cursor for SELECT \n\"vnum\",\"vinvnum\",\"vinvrecdt\",\"vinvduedt\",\"vinvamt\",\"glnum\",\"checkdt\",\"ch \necknum\",\"ponum\",'#S_C_H#' FROM \"ap\" WHERE \"vnum\" = 20000 AND \"vinvnum\" = \n'13639 VOID' OR \"vnum\" = 20000 AND \"vinvnum\" = '138577-Void' OR \"vnum\" = \n20000 AND \"vinvnum\" = '142185-SHOP' OR \"vnum\" = 20000 AND \"vinvnum\" = \n'2Brother Credit 1/99' OR \"vnum\" = 20000 AND \"vinvnum\" = '37003 OFO-Chairs' \nOR \"vnum\" = 20000 AND \"vinvnum\" = 'Aberdene, SD' OR \"vnum\" = 20000 AND \n\"vinvnum\" = 'Adj for Nov' OR \"vnum\" = 20000 AND \"vinvnum\" = 'Adj for Oct \n97' OR \"vnum\" = 20000 AND \"vinvnum\" = 'Agson 132289' OR \"vnum\" = 20000 AND \n\"vinvnum\" = 'Air Trans''\nERROR from backend during send_query: 'ERROR: Unable to find an ordering \noperator '<' for type unknown.\n\tUse an explicit ordering operator or modify the query.'\nconn=50745568, query='ABORT'\nSTATEMENT ERROR: func=SC_execute, desc='', errnum=1, errmsg='Error while \nexecuting the query'\n \n -------------------------------------------------------- \n----\n hdbc=50745568, stmt=50920544, result=0\n manual_result=0, prepare=1, internal=0\n bindings=0, bindings_allocated=0\n parameters=50911028, parameters_allocated=20\n statement_type=0, statement='SELECT \n\"vnum\",\"vinvnum\",\"vinvrecdt\",\"vinvduedt\",\"vinvamt\",\"glnum\",\"checkdt\",\"ch \necknum\",\"ponum\",'#S_C_H#' FROM \"ap\" WHERE \"vnum\" = ? AND \"vinvnum\" = ? OR \n\"vnum\" = ? AND \"vinvnum\" = ? OR \"vnum\" = ? 
AND \"vinvnum\" = ? OR \"vnum\" = ? \nAND \"vinvnum\" = ? OR \"vnum\" = ? AND \"vinvnum\" = ? OR \"vnum\" = ? AND \n\"vinvnum\" = ? OR \"vnum\" = ? AND \"vinvnum\" = ? OR \"vnum\" = ? AND \"vinvnum\" = \n? OR \"vnum\" = ? AND \"vinvnum\" = ? OR \"vnum\" = ? AND \"vinvnum\" = ?'\n stmt_with_params='declare SQL_CUR0308FC60 cursor for \nSELECT \n\"vnum\",\"vinvnum\",\"vinvrecdt\",\"vinvduedt\",\"vinvamt\",\"glnum\",\"checkdt\",\"ch \necknum\",\"ponum\",'#S_C_H#' FROM \"ap\" WHERE \"vnum\" = 20000 AND \"vinvnum\" = \n'13639 VOID' OR \"vnum\" = 20000 AND \"vinvnum\" = '138577-Void' OR \"vnum\" = \n20000 AND \"vinvnum\" = '142185-SHOP' OR \"vnum\" = 20000 AND \"vinvnum\" = \n'2Brother Credit 1/99' OR \"vnum\" = 20000 AND \"vinvnum\" = '37003 OFO-Chairs' \nOR \"vnum\" = 20000 AND \"vinvnum\" = 'Aberdene, SD' OR \"vnum\" = 20000 AND \n\"vinvnum\" = 'Adj for Nov' OR \"vnum\" = 20000 AND \"vinvnum\" = 'Adj for Oct \n97' OR \"vnum\" = 20000 AND \"vinvnum\" = 'Agson 132289' OR \"vnum\" = 20000 AND \n\"vinvnum\" = 'Air Trans''\n data_at_exec=-1, current_exec_param=-1, put_data=0\n currTuple=-1, current_col=-1, lobj_fd=-1\n maxRows=0, rowset_size=1, keyset_size=0, cursor_type=0, \nscroll_concurrency=1\n cursor_name='SQL_CUR0308FC60'\n ----------------QResult Info \n-------------------------------\nCONN ERROR: func=SC_execute, desc='', errnum=110, errmsg='ERROR: Unable to \nfind an ordering operator '<' for type unknown.\n\tUse an explicit ordering operator or modify the query.'\n ------------------------------------------------------------\n\n", "msg_date": "Mon, 3 May 1999 14:03:19 -0400", "msg_from": "JT Kirkpatrick <[email protected]>", "msg_from_op": true, "msg_subject": "error message" }, { "msg_contents": "JT Kirkpatrick ha scritto:\n\n> hi, i'm using access97 linked to postgres(6.4.2) tables through the new\n> v.6.4 odbc. i can open a form, it shows me data for an initial record, and\n> then bombs. 
here is the message in the log file -- i can't figure out why\n> it is bombing. does anyone have a clue?? do those \"-\" or \"/\" in various\n> \"vinvnum\" fields cause problems?? it shows valid data first, waits for a\n> second, and then bombs!\n>\n\nI had a similar error when I tried to order retrieved data by a field not in\nthe table or a calculated field.\nSeems that Access request an order by a field with an unknown type.\nI can emulate a similar message as:\n\nselect 'AAAAAA' union select 'ZZZZZZ' order by 1 asc;\nERROR: Unable to identify a binary operator '>' for types unknown and unknown\n\nselect 'aaaaaa' union select 'zzzzzz' order by 1;\nERROR: Unable to identify a binary operator '<' for types unknown and unknown\n\nMay be we need a default for UNKNOWN types (what do you think Thomas, if we\nmake unknown type = text type?)\n\nAny way. Try these functions:\n\ncreate function unknown_lt(unknown,unknown) returns bool as\n'declare\n i1 text;\n i2 text;\nbegin\n i1:= $1;\n i2:= $2;\n return (i1 < i2);\nend; ' language 'plpgsql';\nCREATE\n\ncreate operator < (\n leftarg=unknown,\n rightarg=unknown,\n procedure=unknown_lt,\n commutator='<',\n negator='>=',\n restrict=eqsel,\n join=eqjoinsel\n );\nCREATE\n\ncreate function unknown_gt(unknown,unknown) returns bool as\n'declare\n i1 text;\n i2 text;\nbegin\n i1:= $1;\n i2:= $2;\n return (i1 > i2);\nend; ' language 'plpgsql';\nCREATE\ncreate operator > (\n leftarg=unknown,\n rightarg=unknown,\n procedure=unknown_gt,\n commutator='>',\n negator='<=',\n restrict=eqsel,\n join=eqjoinsel\n );\nCREATE\n\nselect 'AAAAAA' union select 'ZZZZZZ' order by 1 asc;\n?column?\n--------\nAAAAAA\nZZZZZZ\n(2 rows)\n\nselect 'aaaaaa' union select 'zzzzzz' order by 1 desc;\n?column?\n--------\nzzzzzz\naaaaaa\n(2 rows)\n\nEOF\n\n______________________________________________________________\nPostgreSQL 6.5.0 on i586-pc-linux-gnu, compiled by gcc 2.7.2.3\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nJose'\n\n\n", 
"msg_date": "Tue, 04 May 1999 14:41:55 +0200", "msg_from": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] error message" } ]
[ { "msg_contents": "Today I ran http_load to do some benchmarking of my Web-DB application\nand found that under high load (it was about 18 postgres running ) postmaster failed.\nRestarting postmaster produces error:\n\nIpcMemoryCreate: shmget failed (Identifier removed) key=5432010, size=24588, permission=700\nIpcMemoryIdGet: shmget failed (Identifier removed) key=5432010, size=24588, permission=0\nIpcMemoryAttach: shmat failed (Invalid argument) id=-2\nFATAL 1: AttachSLockMemory: could not attach segment\n\nI checked shared memory:\n23:27[zeus]:~>ipcs -a\n\n------ Shared Memory Segments --------\nshmid owner perms bytes nattch status \n10496 postgres 700 24588 5 dest \n10497 postgres 600 8852184 5 dest \n10498 postgres 600 96804 5 dest \n\n------ Semaphore Arrays --------\nsemid owner perms nsems status \n\n------ Message Queues --------\nmsqid owner perms used-bytes messages \n\nDoes this error mean I need to increase shared memory ?\nMy setup: Linux 2.0.36, Dual PPRO, 256 Mb RAM\nPostgres 6.4.2\n\n\tRegards,\n\n\t\tOleg\n\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Mon, 3 May 1999 23:35:14 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "posmaster failed under high load" }, { "msg_contents": "> Today I ran http_load to do some benchmarking of my Web-DB application\n> and found that under high load (it was about 18 postgres running ) postmaster failed.\n> Restarting postmaster produces error:\n> \n> IpcMemoryCreate: shmget failed (Identifier removed) key=5432010, size=24588, permission=700\n> IpcMemoryIdGet: shmget failed (Identifier removed) key=5432010, size=24588, permission=0\n> IpcMemoryAttach: shmat failed (Invalid argument) id=-2\n> FATAL 1: 
AttachSLockMemory: could not attach segment\n> \n> I checked shared memory:\n> 23:27[zeus]:~>ipcs -a\n> \n> ------ Shared Memory Segments --------\n> shmid owner perms bytes nattch status \n> 10496 postgres 700 24588 5 dest \n> 10497 postgres 600 8852184 5 dest \n> 10498 postgres 600 96804 5 dest \n> \n> ------ Semaphore Arrays --------\n> semid owner perms nsems status \n> \n> ------ Message Queues --------\n> msqid owner perms used-bytes messages \n> \n> Does this error means I need to increase shared memory ?\n> My setup: Linux 2.0.36, Dual PPRO, 256 Mb RAM\n> Postgres 6.4.2\n\nI don't think so unless you increased the shared buffer size using -B\noption. Stock 6.4.2 is very buggy with the shared memory\nusage. Probably it's the cause. Try Tom Lane's fix or 6.5b. I have\ntested 6.5b with 128 backends running and it seems very stable.\n\nAnother possibility is you don't have enough file descriptors.\nWhat do you get by:\n$ cat /proc/sys/kernel/file-max\n1024 or so is not enough. You could increase it by:\necho 4096 > /proc/sys/kernel/file-max\nDecreasing the usage of file descriptors per backend is also a good idea.\ntry:\nulimit -n 20\nbefore starting postmaster.\n---\nTatsuo Ishii\n", "msg_date": "Tue, 04 May 1999 14:05:33 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] posmaster failed under high load " }, { "msg_contents": "On Tue, 4 May 1999, Tatsuo Ishii wrote:\n\n> Date: Tue, 04 May 1999 14:05:33 +0900\n> From: Tatsuo Ishii <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: [email protected]\n> Subject: Re: [HACKERS] posmaster failed under high load \n> \n> > Today I run http_load to do some benchmark of my Web-DB application\n> > and found that under high load (it was about 18 postgres running ) postsmaster failed.\n> > Restarting postmaster produces error:\n> > \n> > IpcMemoryCreate: shmget failed (Identifier removed) key=5432010, size=24588, permission=700\n> > IpcMemoryIdGet: 
shmget failed (Identifier removed) key=5432010, size=24588, permission=0\n> > IpcMemoryAttach: shmat failed (Invalid argument) id=-2\n> > FATAL 1: AttachSLockMemory: could not attach segment\n> > \n> > I checked shared memory:\n> > 23:27[zeus]:~>ipcs -a\n> > \n> > ------ Shared Memory Segments --------\n> > shmid owner perms bytes nattch status \n> > 10496 postgres 700 24588 5 dest \n> > 10497 postgres 600 8852184 5 dest \n> > 10498 postgres 600 96804 5 dest \n> > \n> > ------ Semaphore Arrays --------\n> > semid owner perms nsems status \n> > \n> > ------ Message Queues --------\n> > msqid owner perms used-bytes messages \n> > \n> > Does this error means I need to increase shared memory ?\n> > My setup: Linux 2.0.36, Dual PPRO, 256 Mb RAM\n> > Postgres 6.4.2\n> \n> I don't think so unless you increased the shared buffer size using -B\n> option. Stock 6.4.2 is very buggy with the shared memory\n> usage. Probably it's the cause. Try Tom Lane's fix or 6.5b. I have\n> tested 6.5b with 128 backends running and it seems very stable.\n\nYes, I used 6.4.2 + LIMIT patch, I'll try 6.5 from cvs\nI run postmaster with -B 1024 option - is this too much ?\n\n> \n> Another possibility is you don't have enough file descriptors.\n> What do you get by:\n> $ cat /proc/sys/kernel/file-max\n> 1024 or so is not enough. You could increase it by:\n> echo 4096 > /proc/sys/kernel/file-max\n> Decreasing the usage of file descriptors per backend is also a good idea.\n> try:\n> ulimit -n 20\n> before starting postmaster.\n\nThanks a lot, I got several times a problem with file descriptors,\nit looks like every backend opens abot 90 files. I'll try your \nhints. 
Why not add your experience how to work with postgres under high\nload to Linux specific FAQ ?\n\n\tRegards,\n\t\tOleg\n> ---\n> Tatsuo Ishii\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Tue, 4 May 1999 10:10:55 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] posmaster failed under high load " }, { "msg_contents": "> Thanks a lot, I got several times a problem with file descriptors,\n> it looks like every backend opens abot 90 files. I'll try your\n> hints. Why not add your experience how to work with postgres under \n> high load to Linux specific FAQ ?\n\nGood idea. Do you have time to add this topic yourself? Actually, the\ngeneral problem is common to all platforms, so we might want to write\nup something for the Admin Guide too.\n\n - Tom\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 04 May 1999 13:03:32 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] posmaster failed under high load" }, { "msg_contents": "> > I don't think so unless you increased the shared buffer size using -B\n> > option. Stock 6.4.2 is very buggy with the shared memory\n> > usage. Probably it's the cause. Try Tom Lane's fix or 6.5b. I have\n> > tested 6.5b with 128 backends running and it seems very stable.\n> \n> Yes, I used 6.4.2 + LIMIT patch, I'll try 6.5 from cvs\n\nYou need Tom Lane's share mem fix patch if you use 6.4.2. 6.5 has the\nfix.\n\n> I run postmaster with -B 1024 option - is this too much ?\n\nNo. 
-B 1024 means 8MB shared mem that should be ok on x86/Linux (I\nthink most x86 based Linux allow 32MB shared mem).\n\n> Thanks a lot, I got several times a problem with file descriptors,\n> it looks like every backend opens abot 90 files. I'll try your \n> hints.\n\nBut be carefull lower # of file descriptors per backend might cause\nlower performance because of the file opening overhead. So you should\nincrease the file table entries in the system first.\n\n>Why not add your experience how to work with postgres under high\n> load to Linux specific FAQ ?\n\nI'm not good at English, that is the reason:-)\n\nBTW, FreeBSD box has more serious problems than Linux since the\ndefault kernel has lower limit of file descriptors (~700). This should\nbe noted somewhere in the docs too.\n---\nTatsuo Ishii\n", "msg_date": "Wed, 05 May 1999 00:02:44 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] posmaster failed under high load " }, { "msg_contents": "Interesting, I just tried to load my home machine and got very\nweird result:\n 76 ? S 0:00 postmaster -i -B 1024 -S -D/usr/local/pgsql/data/ -o -Fe \n 218 ? SW 0:02 (postmaster)\n 219 ? SW 0:02 (postmaster)\n 220 ? SW 0:01 (postmaster)\n 222 ? SW 0:02 (postmaster)\n 241 ? SW 0:02 (postmaster)\n 242 ? SW 0:02 (postmaster)\n 252 ? SW 0:01 (postmaster)\n 263 ? SW 0:01 (postmaster)\n 372 ? SW 0:00 (postmaster)\n 377 ? SW 0:00 (postmaster)\n 378 ? SW 0:00 (postmaster)\n 379 ? SW 0:00 (postmaster)\n 383 ? SW 0:00 (postmaster)\n 387 ? SW 0:00 (postmaster)\n 388 ? SW 0:00 (postmaster)\n 406 ? 
SW 0:00 (postmaster)\n\nSystem is still in working conditions and psql could connects !\nPostmasters seems dies with time, but after 15 minutes I still see\n7 postmasters.\nThis is my scenario and setup:\nP166, 64Mb RAM,\nLinux 2.2.7,PostgreSQL 6.4.2 on i586-pc-linux-gnu, compiled by gcc egcs-2.91.5\n( This is a stock 6.4.2 + LIMIT feature patch, Tatsuo suggests to use\nTom's shared memory patches but I didn't find them )\n\nI have apache 1.3.6+mod_perl 1.19, Apache::DBI to open persistent\nconnection to database and Mason (http://www.masonhq.com) to\nproduce dynamical document which I retrieve using \nhttp_load from http://www.acme.com/jef/ which I found is quite\nuseful for testing because it doesn't big down a client machine\n(in my case client and server are on the same machine)\nSo, I run \n./http_load -verbose -checksum -rate 25 -fetches 40 TEST,\nwhere TEST is a file with URL to document.\nIf interesting here are benchmarks: \n40 fetches, 117 max parallel, 403192 bytes, in 9 seconds\n10079.8 mean bytes/connection\n4.44444 fetches/sec, 44799.1 bytes/sec\n38 bad checksums\n\nI'm going to test 6.5 cvs.\n\n\tRegards,\n\n\t\tOleg\n\n\nOn Wed, 5 May 1999, Tatsuo Ishii wrote:\n\n> Date: Wed, 05 May 1999 00:02:44 +0900\n> From: Tatsuo Ishii <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: Tatsuo Ishii <[email protected]>, [email protected]\n> Subject: Re: [HACKERS] posmaster failed under high load \n> \n> > > I don't think so unless you increased the shared buffer size using -B\n> > > option. Stock 6.4.2 is very buggy with the shared memory\n> > > usage. Probably it's the cause. Try Tom Lane's fix or 6.5b. I have\n> > > tested 6.5b with 128 backends running and it seems very stable.\n> > \n> > Yes, I used 6.4.2 + LIMIT patch, I'll try 6.5 from cvs\n> \n> You need Tom Lane's share mem fix patch if you use 6.4.2. 6.5 has the\n> fix.\n> \n> > I run postmaster with -B 1024 option - is this too much ?\n> \n> No. 
-B 1024 means 8MB shared mem that should be ok on x86/Linux (I\n> think most x86 based Linux allow 32MB shared mem).\n> \n> > Thanks a lot, I got several times a problem with file descriptors,\n> > it looks like every backend opens abot 90 files. I'll try your \n> > hints.\n> \n> But be carefull lower # of file descriptors per backend might cause\n> lower performance because of the file opening overhead. So you should\n> increase the file table entries in the system first.\n> \n> >Why not add your experience how to work with postgres under high\n> > load to Linux specific FAQ ?\n> \n> I'm not good at English, that is the reason:-)\n> \n> BTW, FreeBSD box has more serious problems than Linux since the\n> default kernel has lower limit of file descriptors (~700). This should\n> be noted somewhere in the docs too.\n> ---\n> Tatsuo Ishii\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Tue, 4 May 1999 22:01:25 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] posmaster failed under high load " }, { "msg_contents": "Well,\njust run test with 6.5 cvs and it looks much stable.\nI run ./http_load -rate 20 -verbose -fetches 80 TEST\n(notice, test is much stronger than in previous post) and got results:\n81 fetches, 393 max parallel, 809028 bytes, in 24 seconds\n9988 mean bytes/connection\n3.375 fetches/sec, 33709.5 bytes/sec\n\nMy machine was very-very load during this test - I saw peak\nload about 65, a lot of swapping but test completes and system\nafter 20 minutes of swapping remains usable. I still saw many\npostmasters (not postgres) processes running but after about \n30-40 minutes they gone. 
Actually pstree -a now shows\n \n |-postmaster -i -B 1024 -S -D/usr/local/pgsql/data/ -o -Fe\n | |-(postmaster)\n | `-postmaster \n \nIs this a normal behaviour for postmaster ?\nI thought there must be only one postmaster which forks postgres\nprocesses for every connection. Anyway, system is usable,\nthe postmaster survives and continues working ! 6.5 in this respect is much\nmore stable. I run postmaster with -B 1024 option. I ran this test under\nLinux 2.2.7, so tomorrow I'll test on my production server which \nruns Linux 2.0.36, SMP, Dual PPRO, 256 Mb Ram. As I wrote 6.4.2 fails\nunder high load, so I'll test 6.5 cvs to determine whether the kernel or\nthe postgres version is the critical factor.\n\n\tRegards,\n\n\t\tOleg\n\nOn Wed, 5 May 1999, Tatsuo Ishii wrote:\n\n> Date: Wed, 05 May 1999 00:02:44 +0900\n> From: Tatsuo Ishii <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: Tatsuo Ishii <[email protected]>, [email protected]\n> Subject: Re: [HACKERS] posmaster failed under high load \n> \n> > > I don't think so unless you increased the shared buffer size using -B\n> > > option. Stock 6.4.2 is very buggy with the shared memory\n> > > usage. Probably it's the cause. Try Tom Lane's fix or 6.5b. I have\n> > > tested 6.5b with 128 backends running and it seems very stable.\n> > \n> > Yes, I used 6.4.2 + LIMIT patch, I'll try 6.5 from cvs\n> \n> You need Tom Lane's share mem fix patch if you use 6.4.2. 6.5 has the\n> fix.\n> \n> > I run postmaster with -B 1024 option - is this too much ?\n> \n> No. -B 1024 means 8MB shared mem that should be ok on x86/Linux (I\n> think most x86 based Linux allow 32MB shared mem).\n> \n> > Thanks a lot, I got several times a problem with file descriptors,\n> > it looks like every backend opens abot 90 files. I'll try your \n> > hints.\n> \n> But be carefull lower # of file descriptors per backend might cause\n> lower performance because of the file opening overhead. 
So you should\n> increase the file table entries in the system first.\n> \n> >Why not add your experience of how to work with postgres under high\n> > load to the Linux-specific FAQ ?\n> \n> I'm not good at English, that is the reason:-)\n> \n> BTW, a FreeBSD box has more serious problems than Linux since the\n> default kernel has a lower limit of file descriptors (~700). This should\n> be noted somewhere in the docs too.\n> ---\n> Tatsuo Ishii\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Tue, 4 May 1999 23:24:59 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] posmaster failed under high load " }, { "msg_contents": "> My machine was very heavily loaded during this test - I saw a peak\n> load of about 65 and a lot of swapping, but the test completed and the\n> system remained usable after 20 minutes of swapping. I still saw many\n> postmaster (not postgres) processes running, but after about \n> 30-40 minutes they were gone. Actually pstree -a now shows\n> \n> |-postmaster -i -B 1024 -S -D/usr/local/pgsql/data/ -o -Fe\n> | |-(postmaster)\n> | `-postmaster \n\nps should show the process-listing display change we made. They are postgres\nprocesses, but without the exec() call we used to do, they show up this way\nonly on OS's that don't support ps arg display changes from inside the\nprocess.\n\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 4 May 1999 21:35:56 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] posmaster failed under high load" }, { "msg_contents": "On Tue, 4 May 1999, Bruce Momjian wrote:\n\n> Date: Tue, 4 May 1999 21:35:56 -0400 (EDT)\n> From: Bruce Momjian <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: Tatsuo Ishii <[email protected]>, [email protected]\n> Subject: Re: [HACKERS] posmaster failed under high load\n> \n> > My machine was very-very load during this test - I saw peak\n> > load about 65, a lot of swapping but test completes and system\n> > after 20 minutes of swapping remains usable. I still saw many\n> > postmasters (not postgres) processes running but after about \n> > 30-40 minutes they gone. Actually pstree -a now shows\n> > \n> > |-postmaster -i -B 1024 -S -D/usr/local/pgsql/data/ -o -Fe\n> > | |-(postmaster)\n> > | `-postmaster \n> \n> ps should show our process listing display change. They are postgres\n> processes, but without the exec() call we used to do, it shows this way\n> only on OS's that don't support ps arg display changes from inside the\n> process.\n\nNo, it does on Linux.\n 5159 ? S 0:00 postmaster -i -B 1024 -S -D/usr/local/pgsql/data/ -o -Fe \n 5168 ? S 0:00 /usr/local/pgsql/bin/postgres localhost httpd apod idle \n 5169 ? S 0:00 /usr/local/pgsql/bin/postgres localhost httpd apod idle \n 5170 ? S 0:00 /usr/local/pgsql/bin/postgres localhost httpd apod idle \n 5171 ? S 0:00 /usr/local/pgsql/bin/postgres localhost httpd apod idle \n\nThat's why I noticed 10 or more (postmaster) processes, which eventually\ngone after 30-40 minutes.\n\n\tOleg\n\n> \n> \n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Wed, 5 May 1999 07:56:58 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] posmaster failed under high load" }, { "msg_contents": "Oleg Bartunov <[email protected]> writes:\n> That's why I noticed 10 or more (postmaster) processes, which eventually\n> gone after 30-40 minutes.\n\nCould those be new backends that have been forked off the main\npostmaster, but haven't yet gotten around to changing their ps info?\nI'm not sure what would block a new backend for many minutes before\nit did that, however. Can you attach to one of these processes with\na debugger and get a backtrace to show what it's doing?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 05 May 1999 09:33:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] posmaster failed under high load " }, { "msg_contents": "On Wed, 5 May 1999, Tom Lane wrote:\n\n> Date: Wed, 05 May 1999 09:33:14 -0400\n> From: Tom Lane <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: [email protected]\n> Subject: Re: [HACKERS] posmaster failed under high load \n> \n> Oleg Bartunov <[email protected]> writes:\n> > That's why I noticed 10 or more (postmaster) processes, which eventually\n> > gone after 30-40 minutes.\n> \n> Could those be new backends that have been forked off the main\n> postmaster, but haven't yet gotten around to changing their ps info?\n> I'm not sure what would block a new backend for many minutes before\n> it did that, however. 
Can you attach to one of these processes with\n> a debugger and get a backtrace to show what it's doing?\n\nWell,\n\nhttp_load -r 40 -f 240 MASON-DBI\nresults:\n244 fetches, 1020 max parallel, 272060 bytes, in 52 seconds\n1115 mean bytes/connection\n4.69231 fetches/sec, 5231.92 bytes/sec\n\nBelow some output from ps and attached backtrace of one postmaster\nprocess.\n\n\tRegards,\n\n\t\tOleg\n\nPS.\nWill see what happens with those (postmasters)\n\n18:08[om]:~/app/www/http_load>w\n 6:09pm up 1:44, 3 users, load average: 44.92, 18.56, 7.08\n\n18:08[om]:/usr/local/etc/httpd/conf>psg post\n 76 ? S 0:00 postmaster -i -B 1024 -S -D/usr/local/pgsql/data/ -o -Fe \n 602 ? SW 0:00 (postmaster)\n 634 ? D 0:00 /usr/local/pgsql/bin/postgres localhost httpd apod idle \n 644 ? SW 0:00 (postmaster)\n 646 ? SW 0:00 (postmaster)\n 648 ? SW 0:00 (postmaster)\n 650 ? SW 0:00 (postmaster)\n 651 ? SW 0:00 (postmaster)\n 652 ? SW 0:00 (postmaster)\n 653 ? SW 0:00 (postmaster)\n 661 ? SW 0:00 (postmaster)\n 662 ? D 0:00 (postmaster)\n 663 ? SW 0:00 (postmaster)\n 664 ? SW 0:00 (postmaster)\n 665 ? D 0:00 (postmaster)\n 666 ? D 0:00 /usr/local/pgsql/bin/postgres localhost httpd apod idle \n18:08[om]:/usr/local/etc/httpd/conf>psg post\n 76 ? S 0:00 postmaster -i -B 1024 -S -D/usr/local/pgsql/data/ -o -Fe \n 651 ? SW 0:00 (postmaster)\n 693 ? SW 0:00 (postmaster)\n 694 ? SW 0:00 (postmaster)\n 698 ? S 0:00 /usr/local/pgsql/bin/postgres localhost httpd apod idle \n 699 ? SW 0:00 (postmaster)\n 700 ? S 0:00 /usr/local/pgsql/bin/postgres localhost httpd apod idle \n 701 ? SW 0:00 (postmaster)\n 702 ? SW 0:00 (postmaster)\n18:31[om]:/usr/local/etc/httpd/conf>psg post\n 76 ? S 0:00 postmaster -i -B 1024 -S -D/usr/local/pgsql/data/ -o -Fe \n 651 ? SW 0:00 (postmaster)\n 693 ? SW 0:00 (postmaster)\n 694 ? SW 0:00 (postmaster)\n 698 ? S 0:00 /usr/local/pgsql/bin/postgres localhost httpd apod idle \n 699 ? SW 0:00 (postmaster)\n 700 ? 
S 0:00 /usr/local/pgsql/bin/postgres localhost httpd apod idle \n 701 ? SW 0:00 (postmaster)\n 702 ? SW 0:00 (postmaster)\n18:34[om]:/usr/local/etc/httpd/conf>\n\nom:~$ gdb 702\nGDB is free software and you are welcome to distribute copies of it\n under certain conditions; type \"show copying\" to see the conditions.\nThere is absolutely no warranty for GDB; type \"show warranty\" for details.\nGDB 4.16 (i486-slackware-linux), \nCopyright 1996 Free Software Foundation, Inc...\n\n702: No such file or directory.\n\n(gdb) q\nom:~$ gdb /usr/local/pgsql/bin/postmaster 702\nGDB is free software and you are welcome to distribute copies of it\n under certain conditions; type \"show copying\" to see the conditions.\nThere is absolutely no warranty for GDB; type \"show warranty\" for details.\nGDB 4.16 (i486-slackware-linux), \nCopyright 1996 Free Software Foundation, Inc...\n\n/u/postgres/702: No such file or directory.\nAttaching to program /usr/local/pgsql/bin/postmaster', process 702\nReading symbols from /lib/libdl.so.1...done.\nReading symbols from /lib/libm.so.5...done.\nReading symbols from /lib/libtermcap.so.2...done.\nReading symbols from /lib/libc.so.5...done.\nReading symbols from /lib/ld-linux.so.1...done.\n0x40081464 in recv (sockfd=0, buffer=0xbfffa394, len=3221224256, \n flags=135555648)\n(gdb) bt\n#0 0x40081464 in recv (sockfd=0, buffer=0xbfffa394, len=3221224256, \n flags=135555648)\n#1 0x400a58e8 in __DTOR_END__ ()\n#2 0x80a2585 in pq_getbytes ()\n#3 0x80e1ed8 in SocketBackend ()\n#4 0x80e1f66 in ReadCommand ()\n#5 0x80e350c in PostgresMain ()\n#6 0x80ccf2a in DoBackend ()\n#7 0x80cca5b in BackendStartup ()\n#8 0x80cc1d7 in ServerLoop ()\n#9 0x80cbd63 in PostmasterMain ()\n#10 0x80a3059 in main ()\n#11 0x806121e in _start ()\n(gdb) \n\n\n> \n> \t\t\tregards, tom lane\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University 
(Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Wed, 5 May 1999 18:37:29 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] posmaster failed under high load " }, { "msg_contents": "Oleg Bartunov <[email protected]> writes:\n>> I'm not sure what would block a new backend for many minutes before\n>> it did that, however. Can you attach to one of these processes with\n>> a debugger and get a backtrace to show what it's doing?\n\n> Below some output from ps and attached backtrace of one postmaster\n> process.\n\nHmm, that backend is quite obviously done with initialization and\nwaiting for a client command. So why doesn't it show up as\n\"postgres ... idle\" in ps?\n\nI wonder whether we have the ps-info-setting operation in the wrong\nplace, ie at the bottom of the loop instead of the top, so that a\nbackend that hasn't yet received its first client command will never\nhave set the ps data. Will take a look.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 05 May 1999 11:07:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] posmaster failed under high load " }, { "msg_contents": "> Oleg Bartunov <[email protected]> writes:\n> > That's why I noticed 10 or more (postmaster) processes, which eventually\n> > gone after 30-40 minutes.\n> \n> Could those be new backends that have been forked off the main\n> postmaster, but haven't yet gotten around to changing their ps info?\n> I'm not sure what would block a new backend for many minutes before\n> it did that, however. 
Can you attach to one of these processes with\n> a debugger and get a backtrace to show what it's doing?\n\nI can't imagine what they would be waiting for, but it seems like a good\nguess.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 5 May 1999 11:17:27 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] posmaster failed under high load" }, { "msg_contents": "On Wed, 5 May 1999, Bruce Momjian wrote:\n\n> Date: Wed, 5 May 1999 11:17:27 -0400 (EDT)\n> From: Bruce Momjian <[email protected]>\n> To: Tom Lane <[email protected]>\n> Cc: Oleg Bartunov <[email protected]>, [email protected]\n> Subject: Re: [HACKERS] posmaster failed under high load\n> \n> > Oleg Bartunov <[email protected]> writes:\n> > > That's why I noticed 10 or more (postmaster) processes, which eventually\n> > > gone after 30-40 minutes.\n> > \n> > Could those be new backends that have been forked off the main\n> > postmaster, but haven't yet gotten around to changing their ps info?\n> > I'm not sure what would block a new backend for many minutes before\n> > it did that, however. Can you attach to one of these processes with\n> > a debugger and get a backtrace to show what it's doing?\n> \n> I can't imagine what they would be waiting for, but it seems like a good\n> guess.\n> \n\nAfter more than 1 hour postmaster processes still running \n 76 ? S 0:00 postmaster -i -B 1024 -S -D/usr/local/pgsql/data/ -o -Fe \n 651 ? SW 0:00 (postmaster)\n 693 ? SW 0:00 (postmaster)\n 694 ? SW 0:00 (postmaster)\n 698 ? S 0:00 /usr/local/pgsql/bin/postgres localhost httpd apod idle \n 699 ? SW 0:00 (postmaster)\n 700 ? S 0:00 /usr/local/pgsql/bin/postgres localhost httpd apod idle \n 701 ? SW 0:00 (postmaster)\n 702 ? 
S 0:00 /usr/local/pgsql/bin/postgres localhost httpd apod idle \n18:48[om]:/usr/local/etc/httpd/conf>psg post\n\n 76 ? S 0:00 postmaster -i -B 1024 -S -D/usr/local/pgsql/data/ -o -Fe \n 651 ? SW 0:00 (postmaster)\n 693 ? SW 0:00 (postmaster)\n 694 ? SW 0:00 (postmaster)\n 698 ? S 0:00 /usr/local/pgsql/bin/postgres localhost httpd apod idle \n 699 ? SW 0:00 (postmaster)\n 700 ? S 0:00 /usr/local/pgsql/bin/postgres localhost httpd apod idle \n 701 ? S 0:00 /usr/local/pgsql/bin/postgres localhost httpd apod idle \n 702 ? S 0:00 /usr/local/pgsql/bin/postgres localhost httpd apod idle \n19:30[om]:/usr/local/etc/httpd/conf>\n\nIt's interesting that the process with pid 701 migrates from \n(postmaster) to postgres with normal ps output !\nIt seems postgres lives a life of its own :-)\n\n\n\tregards,\n\t\tOleg\n\n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Wed, 5 May 1999 19:33:46 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] posmaster failed under high load" }, { "msg_contents": "Oleg Bartunov <[email protected]> writes:\n> It's interesting that the process with pid 701 migrates from \n> (postmaster) to postgres with normal ps output !\n\nYes, that's pretty strong evidence in favor of my theory (that these\nprocesses are just new backends that haven't received a command yet).\n\nI'm surprised that it takes so long for your test clients to issue their\nfirst commands --- is the test load *that* high, or do you have a\ndeliberate delay in 
there?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 05 May 1999 19:09:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] posmaster failed under high load " }, { "msg_contents": "I wrote:\n> Oleg Bartunov <[email protected]> writes:\n>> It's interesting, that process with pid 701 migrates from \n>> (postmaster) to postgres with normal ps output !\n\n> Yes, that's pretty strong evidence in favor of my theory (that these\n> processes are just new backends that haven't received a command yet).\n\nNope, that theory is all wet --- the backend definitely does \nPS_SET_STATUS(\"idle\") before it waits for a query. Something is\n*really* peculiar here, since your backtrace shows that the backend\nhas reached the point of waiting for client input. It is not possible\nto get there without having done PS_SET_STATUS. So why does the process\nstill show up as \"(postmaster)\" in ps? Something is flaky about your\nsystem's support of ps status setting, I think.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 05 May 1999 20:25:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] posmaster failed under high load " }, { "msg_contents": "On Wed, 5 May 1999, Tom Lane wrote:\n\n> Nope, that theory is all wet --- the backend definitely does \n> PS_SET_STATUS(\"idle\") before it waits for a query. Something is\n> *really* peculiar here, since your backtrace shows that the backend\n> has reached the point of waiting for client input. It is not possible\n> to get there without having done PS_SET_STATUS. So why does the process\n> still show up as \"(postmaster)\" in ps? Something is flaky about your\n> system's support of ps status setting, I think.\n\nYou never altered the task_struct, and so it's still 'postmaster' there.\nNote the W... 
the process is paged out, so the argv is not available!\n\nTaral\n\n", "msg_date": "Wed, 5 May 1999 20:50:17 -0500 (CDT)", "msg_from": "Taral <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] posmaster failed under high load " }, { "msg_contents": "On Wed, 5 May 1999, Taral wrote:\n\n> Date: Wed, 5 May 1999 20:50:17 -0500 (CDT)\n> From: Taral <[email protected]>\n> To: Tom Lane <[email protected]>\n> Cc: Oleg Bartunov <[email protected]>, [email protected]\n> Subject: Re: [HACKERS] posmaster failed under high load \n> \n> On Wed, 5 May 1999, Tom Lane wrote:\n> \n> > Nope, that theory is all wet --- the backend definitely does \n> > PS_SET_STATUS(\"idle\") before it waits for a query. Something is\n> > *really* peculiar here, since your backtrace shows that the backend\n> > has reached the point of waiting for client input. It is not possible\n> > to get there without having done PS_SET_STATUS. So why does the process\n> > still show up as \"(postmaster)\" in ps? Something is flaky about your\n> > system's support of ps status setting, I think.\n> \n> You never altered the task_struct, and so it's still 'postmaster' there.\n> Note the W... 
the process is paged out, so the argv is not available!\n\n\nThe system was under very high load; the peak load was about 69 \n(actually, it could be higher, I just wasn't able to enter a command :-)\nClient (http_load from http://www.acme.com) tests the checksum for every\nconnection, so definitely a command was issued and the backend returned a result.\n\n\tOleg\n\n> \n> Taral\n> \n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Thu, 6 May 1999 07:53:16 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] posmaster failed under high load " }, { "msg_contents": "> On Wed, 5 May 1999, Tom Lane wrote:\n> \n> > Nope, that theory is all wet --- the backend definitely does \n> > PS_SET_STATUS(\"idle\") before it waits for a query. Something is\n> > *really* peculiar here, since your backtrace shows that the backend\n> > has reached the point of waiting for client input. It is not possible\n> > to get there without having done PS_SET_STATUS. So why does the process\n> > still show up as \"(postmaster)\" in ps? Something is flaky about your\n> > system's support of ps status setting, I think.\n> \n> You never altered the task_struct, and so it's still 'postmaster' there.\n> Note the W... the process is paged out, so the argv is not available!\n\nYes, I remember now. To do ps-args you need to read the process address\nspace. If it is paged out, ps does not bring in the pages just to read\nthe args. This is probably as expected. 
If someone wants to add a\nlinux-specific fix for this, I guess you could, though I am not sure it\nis worth it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 6 May 1999 01:20:13 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] posmaster failed under high load" }, { "msg_contents": "On Thu, 6 May 1999, Bruce Momjian wrote:\n\n> Date: Thu, 6 May 1999 01:20:13 -0400 (EDT)\n> From: Bruce Momjian <[email protected]>\n> To: Taral <[email protected]>\n> Cc: Tom Lane <[email protected]>, Oleg Bartunov <[email protected]>,\n> [email protected]\n> Subject: Re: [HACKERS] posmaster failed under high load\n> \n> > On Wed, 5 May 1999, Tom Lane wrote:\n> > \n> > > Nope, that theory is all wet --- the backend definitely does \n> > > PS_SET_STATUS(\"idle\") before it waits for a query. Something is\n> > > *really* peculiar here, since your backtrace shows that the backend\n> > > has reached the point of waiting for client input. It is not possible\n> > > to get there without having done PS_SET_STATUS. So why does the process\n> > > still show up as \"(postmaster)\" in ps? Something is flaky about your\n> > > system's support of ps status setting, I think.\n> > \n> > You never altered the task_struct, and so it's still 'postmaster' there.\n> > Note the W... the process is paged out, so the argv is not available!\n> \n> Yes, I remember now. To do ps-args you need to read the process address\n> space. If it is paged out, ps does not bring in the pages just to read\n> the args. This is probably as expected. 
If someone wants to add a\n> linux-specific fix for this, I guess you could, though I am not sure it\n> is worth it.\n> \n\nHow do you explain that the process with PID 701, which is shown in ps output\nas (postmaster), after some time starts to look like a usual postgres?\n\n\tOleg\n\n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Thu, 6 May 1999 09:59:14 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] posmaster failed under high load" }, { "msg_contents": "On Thu, 6 May 1999, Oleg Bartunov wrote:\n\n> How do you explain that the process with PID 701, which is shown in ps output\n> as (postmaster), after some time starts to look like a usual postgres?\n\nBecause 'postmaster' is written in the kernel task_struct, whereas the\ntask's argv[] says 'postgres'.\n\nThe only way to get around this is to do an execv(), at which point the\nkernel will recopy 
argv[0].\n\nWe used to do execv(), but stopped doing it for performance reasons.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 6 May 1999 02:36:58 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] posmaster failed under high load" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> The only way to get around this is to do an execv(), at which point the\n>> kernel will recopy argv[0].\n\n> We used to do execv(), but stopped doing it for performance reasons.\n\nIt's clearly not worth re-introducing the exec call just to make ps\nstatus display work (especially since it's only failing when the backend\nis swapped out). However, I wonder whether there is another answer.\n\nSomething that's been on my to-do list since the ps-status-display code\ngot added is to import \"sendmail\"'s ps-status-display module lock, stock,\nand barrel. Sendmail's code is kinda ugly, but it's been wrung out and\nworks on a wide variety of Unixes. The code we have doesn't ... (it's\nnever worked on my HPUX box, for instance, whereas sendmail does).\n\nI have no idea at the moment whether sendmail knows how to change the\ntask_struct on Linux; but it might.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 May 1999 10:20:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] posmaster failed under high load " }, { "msg_contents": "On Thu, 6 May 1999, Tom Lane wrote:\n> Something that's been on my to-do list since the ps-status-display code\n> got added is to import \"sendmail\"'s ps-status-display module lock, stock,\n> and barrel. Sendmail's code is kinda ugly, but it's been wrung out and\n> works on a wide variety of Unixes. The code we have doesn't ... 
(it's\n> never worked on my HPUX box, for instance, whereas sendmail does).\n\n The code was touched by wu-ftpd authors once. When I thought of stealing\nthe code I found sources in wu-ftpd distribution a little cleaner.\n\n> \t\t\tregards, tom lane\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Thu, 6 May 1999 18:43:39 +0400 (MSD)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] posmaster failed under high load " } ]
[ { "msg_contents": "That's true - but it wasn't until recently that the lo examples had\nBEGIN and END in them.\n\nIt's on my list to get the JDBC examples to setAutoCommit(false).\nHopefully I'll get some time today to do this.\n\nPeter\n\n--\nPeter T Mount, IT Section\[email protected]\nAnything I write here are my own views, and cannot be taken as the\nofficial words of Maidstone Borough Council\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]]\nSent: Sunday, May 02, 1999 6:12 PM\nTo: [email protected]\nCc: postgres\nSubject: Re: [HACKERS] Re: SIGBUS in AllocSetAlloc & jdbc \n\n\nTatsuo Ishii <[email protected]> writes:\n> So far I couldn't find nothing special with the backend by now. Going\n> back to the ImageViewer, I think I found possible problem with it. In\n> my understanding, every lo call should be in single transaction block.\n\n> But ImageViwer seems does not give any \"begin\" or \"end\" SQL commands.\n> I made a small modifications(see below patches) to the ImageViewer and\n> now it starts to work again with 6.5 backend!\n\nHmm. The documentation does say somewhere that LO object handles are\nonly good within a transaction ... so it's amazing this worked reliably\nunder 6.4.x.\n\nIs there any way we could improve the backend's LO functions to defend\nagainst this sort of misuse, rather than blindly accepting a stale\nfilehandle?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 4 May 1999 08:25:04 +0100", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Re: SIGBUS in AllocSetAlloc & jdbc " } ]
[ { "msg_contents": "> > > btw, how I can find 'something' older than a month\n> > select * from titles\n> > where adate::date < 'today'::Date - '1 month'::timespan;\n> this problem doesn't works:\n> apod=> select * from titles\n> apod-> where adate::date < 'today'::Date - '1 month'::timespan;\n> ERROR: There is more than one possible operator '<'\n> for types 'date' and 'datetime'\n> but if I specify Datetime instead of Date it works, but still doesn't\n> use index.\n> apod=> explain select * from titles\n> apod-> where adate::datetime < 'today'::Datetime\n> apod-> - '1 month'::timespan;\n> NOTICE: QUERY PLAN:\n> Seq Scan on titles (cost=64.10 size=466 width=28)\n\nOK, try\n\n select * from titles\n where adate < date('today'::Datetime - '1 month'::timespan);\n\nalthough there may (still) be problems with Postgres recognizing that\nit could use an index when the \"constant\" is an expression.\n\nLet us know what you find out...\n\n - Tom\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 04 May 1999 13:12:07 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] adate::Date is equiv. to adate if adate is type of Date\n\t?" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n> OK, try\n>\n> select * from titles\n> where adate < date('today'::Datetime - '1 month'::timespan);\n>\n> although there may (still) be problems with Postgres recognizing that\n> it could use an index when the \"constant\" is an expression.\n\nI'm afraid I can already predict the answer: the optimizer only knows\nhow to use an index to constrain the scan when it finds a WHERE clause\nlike \"var op constant\" or \"constant op var\". 
What you've got there\nisn't a constant.\n\nThe right solution, of course, is to put in a rewrite phase that does\nconstant-expression folding (probably after any rule-generated changes).\nWe've talked about that before, but it ain't gonna happen for 6.5.\n\nBTW, the original question was why \"where adate::date < 'today'::date\"\nwouldn't work. What the optimizer sees in that case is\n\twhere function(var) < constant\nso it doesn't know how to use an index for that either. Now, if you\nhad a functional index matching the function, it would know what to do.\nBut it'd be pretty silly to keep a separate functional index just to let\nthis work, seeing as how adate is already a date.\n\nIt might be nice if the parser could drop dummy type conversions\ninstead of leaving them as functions in the parse tree... although\ndoing that as part of a general constant-expression folder is probably\na better answer.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 04 May 1999 10:32:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] adate::Date is equiv. to adate if adate is type of Date\n\t?" }, { "msg_contents": "On Tue, 4 May 1999, Thomas Lockhart wrote:\n\n> Date: Tue, 04 May 1999 13:12:07 +0000\n> From: Thomas Lockhart <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: Postgres Hackers List <[email protected]>\n> Subject: Re: [HACKERS] adate::Date is equiv. 
to adate if adate is type of Date ?\n> \n> > > > btw, how I can find 'something' older than a month\n> > > select * from titles\n> > > where adate::date < 'today'::Date - '1 month'::timespan;\n> > this problem doesn't works:\n> > apod=> select * from titles\n> > apod-> where adate::date < 'today'::Date - '1 month'::timespan;\n> > ERROR: There is more than one possible operator '<'\n> > for types 'date' and 'datetime'\n> > but if I specify Datetime instead of Date it works, but still doesn't\n> > use index.\n> > apod=> explain select * from titles\n> > apod-> where adate::datetime < 'today'::Datetime\n> > apod-> - '1 month'::timespan;\n> > NOTICE: QUERY PLAN:\n> > Seq Scan on titles (cost=64.10 size=466 width=28)\n> \n> OK, try\n> \n> select * from titles\n> where adate < date('today'::Datetime - '1 month'::timespan);\n> \n> although there may (still) be problems with Postgres recognizing that\n> it could use an index when the \"constant\" is an expression.\n> \n> Let us know what you find out...\n\nNo, it's doing Seq Scan. I checked with 6.4.2 and current 6.5 cvs\n\n\tOleg\n\n> \n> - Tom\n> \n> -- \n> Thomas Lockhart\t\t\t\[email protected]\n> South Pasadena, California\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Tue, 4 May 1999 19:46:20 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] adate::Date is equiv. to adate if adate is type of Date\n\t?" } ]
[ { "msg_contents": "It is best to post these reports to a mailing list. I believe that\nthis specific topic was discussed recently. Look in the mhonarc\nmailing list archives on www.postgresql.org for more details.\n\nGood luck.\n\n - Tom\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n============================================================================\n\n POSTGRESQL BUG REPORT\n============================================================================\n\n\n\nYour name : Atanas Kolev\nYour email address : [email protected]\n\n\nSystem Configuration\n---------------------\n Architecture (example: Intel Pentium) : Intel Pentium II\n\n Operating System (example: Linux 2.0.26 ELF) : Red Hat Linux 5.2 ELF\nkernel 2.0.36-3\n\n PostgreSQL version (example: PostgreSQL-6.3.2) :\nPostgreSQL-6.3.2-10\n\n Compiler used (example: gcc 2.7.2) : gcc-2.7.2.3-14\n\n\nPlease enter a FULL description of your problem:\n------------------------------------------------\nI am building new data base named das and have created one table named\ndas_point. I have prepared text file with sequence of INSERT commands of\n\napproximately 5000 records, file named points_dbase.sql. 
When i try to\nrun it\nfrom psql interactive monitor using \\i points_dbase.sql 258 records are\nsuccessfully inserted and after that a lot of error message appears:\n\nquery buffer max length of 20000 exceeded\nquery line ignored\n4,\nquery buffer max length of 20000 exceeded\nquery line ignored\n0080,\nquery buffer max length of 20000 exceeded\nquery line ignored\n0001,\nquery buffer max length of 20000 exceeded\nquery line ignored\n 0.1,\nquery buffer max length of 20000 exceeded\nquery line ignored\n0001,\nquery buffer max length of 20000 exceeded\nquery line ignored\n -40,\nquery buffer max length of 20000 exceeded\nquery line ignored\n...\n...\n...\nquery buffer max length of 20000 exceeded\nquery line ignored\n'today'\nquery buffer max length of 20000 exceeded\nquery line ignored\n);\nPQexec() -- query is too long. Maximum length is 8191\nquery buffer max length of 20000 exceeded\nquery line ignored\nSegmentation fault (core dumped)\n\n\n\nPlease describe a way to repeat the problem. Please try to provide a\nconcise reproducible example, if at all possible:\n----------------------------------------------------------------------\n\nfiles das_creation and points_dbase.sql are attached in mail message.\nThe sequence of commands i am executing is next:\n\ncreatedb das\npsql das\n\\i das_creation\n\\i points_dbase.sql\n\n\nIf you know how this problem might be fixed, list the solution below:\n---------------------------------------------------------------------\n\nDo not know.", "msg_date": "Tue, 04 May 1999 13:17:49 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "[Fwd: Bug report.]" } ]
[ { "msg_contents": "> varchar-array.patch this patch adds support for arrays of bpchar() and\n> varchar(), which were always missing from postgres.\n\nFar be it from me to carp ... but I thought adding new features during\nbeta phase was frowned upon.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 04 May 1999 09:51:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "varchar-array.patch applied" }, { "msg_contents": "> > this patch adds support for arrays of bpchar() and\n> > varchar(), which were always missing from postgres.\n> ... but I thought adding new features during\n> beta phase was frowned upon.\n\nThis one is probably in a grey area, since the feature was explicitly\ndisabled back in the *very* early days by Jolly, but with a comment\nfrom her that she wasn't sure it was necessary.\n\nIf we have a hint of trouble, we can back it out. It sure would be\nnice to have Massimo also contribute a patch to the relevant\nregression tests ;)\n\n - Tom\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 04 May 1999 14:40:40 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] varchar-array.patch applied" }, { "msg_contents": "> > varchar-array.patch this patch adds support for arrays of bpchar() and\n> > varchar(), which were always missing from postgres.\n> \n> Far be it from me to carp ... but I thought adding new features during\n> beta phase was frowned upon.\n\nYes, but he sent it in before the freeze, and I was too busy to apply\nit. Seems only fair.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 4 May 1999 12:55:03 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] varchar-array.patch applied" }, { "msg_contents": "> \n> > > this patch adds support for arrays of bpchar() and\n> > > varchar(), which were always missing from postgres.\n> > ... but I thought adding new features during\n> > beta phase was frowned upon.\n> \n> This one is probably in a grey area, since the feature was explicitly\n> disabled back in the *very* early days by Jolly, but with a comment\n> from her that she wasn't sure it was necessary.\n> \n> If we have a hint of trouble, we can back it out. It sure would be\n> nice to have Massimo also contribute a patch to the relevant\n> regression tests ;)\n> \n> - Tom\n> \n> -- \n> Thomas Lockhart\t\t\t\[email protected]\n> South Pasadena, California\n> \n\nThe patch was first submitted before the beta phase but never applied.\n\nIt adds features which replace other features which were dropped in 6.3,\nspecifically char4 and friends with related array types. I had many tables\nwith _char8 and _char16 fields which I couldn't use anymore.\n\nThe new code is never executed unless you try to define arrays of\nvarchar or bpchar, so it should be safe for existing applications.\n\nAnyway I will try to write some regression tests.\n\n-- \nMassimo Dal Zotto\n\n+----------------------------------------------------------------------+\n| Massimo Dal Zotto email: [email protected] |\n| Via Marconi, 141 phone: ++39-0461534251 |\n| 38057 Pergine Valsugana (TN) www: http://www.cs.unitn.it/~dz/ |\n| Italy pgp: finger [email protected] |\n+----------------------------------------------------------------------+\n", "msg_date": "Tue, 4 May 1999 21:27:02 +0200 (MET DST)", "msg_from": "Massimo Dal Zotto <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] varchar-array.patch applied" } ]
[ { "msg_contents": "If a CREATE INDEX fails, the backend returns to the main loop without\nhaving closed the temporary files that are created for sorting.\nAn easy example that provokes this is\n\n\tcreate table titles (adate date);\n\tinsert into titles values ('today');\n\tcreate index titles_f_ind on titles (date(adate) date_ops);\n\tERROR: SQL-language function not supported in this context.\n\nafter which the backend has about a dozen more open files than it had\nbefore.\n\nIf you then try to create another index, you will crash for\nlack of free file descriptors (unless your kernel has a\nhigher-than-usual open-files-per-process limit). In any case, the\nsort temp files will never get deleted from your database directory.\n\nOffhand I'm not sure how to fix this. The system knows about releasing\nmemory after an elog(ERROR), but does it have any provisions for\nreleasing other kinds of resources? I suspect we need something\nanalogous to the on_shmem_exit() callback-registration list, but I\ndon't know whether it already exists. Comments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 04 May 1999 11:07:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Nasty resource-leak problem in sort code" }, { "msg_contents": "I wrote:\n> If a CREATE INDEX fails, the backend returns to the main loop without\n> having closed the temporary files that are created for sorting.\n> ...\n> If you then try to create another index, you will crash for\n> lack of free file descriptors (unless your kernel has a\n> higher-than-usual open-files-per-process limit). In any case, the\n> sort temp files will never get deleted from your database directory.\n>\n> Offhand I'm not sure how to fix this. 
The system knows about releasing\n> memory after an elog(ERROR), but does it have any provisions for\n> releasing other kinds of resources?\n\nAfter further thought it seems that adding code to release temporary\nfiles at transaction abort is the best solution. I propose the\nfollowing fix:\n\n1. In fd.c, invent the notion of a \"temporary file\", ie one that fd.c\nknows (a) should be discarded when it is closed and (b) should not\noutlive the current transaction. Add a function TemporaryOpenFile()\nthat selects an appropriate temp file name, does the open, and marks\nthe resulting VFD as a temp file. FileClose will implicitly act like\nFileUnlink on this file.\n\n2. Add a hook to xact.c that calls fd.c to close and delete all\nremaining temporary files at transaction commit or abort.\n\n3. Change psort.c and nodeHash.c (anyplace else?) to use this facility\ninstead of their existing ad-hoc temp file code.\n\nThis will fix several problems, including (a) failure to release file\ndescriptors after an error; (b) failure to delete temp files after\nan error; (c) risk of running out of FDs if multiple sorts are invoked\nconcurrently (not sure if this can actually happen in the current state\nof the code). Centralizing the generation of temp file names in one\nplace seems like a good idea too.\n\nUnless I hear objections, I'll work on this this weekend.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 May 1999 11:58:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Nasty resource-leak problem in sort code " }, { "msg_contents": "Then <[email protected]> spoke up and said:\n> After further thought it seems that adding code to release temporary\n> files at transaction abort is the best solution. I propose the\n> following fix:\n[ explanation snipped ]\n\nUhm, this all seems unnecessarily complicated. 
Shouldn't the process\nlook more like this:\nfp = open('tempfile');\nunlink('tempfile');\n\nThis way, when the file is closed, the space is freed. The only\ncomplication I can see is if backends need to share the file handle,\nor it needs to be re-opened. This works with all sorts of temp-file\nsituations.\n\nOf course, it's not NT safe, since I don't believe that NT provides\nfor deleting open files (NT file libs sucks.\n\n-- \n=====================================================================\n| JAVA must have been developed in the wilds of West Virginia. |\n| After all, why else would it support only single inheritance?? |\n=====================================================================\n| Finger [email protected] for my public key. |\n=====================================================================", "msg_date": "7 May 1999 12:18:32 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Nasty resource-leak problem in sort code " }, { "msg_contents": "[email protected] writes:\n> Uhm, this all seems unnecessarily complicated. Shouldn't the process\n> look more like this:\n> fp = open('tempfile');\n> unlink('tempfile');\n\nI did think of that, but it only solves the problem of making sure\nthe temp file goes away when you close it; it doesn't solve the problem\nof making sure that you close it. It's failure to release the file\ndescriptors that is causing backend crashes --- waste of diskspace is\nbad also, but it's not the critical issue IMHO. We *must* add cleanup\ncode to ensure that the FDs are closed after an abort; once we do that\nit's essentially no extra code to unlink the files at the same time.\n\nDoing the unlink right away would ensure that the temp file disappears\nif the backend crashes before it gets to transaction commit/abort.\nHowever, I regard that as a bad thing not a good thing, because it\nwould complicate debugging --- you might want to be able to see what\nhad been in the temp files. 
You normally have to clean up a core file\nto recover diskspace after a backend crash, so having to delete temp\nfiles too doesn't seem like a big shortcoming.\n\n> Of course, it's not NT safe, since I don't believe that NT provides\n> for deleting open files (NT file libs sucks.\n\nDon't think it works over NFS mounts, either.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 May 1999 14:08:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: Nasty resource-leak problem in sort code " }, { "msg_contents": "> This will fix several problems, including (a) failure to release file\n> descriptors after an error; (b) failure to delete temp files after\n> an error; (c) risk of running out of FDs if multiple sorts are invoked\n> concurrently (not sure if this can actually happen in the current state\n> of the code). Centralizing the generation of temp file names in one\n> place seems like a good idea too.\n> \n> Unless I hear objections, I'll work on this this weekend.\n\nThis sounds like a good plan.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 May 1999 19:01:45 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Nasty resource-leak problem in sort code" }, { "msg_contents": "> Uhm, this all seems unnecessarily complicated. Shouldn't the process\n> look more like this:\n> fp = open('tempfile');\n> unlink('tempfile');\n> \n> This way, when the file is closed, the space is freed. The only\n> complication I can see is if backends need to share the file handle,\n> or it needs to be re-opened. 
This works with all sorts of temp-file\n> situations.\n> \n> Of course, it's not NT safe, since I don't believe that NT provides\n> for deleting open files (NT file libs sucks.\n\nTwo problems. First, we support NT, so we have to behave a little bit\nto keep it happy. Second, Tom is concerned about leaking file handles,\nnot just the files themselves. He needs to call close() to release them\non transaction aborts.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 May 1999 19:03:35 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Nasty resource-leak problem in sort code]" } ]
[ { "msg_contents": "\nHi,\n\nI'm quite shocked, I hope this is a dream:\n\n> psql cs\nWelcome to the POSTGRESQL interactive sql monitor:\n Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n[PostgreSQL 6.5.0 on i686-pc-linux-gnu, compiled by gcc egcs-2.91.5]\n\n type \\? for help on slash commands\n type \\q to quit\n type \\g or terminate with semicolon to execute query\n You are currently connected to the database: cs\n\n\ncs=> select envelope from recipient where envelope=510349;\nenvelope\n--------\n 88320\n 510349\n 510349\n 510349\n 510349\n 510349\n 510349\n 510349\n 510349\n 510349\n 510349\n 510349\n(12 rows)\n\nTo my understanding, the first row should *never* have been selected. \n\nI had a strange problem tonight, where the backends stopped working,\nsaying something like this:\n\nUPDATE waiting\nINSERT waiting\n\nDeadlocks? How can these happen? I killed some backends and restarted\nthe server. It seems parts of the db are corrupted now. Back to 6.4.2?\n\nDirk\n", "msg_date": "Tue, 4 May 1999 17:30:06 +0200 (CEST)", "msg_from": "Dirk Lutzebaeck <[email protected]>", "msg_from_op": true, "msg_subject": "major flaw in 6.5beta1??? (UPDATE/INSERT waiting)" }, { "msg_contents": "Dirk Lutzebaeck <[email protected]> writes:\n> cs=> select envelope from recipient where envelope=510349;\n> [ returns a tuple that obviously fails the WHERE condition ]\n\nYipes. Do you have an index on the envelope field, and if so is\nit being used for this query? (Use EXPLAIN to check.) My guess\nis that the index is corrupted. Dropping and recreating the index\nwould probably set things right.\n\nOf course the real issue is how it got corrupted. 
Hiroshi found\nan important bug in btree a few days ago, and there is a discussion\ngoing on right now about lock-manager bugs that might possibly allow\nmultiple backends to corrupt data that they're concurrently updating.\nBut I have no idea if either of those explains your problem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 04 May 1999 12:20:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] major flaw in 6.5beta1??? (UPDATE/INSERT waiting) " }, { "msg_contents": "Tom Lane writes:\n > Dirk Lutzebaeck <[email protected]> writes:\n > > cs=> select envelope from recipient where envelope=510349;\n > > [ returns a tuple that obviously fails the WHERE condition ]\n > \n > Yipes. Do you have an index on the envelope field, and if so is\n > it being used for this query? (Use EXPLAIN to check.) My guess\n > is that the index is corrupted. Dropping and recreating the index\n > would probably set things right.\n\nYes, thanks, recreating the index cures the problem.\n\n > Of course the real issue is how it got corrupted. Hiroshi found\n > an important bug in btree a few days ago, and there is a discussion\n > going on right now about lock-manager bugs that might possibly allow\n > multiple backends to corrupt data that they're concurrently updating.\n > But I have no idea if either of those explains your problem.\n\nDoes this mean they can deadlock themselves? Is this also true for\n6.4.2? I probably switch back then.\n\nThanks, Dirk\n", "msg_date": "Wed, 5 May 1999 09:30:35 +0200 (CEST)", "msg_from": "Dirk Lutzebaeck <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] major flaw in 6.5beta1??? (UPDATE/INSERT waiting) " }, { "msg_contents": "Dirk Lutzebaeck writes:\n > Tom Lane writes:\n > > Dirk Lutzebaeck <[email protected]> writes:\n > > > cs=> select envelope from recipient where envelope=510349;\n > > > [ returns a tuple that obviously fails the WHERE condition ]\n > > \n > > Yipes. 
Do you have an index on the envelope field, and if so is\n > > it being used for this query? (Use EXPLAIN to check.) My guess\n > > is that the index is corrupted. Dropping and recreating the index\n > > would probably set things right.\n > \n > Yes, thanks, recreating the index cures the problem.\n\nHere is some more info: the automatic vacuum tonight gave the\nfollowing errors:\n\nvacuum analyze;\nNOTICE: Index recipient_oid_index: NUMBER OF INDEX' TUPLES (1474) IS NOT THE SAME AS HEAP' (1473)\nNOTICE: Index recipient_addr_index: NUMBER OF INDEX' TUPLES (1474) IS NOT THE SAME AS HEAP' (1473)\nNOTICE: Index recipient_mem_index: NUMBER OF INDEX' TUPLES (1474) IS NOT THE SAME AS HEAP' (1473)\nNOTICE: Index recipient_env_index: NUMBER OF INDEX' TUPLES (1474) IS NOT THE SAME AS HEAP' (1473)\nNOTICE: Index recipient_oid_index: NUMBER OF INDEX' TUPLES (1474) IS NOT THE SAME AS HEAP' (1473)\nNOTICE: Index recipient_addr_index: NUMBER OF INDEX' TUPLES (1474) IS NOT THE SAME AS HEAP' (1473)\nNOTICE: Index recipient_mem_index: NUMBER OF INDEX' TUPLES (1474) IS NOT THE SAME AS HEAP' (1473)\nNOTICE: Index recipient_env_index: NUMBER OF INDEX' TUPLES (1474) IS NOT THE SAME AS HEAP' (1473)\nVACUUM\n", "msg_date": "Wed, 5 May 1999 09:54:26 +0200 (CEST)", "msg_from": "Dirk Lutzebaeck <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] major flaw in 6.5beta1??? (UPDATE/INSERT waiting) " } ]
[ { "msg_contents": "i do have a few fields (combo boxes based on a query) that sorted on a \ncalculated field -- but they seem to work fine in other forms. --don't \nthink that was it. but, based on your postulations i re-did a few of the \nqueries to NOT sort on the calculated fields. still bombed though. . . \n so, i deleted the link to the particular table, and then re-linked to it, \nand to my surprise, i can now pull up the form without it bombing. i can \neven zip through all the records (36k of them) if i wish. but now it still \nhas a problem -- the recordset is not updateable. if i open the table \ndirectly (not through any query, and again, from within access97, pgsql \n6.4.2), the table itself is not updateable. the table DOES have a primary \nkey -- a DUAL FIELD primary key. I found over the last few weeks that \nAccess97 will treat any ODBC table without a primary key defined as not \nupdateable, but will it also do the same for DUAL FIELD primary key ODBC \ntables?? it appears so, but can anyone confirm it??\n\njt\n\n-----Original Message-----\nFrom:\tJose Soares [SMTP:[email protected]]\nSent:\tTuesday, May 04, 1999 8:42 AM\nTo:\tJT Kirkpatrick; hackers\nSubject:\tRe: [INTERFACES] error message\n\nJT Kirkpatrick ha scritto:\n\n> hi, i'm using access97 linked to postgres(6.4.2) tables through the new\n> v.6.4 odbc. i can open a form, it shows me data for an initial record, \nand\n> then bombs. here is the message in the log file -- i can't figure out \nwhy\n> it is bombing. does anyone have a clue?? do those \"-\" or \"/\" in various\n> \"vinvnum\" fields cause problems?? 
it shows valid data first, waits for a\n> second, and then bombs!\n>\n\nI had a similar error when I tried to order retrieved data by a field not \nin\nthe table or a calculated field.\nSeems that Access request an order by a field with an unknown type.\nI can emulate a similar message as:\n\nselect 'AAAAAA' union select 'ZZZZZZ' order by 1 asc;\nERROR: Unable to identify a binary operator '>' for types unknown and \nunknown\n\nselect 'aaaaaa' union select 'zzzzzz' order by 1;\nERROR: Unable to identify a binary operator '<' for types unknown and \nunknown\n\nMay be we need a default for UNKNOWN types (what do you think Thomas, if we\nmake unknown type = text type?)\n\nAny way. Try these functions:\n\ncreate function unknown_lt(unknown,unknown) returns bool as\n'declare\n i1 text;\n i2 text;\nbegin\n i1:= $1;\n i2:= $2;\n return (i1 < i2);\nend; ' language 'plpgsql';\nCREATE\n\ncreate operator < (\n leftarg=unknown,\n rightarg=unknown,\n procedure=unknown_lt,\n commutator='<',\n negator='>=',\n restrict=eqsel,\n join=eqjoinsel\n );\nCREATE\n\ncreate function unknown_gt(unknown,unknown) returns bool as\n'declare\n i1 text;\n i2 text;\nbegin\n i1:= $1;\n i2:= $2;\n return (i1 > i2);\nend; ' language 'plpgsql';\nCREATE\ncreate operator > (\n leftarg=unknown,\n rightarg=unknown,\n procedure=unknown_gt,\n commutator='>',\n negator='<=',\n restrict=eqsel,\n join=eqjoinsel\n );\nCREATE\n\nselect 'AAAAAA' union select 'ZZZZZZ' order by 1 asc;\n?column?\n--------\nAAAAAA\nZZZZZZ\n(2 rows)\n\nselect 'aaaaaa' union select 'zzzzzz' order by 1 desc;\n?column?\n--------\nzzzzzz\naaaaaa\n(2 rows)\n\nEOF\n\n______________________________________________________________\nPostgreSQL 6.5.0 on i586-pc-linux-gnu, compiled by gcc 2.7.2.3\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nJose'\n", "msg_date": "Tue, 4 May 1999 11:37:22 -0400", "msg_from": "JT Kirkpatrick <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [INTERFACES] error message" }, { 
"msg_contents": "JT Kirkpatrick ha scritto:\n\n> i do have a few fields (combo boxes based on a query) that sorted on a\n> calculated field -- but they seem to work fine in other forms. --don't\n> think that was it. but, based on your postulations i re-did a few of the\n> queries to NOT sort on the calculated fields. still bombed though. . .\n> so, i deleted the link to the particular table, and then re-linked to it,\n> and to my surprise, i can now pull up the form without it bombing. i can\n> even zip through all the records (36k of them) if i wish. but now it still\n> has a problem -- the recordset is not updateable. if i open the table\n> directly (not through any query, and again, from within access97, pgsql\n> 6.4.2), the table itself is not updateable. the table DOES have a primary\n> key -- a DUAL FIELD primary key. I found over the last few weeks that\n> Access97 will treat any ODBC table without a primary key defined as not\n> updateable, but will it also do the same for DUAL FIELD primary key ODBC\n> tables?? 
it appears so, but can anyone confirm it??\n\nI'm also using access97 with postodbc 6.40.0005 and PostgreSQL v6.5beta1; it\nworks fine, and I have primary\nkeys like:\n\nTable = figure_pkey\n+----------------------------------+----------------------------------+-------+\n\n| Field | Type | Length|\n\n+----------------------------------+----------------------------------+-------+\n\n| azienda | char() | 16 |\n\n| tipo | char() | 2 |\n\n| gruppo | int4 | 4 |\n\n| inizio_attivita | date | 4 |\n\n+----------------------------------+----------------------------------+-------+\n\nWhich version of psqlodbc are you using?\nI know that Access uses dynasets or snapshots to connect to the database.\n- If you attach a table without defining a primary key, Access connects to the\ntable as a snapshot and you can't write to the table.\n- If you define a primary key, then Access connects to the table as a dynaset and\nyou are able to write to the table.\n- You must define the primary key at table linkage time.\n- If you have the ODBC manager option RECOGNIZE UNIQUE INDEXES checked then you\ndon't need to define it,\n because Access recognizes the primary key automatically.\n- If you have the ODBC manager option READ ONLY checked then all linked tables\nare read only.\n--\n______________________________________________________________\nPostgreSQL 6.5.0 on i586-pc-linux-gnu, compiled by gcc 2.7.2.3\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nJose'", "msg_date": "Wed, 05 May 1999 15:32:29 +0200", "msg_from": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RE: [INTERFACES] error message" } ]
[ { "msg_contents": "I want to change hashjoin's use of a fixed-size overflow area for tuples\nthat don't fit into the hashbucket they ought to go in. Since it's\nalways possible for an improbably large number of tuples to hash into the\nsame hashbucket, the overflow area itself can overflow; without the\nability to recover from that, hashjoin is inherently unreliable.\nSo I think this is an important thing to fix.\n\nTo do this, I need to be able to allocate chunks of space that I will\nlater want to give back all at once (at the end of a hash pass).\nSeems to me like a job for palloc and a special memory context ---\nbut I see no way in mcxt.h to create a new memory context. How do\nI do that? Also, I'd want the new context to be a \"sub-context\" of\nthe regular execution context, in the sense that it should automatically\nget released if we exit via elog(ERROR). What are the appropriate\ncalls to be using for this? If there's documentation about this stuff,\nI haven't found it :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 04 May 1999 12:07:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Advice wanted on backend memory management" }, { "msg_contents": "Tom Lane wrote:\n> \n> I want to change hashjoin's use of a fixed-size overflow area for tuples\n> that don't fit into the hashbucket they ought to go in. Since it's\n> always possible for an improbably large number of tuples to hash into the\n> same hashbucket, the overflow area itself can overflow; without the\n> ability to recover from that, hashjoin is inherently unreliable.\n> So I think this is an important thing to fix.\n> \n> To do this, I need to be able to allocate chunks of space that I will\n> later want to give back all at once (at the end of a hash pass).\n> Seems to me like a job for palloc and a special memory context ---\n> but I see no way in mcxt.h to create a new memory context. How do\n> I do that? 
Also, I'd want the new context to be a \"sub-context\" of\n\nNo way :(\nStartPortalAllocMode could help but - portalmem.c:\n/*\n * StartPortalAllocMode \n * Starts a new block of portal heap allocation using mode and limit;\n * the current block is disabled until EndPortalAllocMode is called.\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nI'm unhappy with this allocation block stacking for quite long time :(\n\nTry to pfree chunks \"by hand\".\n\nVadim\n", "msg_date": "Wed, 05 May 1999 09:18:13 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Advice wanted on backend memory management" }, { "msg_contents": "Vadim Mikheev <[email protected]> writes:\n> Try to pfree chunks \"by hand\".\n\nYeah, that's what I'm trying to avoid. That would basically mean\nduplicating the logic that's in aset.c, which is pretty silly...\n\nAfter some more looking around, it looks like I could create a\n\"portal\" as is done in vacuum or spi. But portals seem to have\na heckuva lot of features that I don't understand the uses for.\nAnyone have any comments or documentation about them?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 05 May 1999 10:01:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Advice wanted on backend memory management " } ]
[ { "msg_contents": "\nI don't know what's happening, but there's an unbelievable mess in the mirroring\nsystem. For the last two weeks, I'm receiving tens of MB of duplicate messages\nin the mhonarc archive. The mirror has reached 1 GB and is still growing. I\nthink the problem is to be searched in mhonarc...\n\n/html/mhonarc/pgsql-bugs has reached 89 MB in size and I don't believe postgres\nhas so many bugs :^)\n\nI did a ls -lS in /html/mhonarc/pgsql-bugs/1998-11, and this is a list of all\nthe duplicates of the biggest message:\n\n-rw-r--r-- 1 mirror mirror 157579 May 4 11:02 msg00027.html\n-rw-r--r-- 1 mirror mirror 157579 May 4 11:02 msg00399.html\n-rw-r--r-- 1 mirror mirror 157501 Apr 24 11:01 msg00058.html\n-rw-r--r-- 1 mirror mirror 157501 Apr 25 11:01 msg00089.html\n-rw-r--r-- 1 mirror mirror 157501 Apr 26 11:01 msg00120.html\n-rw-r--r-- 1 mirror mirror 157501 Apr 27 11:01 msg00151.html\n-rw-r--r-- 1 mirror mirror 157501 Apr 28 11:01 msg00182.html\n-rw-r--r-- 1 mirror mirror 157501 Apr 29 11:01 msg00213.html\n-rw-r--r-- 1 mirror mirror 157501 Apr 30 11:01 msg00244.html\n-rw-r--r-- 1 mirror mirror 157501 May 1 11:01 msg00275.html\n-rw-r--r-- 1 mirror mirror 157501 May 2 11:01 msg00306.html\n-rw-r--r-- 1 mirror mirror 157501 May 3 11:01 msg00337.html\n-rw-r--r-- 1 mirror mirror 157501 May 4 11:02 msg00368.html\n\nPlease, do something ASAP, the partition I reserved for postgres mirror is not\nvery big and it will be filled soon.... not to talk about the (very) expensive\nitalian bandwidth I'm wasting :^)\n\nThanks!\n\n-- \n Daniele\n\n-------------------------------------------------------------------------------\n Daniele Orlandi - Utility Line Italia - http://www.orlandi.com\n Via Mezzera 29/A - 20030 - Seveso (MI) - Italy\n-------------------------------------------------------------------------------\n", "msg_date": "Tue, 04 May 1999 20:14:24 +0200", "msg_from": "Daniele Orlandi <[email protected]>", "msg_from_op": true, "msg_subject": "Mirror mess... 
(urgent)" }, { "msg_contents": "I also noticed that ! I think mirror site doesn't need mhonarc archive\nbecause search interface to it works only on master site. It's not \nvery difficult to set up rsync.\n\n\tOleg\n\nOn Tue, 4 May 1999, Daniele Orlandi wrote:\n\n> Date: Tue, 04 May 1999 20:14:24 +0200\n> From: Daniele Orlandi <[email protected]>\n> To: [email protected]\n> Subject: [HACKERS] Mirror mess... (urgent)\n> \n> \n> I don't know what's happening, but there's an unbelievable mess in the mirroring\n> system. For the last two weeks, I'm receiving tens of MB of duplicate messages\n> in the mhonarc archive. The mirror has reached 1 GB and is still growing. I\n> think the problem is to be searched in mhonarc...\n> \n> /html/mhonarc/pgsql-bugs has reached 89 MB in size and I don't believe postgres\n> has so many bugs :^)\n> \n> I did a ls -lS in /html/mhonarc/pgsql-bugs/1998-11, and this is a list of all\n> the duplicates of the biggest message:\n> \n> -rw-r--r-- 1 mirror mirror 157579 May 4 11:02 msg00027.html\n> -rw-r--r-- 1 mirror mirror 157579 May 4 11:02 msg00399.html\n> -rw-r--r-- 1 mirror mirror 157501 Apr 24 11:01 msg00058.html\n> -rw-r--r-- 1 mirror mirror 157501 Apr 25 11:01 msg00089.html\n> -rw-r--r-- 1 mirror mirror 157501 Apr 26 11:01 msg00120.html\n> -rw-r--r-- 1 mirror mirror 157501 Apr 27 11:01 msg00151.html\n> -rw-r--r-- 1 mirror mirror 157501 Apr 28 11:01 msg00182.html\n> -rw-r--r-- 1 mirror mirror 157501 Apr 29 11:01 msg00213.html\n> -rw-r--r-- 1 mirror mirror 157501 Apr 30 11:01 msg00244.html\n> -rw-r--r-- 1 mirror mirror 157501 May 1 11:01 msg00275.html\n> -rw-r--r-- 1 mirror mirror 157501 May 2 11:01 msg00306.html\n> -rw-r--r-- 1 mirror mirror 157501 May 3 11:01 msg00337.html\n> -rw-r--r-- 1 mirror mirror 157501 May 4 11:02 msg00368.html\n> \n> Please, do something ASAP, the partition I reserved for postgres mirror is not\n> very big and it will be filled soon.... 
not to talk about the (very) expensive\n> italian bandwidth I'm wasting :^)\n> \n> Thanks!\n> \n> -- \n> Daniele\n> \n> -------------------------------------------------------------------------------\n> Daniele Orlandi - Utility Line Italia - http://www.orlandi.com\n> Via Mezzera 29/A - 20030 - Seveso (MI) - Italy\n> -------------------------------------------------------------------------------\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Tue, 4 May 1999 22:22:19 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Mirror mess... (urgent)" }, { "msg_contents": "On Tue, 4 May 1999, Oleg Bartunov wrote:\n\n> I also noticed that ! I think mirror site doesn't need mhonarc archive\n> because search interface to it works only on master site. It's not \n> very difficult to setup rsync.\n\nI'm 50-50 on this right now...how many ppl look at the archives without\nthe search engine? If nobody, then having them on the mirror site is, in\nfact useless...if ppl do peruse the archives without using the search\nengine, then it is useful to have them on the mirror site...\n\n\n\n > \tOleg\n> \n> On Tue, 4 May 1999, Daniele Orlandi wrote:\n> \n> > Date: Tue, 04 May 1999 20:14:24 +0200\n> > From: Daniele Orlandi <[email protected]>\n> > To: [email protected]\n> > Subject: [HACKERS] Mirror mess... (urgent)\n> > \n> > \n> > I don't know what's happening, but there's an unbelievable mess in the mirroring\n> > system. For the last two weeks, I'm receiving tens of MB of duplicate messages\n> > in the mhonarc archive. The mirror has reached 1 GB and is still growing. 
I\n> > think the problem is to be searched in mhonarc...\n> > \n> > /html/mhonarc/pgsql-bugs has reached 89 MB in size and I don't believe postgres\n> > has so many bugs :^)\n> > \n> > I did a ls -lS in /html/mhonarc/pgsql-bugs/1998-11, and this is a list of all\n> > the duplicates of the biggest message:\n> > \n> > -rw-r--r-- 1 mirror mirror 157579 May 4 11:02 msg00027.html\n> > -rw-r--r-- 1 mirror mirror 157579 May 4 11:02 msg00399.html\n> > -rw-r--r-- 1 mirror mirror 157501 Apr 24 11:01 msg00058.html\n> > -rw-r--r-- 1 mirror mirror 157501 Apr 25 11:01 msg00089.html\n> > -rw-r--r-- 1 mirror mirror 157501 Apr 26 11:01 msg00120.html\n> > -rw-r--r-- 1 mirror mirror 157501 Apr 27 11:01 msg00151.html\n> > -rw-r--r-- 1 mirror mirror 157501 Apr 28 11:01 msg00182.html\n> > -rw-r--r-- 1 mirror mirror 157501 Apr 29 11:01 msg00213.html\n> > -rw-r--r-- 1 mirror mirror 157501 Apr 30 11:01 msg00244.html\n> > -rw-r--r-- 1 mirror mirror 157501 May 1 11:01 msg00275.html\n> > -rw-r--r-- 1 mirror mirror 157501 May 2 11:01 msg00306.html\n> > -rw-r--r-- 1 mirror mirror 157501 May 3 11:01 msg00337.html\n> > -rw-r--r-- 1 mirror mirror 157501 May 4 11:02 msg00368.html\n> > \n> > Please, do something ASAP, the partition I reserved for postgres mirror is not\n> > very big and it will be filled soon.... 
not to talk about the (very) expensive\n> > italian bandwidth I'm wasting :^)\n> > \n> > Thanks!\n> > \n> > -- \n> > Daniele\n> > \n> > -------------------------------------------------------------------------------\n> > Daniele Orlandi - Utility Line Italia - http://www.orlandi.com\n> > Via Mezzera 29/A - 20030 - Seveso (MI) - Italy\n> > -------------------------------------------------------------------------------\n> > \n> \n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 4 May 1999 20:24:41 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Mirror mess... (urgent)" }, { "msg_contents": "\nthanks for pointing this out...looking into it right now :(\n\n\nOn Tue, 4 May 1999, Daniele Orlandi wrote:\n\n> \n> I don't know what's happening, but there's an unbelievable mess in the mirroring\n> system. For the last two weeks, I'm receiving tens of MB of duplicate messages\n> in the mhonarc archive. The mirror has reached 1 GB and is still growing. 
I\n> think the problem is to be searched in mhonarc...\n> \n> /html/mhonarc/pgsql-bugs has reached 89 MB in size and I don't believe postgres\n> has so many bugs :^)\n> \n> I did a ls -lS in /html/mhonarc/pgsql-bugs/1998-11, and this is a list of all\n> the duplicates of the biggest message:\n> \n> -rw-r--r-- 1 mirror mirror 157579 May 4 11:02 msg00027.html\n> -rw-r--r-- 1 mirror mirror 157579 May 4 11:02 msg00399.html\n> -rw-r--r-- 1 mirror mirror 157501 Apr 24 11:01 msg00058.html\n> -rw-r--r-- 1 mirror mirror 157501 Apr 25 11:01 msg00089.html\n> -rw-r--r-- 1 mirror mirror 157501 Apr 26 11:01 msg00120.html\n> -rw-r--r-- 1 mirror mirror 157501 Apr 27 11:01 msg00151.html\n> -rw-r--r-- 1 mirror mirror 157501 Apr 28 11:01 msg00182.html\n> -rw-r--r-- 1 mirror mirror 157501 Apr 29 11:01 msg00213.html\n> -rw-r--r-- 1 mirror mirror 157501 Apr 30 11:01 msg00244.html\n> -rw-r--r-- 1 mirror mirror 157501 May 1 11:01 msg00275.html\n> -rw-r--r-- 1 mirror mirror 157501 May 2 11:01 msg00306.html\n> -rw-r--r-- 1 mirror mirror 157501 May 3 11:01 msg00337.html\n> -rw-r--r-- 1 mirror mirror 157501 May 4 11:02 msg00368.html\n> \n> Please, do something ASAP, the partition I reserved for postgres mirror is not\n> very big and it will be filled soon.... not to talk about the (very) expensive\n> italian bandwidth I'm wasting :^)\n> \n> Thanks!\n> \n> -- \n> Daniele\n> \n> -------------------------------------------------------------------------------\n> Daniele Orlandi - Utility Line Italia - http://www.orlandi.com\n> Via Mezzera 29/A - 20030 - Seveso (MI) - Italy\n> -------------------------------------------------------------------------------\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 4 May 1999 20:26:43 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Mirror mess... 
(urgent)" } ]
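Daniele's ls -lS evidence relies on equal file sizes to spot the duplicated mhonarc messages. Equal sizes only make files duplicate *candidates*; a content hash confirms they are byte-for-byte copies. A minimal sketch using 64-bit FNV-1a — illustrative only, not part of any mirroring tooling discussed in the thread:

```c
#include <stdio.h>
#include <stdint.h>
#include <assert.h>

/* 64-bit FNV-1a over a file's bytes.  Two same-size files with equal
 * hashes are almost certainly identical copies.  Returns 0 if the file
 * cannot be opened. */
static uint64_t
file_fnv1a(const char *path)
{
	FILE	   *f = fopen(path, "rb");
	uint64_t	h = 14695981039346656037ULL;	/* FNV offset basis */
	int			c;

	if (f == NULL)
		return 0;
	while ((c = fgetc(f)) != EOF)
	{
		h ^= (uint64_t) (unsigned char) c;
		h *= 1099511628211ULL;					/* FNV prime */
	}
	fclose(f);
	return h;
}
```

Running this over each group of same-size msgNNNNN.html files would tell the mirror operator whether they are true duplicates (safe to deduplicate) or distinct messages that merely happen to be the same length.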