[
{
"msg_contents": "\n\nHi,\n\nI already asked this on pgsql-general and didn't get any responses.\n\nI'm using PostgreSQL v6.5beta1 (mainly because it said it supports the INTERSECT\ncommand) and iodbc 2.50.2. I've compiled everything ok and I can start the\npostmaster and issue commands from psql. However, when I try to connect to the\ndatabase using an ODBC program I get the following error from the postmaster:\n\npq_recvbuf: unexpected EOF on client connection\n\nAny ideas as to what might cause this?\n\nThanks,\nRich\n\n\n",
"msg_date": "Tue, 4 May 1999 16:25:33 -0500",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "ODBC with postgresql v6.5beta1"
}
]
[
{
"msg_contents": "Though I did revert back to 6.3.2, I ran into a rather bothering\ncore dump when running psql after installing 6.4.2. I'm on a\nglibc2.0.7pre6 linux 2.2.7 box, and postgres was configured in the\nplainest configuration, but it crashes right out of the box. I've seen\nsome messages before on the matter, but nothing conclusive. Have other\npeople/developers ran into this before? I'm thinking 'glibc problem'.\n\nThanks,\ncr\n\n",
"msg_date": "Tue, 4 May 1999 21:23:02 -0300 (EST)",
"msg_from": "Christian <[email protected]>",
"msg_from_op": true,
"msg_subject": "6.4.2 core dumping."
}
]
[
{
"msg_contents": "\nI *think* this has gone around already, but trying to dump one v6.4.2\ndatabase into another v6.4.2 server, the load failed with:\n\nCREATE RULE \"_RETreg_view\" AS ON SELECT TO \"reg_view\" DO INSTEAD SELECT\n\"p\".\"userid\", \"r\".\"first_name\" || ' '::\"text\" || \"r\".\"last_name\",\n\"p\".\"c_n\", \"p\".\"ps\", \"p\".\"mgmt\" FROM \"registration\" \"r\", \"password\" \"p\"\nWHERE \"r\".\"userid\" = \"p\".\"userid\";\nERROR: parser: parse error at or near \"||\"\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 4 May 1999 22:33:08 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "|| in v6.4.2 ..."
},
{
"msg_contents": "> \n> I *think* this has gone around already, but trying to dump one v6.4.2\n> database into another v6.4.2 server, the load failed with:\n> \n> CREATE RULE \"_RETreg_view\" AS ON SELECT TO \"reg_view\" DO INSTEAD SELECT\n> \"p\".\"userid\", \"r\".\"first_name\" || ' '::\"text\" || \"r\".\"last_name\",\n> \"p\".\"c_n\", \"p\".\"ps\", \"p\".\"mgmt\" FROM \"registration\" \"r\", \"password\" \"p\"\n> WHERE \"r\".\"userid\" = \"p\".\"userid\";\n> ERROR: parser: parse error at or near \"||\"\n\nThis is fixed in 6.5 beta.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 4 May 1999 21:42:23 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] || in v6.4.2 ..."
}
]
[
{
"msg_contents": "Hi,\n\nI have two patches for 6.5.0:\n\narrayfuncs.patch\tfixes a small bug in my previous patches for arrays\n\narray-regress.patch\tadds _bpchar and _varchar to regression tests\n\n-- \nMassimo Dal Zotto\n\n+----------------------------------------------------------------------+\n| Massimo Dal Zotto email: [email protected] |\n| Via Marconi, 141 phone: ++39-0461534251 |\n| 38057 Pergine Valsugana (TN) www: http://www.cs.unitn.it/~dz/ |\n| Italy pgp: finger [email protected] |\n+----------------------------------------------------------------------+",
"msg_date": "Wed, 5 May 1999 10:11:53 +0200 (MET DST)",
"msg_from": "Massimo Dal Zotto <[email protected]>",
"msg_from_op": true,
"msg_subject": "new patches"
},
{
"msg_contents": "Applied.\n\n\n[Charset ISO-8859-1 unsupported, filtering to ASCII...]\n> Hi,\n> \n> I have two patches for 6.5.0:\n> \n> arrayfuncs.patch\tfixes a small bug in my previous patches for arrays\n> \n> array-regress.patch\tadds _bpchar and _varchar to regression tests\n> \n> -- \n> Massimo Dal Zotto\n> \n> +----------------------------------------------------------------------+\n> | Massimo Dal Zotto email: [email protected] |\n> | Via Marconi, 141 phone: ++39-0461534251 |\n> | 38057 Pergine Valsugana (TN) www: http://www.cs.unitn.it/~dz/ |\n> | Italy pgp: finger [email protected] |\n> +----------------------------------------------------------------------+\n> \n\n[application/octet-stream is not supported, skipping...]\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 5 May 1999 17:36:44 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] new patches"
}
]
[
{
"msg_contents": "\n\nPrimary, Sorry for my english !!!\n\n\nHi,\n\nI �m developing a several test database, for study. But I can�t create\nany function, every time ERROR appears.\nI write a C code in a plus2.c file\n\n#include <pgsql/postgres.h>\n\nint4 plus2(int4 a,int4 b)\n{\nreturn(a+b);\n}\n\nmain() /* For getting *.out*/\n{\n}\n\n, then I compile it (I made this of two ways)\n\ncc -c plus2.c /* I get the object file *.o */\n\nor\n\ncc plus2.c plus2.out /*I get the *.out file */\n\nWhen I create the function on postgresql, occurs this\n\ndb=> create function plus2(int4,int4) returns int4 as\n'/var/lib/pgsql/plus2.o' language 'c';\nCREATE\n\nor\n\ndb=> create function plus2(int4,int4) returns int4 as\n'/var/lib/pgsql/plus2.out' language 'c';\nCREATE\n\nWherever, when I call the function plus2 from Sql sentence ...\n\ndb=> select plus2(6,6);\nERROR: Load of file /var/lib/pgsql/plus2.o failed:\n(�@(�@/pgsql/plus2.o: ELF file's phentsize not the expected size\n\nor, with plus2.out\n\ndb=> select plus2(6,6);\nPQexec() -- Request was sent to backend, but backend closed the channel\nbefore responding.\nThis probably means the backend terminated abnormally before or while\nprocessing the request.\n\nIf anybody can help me, thank !!!\n\nPD: I don�t disturb you, if I don�t need it really.\n\n\n\n",
"msg_date": "Wed, 05 May 1999 03:44:01 -0500",
"msg_from": "Carlos Peralta Ramirez <[email protected]>",
"msg_from_op": true,
"msg_subject": "which guru know this ??"
}
]
[
{
"msg_contents": "www2(root)/usr/local/pgsql/src/src/backend/port/dynloader>\ndiff freebsd.c freebsd.c.orig \n71c71\n< sprintf(error_message, \"dlopen '%s' failed. (%s)\", file, dlerror() );\n---\n> sprintf(error_message, \"dlopen (%s) failed\", file);\n\n\n\n---\nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n* there will come soft rains ...\n",
"msg_date": "Wed, 05 May 1999 16:50:02 +0400 (MSD)",
"msg_from": "Dmitry Samersoff <[email protected]>",
"msg_from_op": true,
"msg_subject": "Small improvement"
},
{
"msg_contents": "Applied.\n\n[Charset KOI8-R unsupported, filtering to ASCII...]\n> www2(root)/usr/local/pgsql/src/src/backend/port/dynloader>\n> diff freebsd.c freebsd.c.orig \n> 71c71\n> < sprintf(error_message, \"dlopen '%s' failed. (%s)\", file, dlerror() );\n> ---\n> > sprintf(error_message, \"dlopen (%s) failed\", file);\n> \n> \n> \n> ---\n> Dmitry Samersoff, [email protected], ICQ:3161705\n> http://devnull.wplus.net\n> * there will come soft rains ...\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 5 May 1999 09:51:12 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Small improvement"
}
]
[
{
"msg_contents": "One of my customer unable to install plpgsql\nwith next message:\n\nwww2(dms)~>psql -d www -c \"select tst(); \"\nERROR: Load of file /usr/local/pgsql/lib/plpgsql.so failed: dlopen\n'/usr/local/pgsql/lib/plpgsql.so' failed. (/usr/local/pgsql/lib/plpgsql.so:\nUndefined symbol \"SPI_tuptable\")\nwww2(dms)~>\n\nwhat does it mean?\n\nEnv:\nFreeBSD www2.sptimes.ru 3.1-RELEASE FreeBSD 3.1-RELEASE #4: Tue Mar 23 13:18:41\nMSK 1999 [email protected]:/usr/src/sys/compile/SPTIMES i386\n\nPostgres 6.4.2 release\n\nPS:\nHow about adding something like -a key to pg_version to report full\nrelease and compile information?\n\n\n\n---\nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n* there will come soft rains ...\n",
"msg_date": "Wed, 05 May 1999 16:58:57 +0400 (MSD)",
"msg_from": "Dmitry Samersoff <[email protected]>",
"msg_from_op": true,
"msg_subject": "What does it mean?"
},
{
"msg_contents": ">\n> One of my customer unable to install plpgsql\n> with next message:\n>\n> www2(dms)~>psql -d www -c \"select tst(); \"\n> ERROR: Load of file /usr/local/pgsql/lib/plpgsql.so failed: dlopen\n> '/usr/local/pgsql/lib/plpgsql.so' failed. (/usr/local/pgsql/lib/plpgsql.so:\n> Undefined symbol \"SPI_tuptable\")\n> www2(dms)~>\n>\n> what does it mean?\n>\n> Env:\n> FreeBSD www2.sptimes.ru 3.1-RELEASE FreeBSD 3.1-RELEASE #4: Tue Mar 23 13:18:41\n> MSK 1999 [email protected]:/usr/src/sys/compile/SPTIMES i386\n>\n> Postgres 6.4.2 release\n\n Doesn't look to me like a v6.4.2. SPI_tuptable is a global\n pointer which is defined in .../backend/executor/spi.c.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Thu, 6 May 1999 14:24:02 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] What does it mean?"
},
{
"msg_contents": " One of my customer unable to install plpgsql\nwith next message:\n \nwww2(dms)~>psql -d www -c \"select tst(); \"\nERROR: Load of file /usr/local/pgsql/lib/plpgsql.so failed: dlopen\n'/usr/local/pgsql/lib/plpgsql.so' failed. (/usr/local/pgsql/lib/plpgsql.so:\nUndefined symbol \"SPI_tuptable\")\nwww2(dms)~>\n\nThis problem exists onlty on FreeBSD 3.1\nthe same sources build on 2.2.8 works properly.\n \nwhat does it mean?\n \nEnv:\n FreeBSD www2.sptimes.ru 3.1-RELEASE FreeBSD 3.1-RELEASE #4: Tue Mar 23\n 13:18:41\n MSK 1999 [email protected]:/usr/src/sys/compile/SPTIMES i386\n \nPostgres 6.4.2 release\n \nPS:\n How about adding something like -a key to pg_version to report full\n release and compile information?\n \n\n---\nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n* there will come soft rains ...\n",
"msg_date": "Sun, 09 May 1999 19:23:53 +0400 (MSD)",
"msg_from": "Dmitry Samersoff <[email protected]>",
"msg_from_op": true,
"msg_subject": "Problem installing plpgsql"
},
{
"msg_contents": "Dmitry Samersoff wrote:\n\n>\n> One of my customer unable to install plpgsql\n> with next message:\n>\n> www2(dms)~>psql -d www -c \"select tst(); \"\n> ERROR: Load of file /usr/local/pgsql/lib/plpgsql.so failed: dlopen\n> '/usr/local/pgsql/lib/plpgsql.so' failed. (/usr/local/pgsql/lib/plpgsql.so:\n> Undefined symbol \"SPI_tuptable\")\n> www2(dms)~>\n>\n> This problem exists onlty on FreeBSD 3.1\n> the same sources build on 2.2.8 works properly.\n>\n> what does it mean?\n\n It means that the dynamic loader isn't able to resolve a\n reference to the global symbol \"SPI_tuptable\" from the\n PL/pgSQL shared object into the backend. The symbol\n \"SPI_tuptable\" is declared in .../src/backend/executor/spi.c\n as\n\n DLLIMPORT SPITupleTable *SPI_tuptable;\n\n Since this symbol is referenced from another place in the\n backend's static code (in ruleutils.c) I'm pretty sure the\n symbol is there. It must be a problem with the FreeBSD 3.1\n dynamic loader.\n\n>\n> Env:\n> FreeBSD www2.sptimes.ru 3.1-RELEASE FreeBSD 3.1-RELEASE #4: Tue Mar 23\n> 13:18:41\n> MSK 1999 [email protected]:/usr/src/sys/compile/SPTIMES i386\n>\n> Postgres 6.4.2 release\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 10 May 1999 17:11:31 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problem installing plpgsql"
},
{
"msg_contents": "On 10-May-99 Jan Wieck wrote:\n> Dmitry Samersoff wrote:\n> \n>>\n>> One of my customer unable to install plpgsql\n>> with next message:\n>>\n>> www2(dms)~>psql -d www -c \"select tst(); \"\n>> ERROR: Load of file /usr/local/pgsql/lib/plpgsql.so failed: dlopen\n>> '/usr/local/pgsql/lib/plpgsql.so' failed. (/usr/local/pgsql/lib/plpgsql.so:\n>> Undefined symbol \"SPI_tuptable\")\n>> www2(dms)~>\n>>\n>> This problem exists onlty on FreeBSD 3.1\n\n> Since this symbol is referenced from another place in the\n> backend's static code (in ruleutils.c) I'm pretty sure the\n> symbol is there. It must be a problem with the FreeBSD 3.1\n> dynamic loader.\n\nOK, I see.\n\nDoes any body have the same problem and is there known solution?\n\n---\nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n* there will come soft rains ...\n",
"msg_date": "Tue, 11 May 1999 10:00:09 +0400 (MSD)",
"msg_from": "Dmitry Samersoff <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Problem installing plpgsql"
}
]
[
{
"msg_contents": "\tYour e-mail did not arrive at its intended destination. You need to\nsend it to Michael J. Davis, not Michael Davis.\n\n\n\n\tFrom:\tThe Hermit Hacker <scrappy @ hub.org> on 05/04/99 07:24 PM\n\tTo:\tOleg Bartunov <oleg @ sai.msu.su>@SMTP@EXCHANGE\n\tcc:\tDaniele Orlandi <daniele @ orlandi.com>@SMTP@EXCHANGE,\npgsql-hackers @ postgreSQL.org@SMTP@EXCHANGE \n\tSubject:\tRe: [HACKERS] Mirror mess... (urgent)\n\n\tOn Tue, 4 May 1999, Oleg Bartunov wrote:\n\n\t> I also noticed that ! I think mirror site doesn't need mhonarc\narchive\n\t> because search interface to it works only on master site. It's not\n\n\t> very difficult to setup rsync.\n\n\tI'm 50-50 on this right now...how many ppl look at the archives\nwithout\n\tthe search engine? If nobody, then having them on the mirror site\nis, in\n\tfact useless...if ppl do puruse the archives without using the\nsearch\n\tengine, then it is useful to have them on the mirror site...\n\n\n\n\t > \tOleg\n\t> \n\t> On Tue, 4 May 1999, Daniele Orlandi wrote:\n\t> \n\t> > Date: Tue, 04 May 1999 20:14:24 +0200\n\t> > From: Daniele Orlandi <[email protected]>\n\t> > To: [email protected]\n\t> > Subject: [HACKERS] Mirror mess... (urgent)\n\t> > \n\t> > \n\t> > I don't know what's happening, but there's an unbelivable mess\nin the mirroring\n\t> > system. For the last two weeks, I'm receiving tens of MB of\nduplicate messages\n\t> > in the mhonarc archive. The mirror has reached 1 GB and is still\ngrowing. 
I\n\t> > think the problem is to be searched in mhonarc...\n\t> > \n\t> > /html/mhonarc/pgsql-bugs has reached 89 MB in size and I don't\nbelive postgres\n\t> > has so many bugs :^)\n\t> > \n\t> > I did a ls -lS in /html/mhonarc/pgsql-bugs/1998-11, and this is\na list of all\n\t> > the duplicates of the biggest message:\n\t> > \n\t> > -rw-r--r-- 1 mirror mirror 157579 May 4 11:02\nmsg00027.html\n\t> > -rw-r--r-- 1 mirror mirror 157579 May 4 11:02\nmsg00399.html\n\t> > -rw-r--r-- 1 mirror mirror 157501 Apr 24 11:01\nmsg00058.html\n\t> > -rw-r--r-- 1 mirror mirror 157501 Apr 25 11:01\nmsg00089.html\n\t> > -rw-r--r-- 1 mirror mirror 157501 Apr 26 11:01\nmsg00120.html\n\t> > -rw-r--r-- 1 mirror mirror 157501 Apr 27 11:01\nmsg00151.html\n\t> > -rw-r--r-- 1 mirror mirror 157501 Apr 28 11:01\nmsg00182.html\n\t> > -rw-r--r-- 1 mirror mirror 157501 Apr 29 11:01\nmsg00213.html\n\t> > -rw-r--r-- 1 mirror mirror 157501 Apr 30 11:01\nmsg00244.html\n\t> > -rw-r--r-- 1 mirror mirror 157501 May 1 11:01\nmsg00275.html\n\t> > -rw-r--r-- 1 mirror mirror 157501 May 2 11:01\nmsg00306.html\n\t> > -rw-r--r-- 1 mirror mirror 157501 May 3 11:01\nmsg00337.html\n\t> > -rw-r--r-- 1 mirror mirror 157501 May 4 11:02\nmsg00368.html\n\t> > \n\t> > Please, do something ASAP, the partition I reserved for postgres\nmirror is not\n\t> > very big and it will be filled soon.... 
not to talk about the\n(very) expensive\n\t> > italian bandwidth I'm wasting :^)\n\t> > \n\t> > Thanks!\n\t> > \n\t> > -- \n\t> > Daniele\n\t> > \n\t> >\n----------------------------------------------------------------------------\n---\n\t> > Daniele Orlandi - Utility Line Italia - http://www.orlandi.com\n\t> > Via Mezzera 29/A - 20030 - Seveso (MI) - Italy\n\t> >\n----------------------------------------------------------------------------\n---\n\t> > \n\t> \n\t> _____________________________________________________________\n\t> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n\t> Sternberg Astronomical Institute, Moscow University (Russia)\n\t> Internet: [email protected], http://www.sai.msu.su/~megera/\n\t> phone: +007(095)939-16-83, +007(095)939-23-83\n\t> \n\t> \n\n\tMarc G. Fournier ICQ#7615664 IRC\nNick: Scrappy\n\tSystems Administrator @ hub.org \n\tprimary: [email protected] secondary:\nscrappy@{freebsd|postgresql}.org \n\n\n\n\n",
"msg_date": "Wed, 5 May 1999 09:12:55 -0500 ",
"msg_from": "Michael Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Mirror mess... (urgent)"
}
]
[
{
"msg_contents": "\tYour e-mail did not arrive at its intended destination. You need to\nsend it to Michael J. Davis, not Michael Davis.\n\n\n\tFrom:\tVadim Mikheev <vadim @ krs.ru> on 05/04/99 09:18 PM\n\tTo:\tTom Lane <tgl @ sss.pgh.pa.us>@SMTP@EXCHANGE\n\tcc:\tpgsql-hackers @ postgreSQL.org@SMTP@EXCHANGE \n\tSubject:\tRe: [HACKERS] Advice wanted on backend memory\nmanagement\n\n\tTom Lane wrote:\n\t> \n\t> I want to change hashjoin's use of a fixed-size overflow area for\ntuples\n\t> that don't fit into the hashbucket they ought to go in. Since\nit's\n\t> always possible for an improbably large number of tuples to hash\ninto the\n\t> same hashbucket, the overflow area itself can overflow; without\nthe\n\t> ability to recover from that, hashjoin is inherently unreliable.\n\t> So I think this is an important thing to fix.\n\t> \n\t> To do this, I need to be able to allocate chunks of space that I\nwill\n\t> later want to give back all at once (at the end of a hash pass).\n\t> Seems to me like a job for palloc and a special memory context ---\n\t> but I see no way in mcxt.h to create a new memory context. How do\n\t> I do that? Also, I'd want the new context to be a \"sub-context\"\nof\n\n\tNo way :(\n\tStartPortalAllocMode could help but - portalmem.c:\n\t/*\n\t * StartPortalAllocMode \n\t * Starts a new block of portal heap allocation using mode and\nlimit;\n\t * the current block is disabled until EndPortalAllocMode is\ncalled.\n\t\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\tI'm unhappy with this allocation block stacking for quite long time\n:(\n\n\tTry to pfree chunks \"by hand\".\n\n\tVadim\n\n\n\n",
"msg_date": "Wed, 5 May 1999 09:18:27 -0500 ",
"msg_from": "Michael Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Advice wanted on backend memory management"
}
]
[
{
"msg_contents": "\tYour e-mail did not arrive at its intended destination. You need to\nsend it to Michael J. Davis, not Michael Davis.\n\n\n\tFrom:\tHiroshi Inoue <Inoue @ tpf.co.jp> on 05/04/99 10:17 PM\n\tTo:\tVadim Mikheev <vadim @ krs.ru>@SMTP@EXCHANGE, PostgreSQL\nDevelopers List <hackers @ postgreSQL.org>@SMTP@EXCHANGE\n\tcc:\t \n\tSubject:\tRE: [HACKERS] I'm planning some changes in lmgr...\n\n\t> -----Original Message-----\n\t> From: [email protected]\n\t> [mailto:[email protected]]On Behalf Of Vadim\nMikheev\n\t> Sent: Sunday, May 02, 1999 12:23 AM\n\t> To: PostgreSQL Developers List\n\t> Subject: [HACKERS] I'm planning some changes in lmgr...\n\t> \n\t> \n\t> but have no time to do them today and tomorrow -:(.\n\t> \n\t> 1. Add int waitMask to LOCK to speedup checking in\nLockResolveConflicts:\n\t> if lock requested conflicts with lock requested by any waiter \n\t> (and we haven't any lock on this object) -> sleep\n\t> \n\t> 2. Add int holdLock (or use prio) to PROC to let other know\n\t> what locks we hold on object (described by PROC->waitLock)\n\t> while we're waiting for lock of PROC->token type on\n\t> this object.\n\t> \n\t> I assume that holdLock & token will let us properly \n\t> and efficiently order waiters in LOCK->waitProcs queue\n\t> (if we don't hold any lock on object -> go after\n\t> all waiters with holdLock > 0, etc etc etc).\n\t> \n\t> Comments?\n\t>\n\n\tFirst, I agree to check conflicts for ( total - own ) hodling lock\nof \n\tthe target object if transaction has already hold some lock on the \n\tobject and when some conflicts are detected,the transaction \n\tshould be queued with higher priority than transactions which hold \n\tno lock on the object.\n\n\tSecondly, if a transaction holds no lock on the object, we should \n\tcheck conflicts for ( holding + waiting ) lock of the object.\n\n\tAnd I have a question as to the priority of queueing.\n\tDoes the current definition of priority mean the urgency \n\tof lock ?\n\n\tIt may 
prevent lock escalation in some cases.\n\tBut is it effective to avoid deadlocks ? \n\tIt's difficult for me to find such a case.\n\n\tThanks.\n\n\tHiroshi Inoue\n\[email protected]\n\n\n\n\n",
"msg_date": "Wed, 5 May 1999 09:19:09 -0500 ",
"msg_from": "Michael Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] I'm planning some changes in lmgr..."
},
{
"msg_contents": "\nYa know, its almost tempting to send /kernel to ppl that spam lists like\nthis :(\n\n\nOn Wed, 5 May 1999, Michael Davis wrote:\n\n> \tYour e-mail did not arrive at its intended destination. You need to\n> send it to Michael J. Davis, not Michael Davis.\n> \n> \n> \tFrom:\tHiroshi Inoue <Inoue @ tpf.co.jp> on 05/04/99 10:17 PM\n> \tTo:\tVadim Mikheev <vadim @ krs.ru>@SMTP@EXCHANGE, PostgreSQL\n> Developers List <hackers @ postgreSQL.org>@SMTP@EXCHANGE\n> \tcc:\t \n> \tSubject:\tRE: [HACKERS] I'm planning some changes in lmgr...\n> \n> \t> -----Original Message-----\n> \t> From: [email protected]\n> \t> [mailto:[email protected]]On Behalf Of Vadim\n> Mikheev\n> \t> Sent: Sunday, May 02, 1999 12:23 AM\n> \t> To: PostgreSQL Developers List\n> \t> Subject: [HACKERS] I'm planning some changes in lmgr...\n> \t> \n> \t> \n> \t> but have no time to do them today and tomorrow -:(.\n> \t> \n> \t> 1. Add int waitMask to LOCK to speedup checking in\n> LockResolveConflicts:\n> \t> if lock requested conflicts with lock requested by any waiter \n> \t> (and we haven't any lock on this object) -> sleep\n> \t> \n> \t> 2. 
Add int holdLock (or use prio) to PROC to let other know\n> \t> what locks we hold on object (described by PROC->waitLock)\n> \t> while we're waiting for lock of PROC->token type on\n> \t> this object.\n> \t> \n> \t> I assume that holdLock & token will let us properly \n> \t> and efficiently order waiters in LOCK->waitProcs queue\n> \t> (if we don't hold any lock on object -> go after\n> \t> all waiters with holdLock > 0, etc etc etc).\n> \t> \n> \t> Comments?\n> \t>\n> \n> \tFirst, I agree to check conflicts for ( total - own ) hodling lock\n> of \n> \tthe target object if transaction has already hold some lock on the \n> \tobject and when some conflicts are detected,the transaction \n> \tshould be queued with higher priority than transactions which hold \n> \tno lock on the object.\n> \n> \tSecondly, if a transaction holds no lock on the object, we should \n> \tcheck conflicts for ( holding + waiting ) lock of the object.\n> \n> \tAnd I have a question as to the priority of queueing.\n> \tDoes the current definition of priority mean the urgency \n> \tof lock ?\n> \n> \tIt may prevent lock escalation in some cases.\n> \tBut is it effective to avoid deadlocks ? \n> \tIt's difficult for me to find such a case.\n> \n> \tThanks.\n> \n> \tHiroshi Inoue\n> \[email protected]\n> \n> \n> \n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 6 May 1999 11:40:22 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] I'm planning some changes in lmgr..."
}
]
[
{
"msg_contents": "\tYour e-mail did not arrive at its intended destination. You need to\nsend it to Michael J. Davis, not Michael Davis\n\n\n\tFrom:\tOleg Bartunov <oleg @ sai.msu.su> on 05/04/99 11:56 PM\n\tTo:\tBruce Momjian <maillist @ candle.pha.pa.us>@SMTP@EXCHANGE\n\tcc:\tTatsuo Ishii <t-ishii @ sra.co.jp>@SMTP@EXCHANGE, hackers @\npostgreSQL.org@SMTP@EXCHANGE \n\tSubject:\tRe: [HACKERS] posmaster failed under high load\n\n\tOn Tue, 4 May 1999, Bruce Momjian wrote:\n\n\t> Date: Tue, 4 May 1999 21:35:56 -0400 (EDT)\n\t> From: Bruce Momjian <[email protected]>\n\t> To: Oleg Bartunov <[email protected]>\n\t> Cc: Tatsuo Ishii <[email protected]>, [email protected]\n\t> Subject: Re: [HACKERS] posmaster failed under high load\n\t> \n\t> > My machine was very-very load during this test - I saw peak\n\t> > load about 65, a lot of swapping but test completes and system\n\t> > after 20 minutes of swapping remains usable. I still saw many\n\t> > postmasters (not postgres) processes running but after about \n\t> > 30-40 minutes they gone. Actually pstree -a now shows\n\t> > \n\t> > |-postmaster -i -B 1024 -S -D/usr/local/pgsql/data/ -o -Fe\n\t> > | |-(postmaster)\n\t> > | `-postmaster \n\t> \n\t> ps should show our process listing display change. They are\npostgres\n\t> processes, but without the exec() call we used to do, it shows\nthis way\n\t> only on OS's that don't support ps arg display changes from inside\nthe\n\t> process.\n\n\tNo, it does on Linux.\n\t 5159 ? S 0:00 postmaster -i -B 1024 -S\n-D/usr/local/pgsql/data/ -o -Fe \n\t 5168 ? S 0:00 /usr/local/pgsql/bin/postgres localhost httpd\napod idle \n\t 5169 ? S 0:00 /usr/local/pgsql/bin/postgres localhost httpd\napod idle \n\t 5170 ? S 0:00 /usr/local/pgsql/bin/postgres localhost httpd\napod idle \n\t 5171 ? 
S 0:00 /usr/local/pgsql/bin/postgres localhost httpd\napod idle \n\n\tThat's why I noticed 10 or more (postmaster) processes, which\neventually\n\tgone after 30-40 minutes.\n\n\t\tOleg\n\n\t> \n\t> \n\t> \n\t> -- \n\t> Bruce Momjian |\nhttp://www.op.net/~candle\n\t> [email protected] | (610) 853-3000\n\t> + If your life is a hard drive, | 830 Blythe Avenue\n\t> + Christ can be your backup. | Drexel Hill,\nPennsylvania 19026\n\t> \n\n\t_____________________________________________________________\n\tOleg Bartunov, sci.researcher, hostmaster of AstroNet,\n\tSternberg Astronomical Institute, Moscow University (Russia)\n\tInternet: [email protected], http://www.sai.msu.su/~megera/\n\tphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n\n\n",
"msg_date": "Wed, 5 May 1999 09:19:38 -0500 ",
"msg_from": "Michael Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] posmaster failed under high load"
}
]
[
{
"msg_contents": "\tYour e-mail did not arrive at its intended destination. You need to\nsend it to Michael J. Davis, not Michael Davis\n\n\n\tFrom:\tDirk Lutzebaeck <lutzeb @ aeccom.com> on 05/05/99 03:30 AM\n\tTo:\tTom Lane <tgl @ sss.pgh.pa.us>@SMTP@EXCHANGE\n\tcc:\thackers @ postgreSQL.org@SMTP@EXCHANGE \n\tSubject:\tRe: [HACKERS] major flaw in 6.5beta1???\n(UPDATE/INSERT waiting) \n\n\tTom Lane writes:\n\t > Dirk Lutzebaeck <[email protected]> writes:\n\t > > cs=> select envelope from recipient where envelope=510349;\n\t > > [ returns a tuple that obviously fails the WHERE condition ]\n\t > \n\t > Yipes. Do you have an index on the envelope field, and if so is\n\t > it being used for this query? (Use EXPLAIN to check.) My guess\n\t > is that the index is corrupted. Dropping and recreating the\nindex\n\t > would probably set things right.\n\n\tYes, thanks, recreating the index cures the problem.\n\n\t > Of course the real issue is how it got corrupted. Hiroshi found\n\t > an important bug in btree a few days ago, and there is a\ndiscussion\n\t > going on right now about lock-manager bugs that might possibly\nallow\n\t > multiple backends to corrupt data that they're concurrently\nupdating.\n\t > But I have no idea if either of those explains your problem.\n\n\tDoes this mean they can deadlock themselves? Is this also true for\n\t6.4.2? I probably switch back then.\n\n\tThanks, Dirk\n\n\n\n",
"msg_date": "Wed, 5 May 1999 09:20:40 -0500 ",
"msg_from": "Michael Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] major flaw in 6.5beta1??? (UPDATE/INSERT waiting)"
}
]
[
{
"msg_contents": "\tYour e-mail did not arrive at its intended destination. You need to\nsend it to Michael J. Davis, not Michael Davis\n\n\n\tFrom:\tDmitry Samersoff <dms @ wplus.net> on 05/05/99 08:50 AM\n\tTo:\tpgsql-hackers @ postgreSQL.org@SMTP@EXCHANGE\n\tcc:\t \n\tSubject:\t[HACKERS] Small improvement\n\n\twww2(root)/usr/local/pgsql/src/src/backend/port/dynloader>\n\tdiff freebsd.c freebsd.c.orig \n\t71c71\n\t< sprintf(error_message, \"dlopen '%s' failed. (%s)\", file,\ndlerror() );\n\t---\n\t> sprintf(error_message, \"dlopen (%s) failed\", file);\n\n\n\n\t---\n\tDmitry Samersoff, [email protected], ICQ:3161705\n\thttp://devnull.wplus.net\n\t* there will come soft rains ...\n\n\n\n",
"msg_date": "Wed, 5 May 1999 09:21:00 -0500 ",
"msg_from": "Michael Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Small improvement"
}
]
[
{
"msg_contents": "Hi,\n\nI hate to resort to posting here but I got no responses\nin the other groups. I am following up on an earlier\nresponse on promotion of float4 to float8 in the WHERE\nclause.\n\nTo get around this, Tom Lockhart suggested that I make a \nfunction index on float8, but this is what happens:\n\nfinal99=> create index mx on psc using btree (float8(glat) float8_ops);\nERROR: internal error: untrusted function not supported.\n\n[\"psc\" is my table and \"glat\" is a float4].\n\nAny ideas how to do this? I have successfully made an index\non an externally linked function . . . \n\nThanks,\n\n--Martin\n\n===========================================================================\n\nMartin Weinberg Phone: (413) 545-3821\nDept. of Physics and Astronomy FAX: (413) 545-2117/0648\n530 Graduate Research Tower\nUniversity of Massachusetts\nAmherst, MA 01003-4525\n\n\n",
"msg_date": "Wed, 05 May 1999 11:22:17 -0300",
"msg_from": "Martin Weinberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Problem with function indexing"
},
{
"msg_contents": "Martin Weinberg <[email protected]> writes:\n> final99=> create index mx on psc using btree (float8(glat) float8_ops);\n> ERROR: internal error: untrusted function not supported.\n\nThe trouble here is that in 6.4.*, float4-to-float8 is an SQL alias\nfunction, and you can't use an SQL function as the guts of an index.\n(I know, the error message is misleading.)\n\nLooking in pg_proc shows that the underlying built-in function is named\n\"ftod\":\n\nplay=> select proname,prosrc from pg_proc where proname = 'float8' and\nplay-> pg_proc.proargtypes[0] = 700;\nproname|prosrc\n-------+---------------\nfloat8 |select ftod($1)\n(1 row)\n\nso if you say\n\tcreate index mx on psc using btree (ftod(glat) float8_ops);\nit should work.\n\n(BTW, in 6.5 this little fine point goes away, since all the aliases of\na built-in function are equally built-in.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 05 May 1999 19:32:25 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Problem with function indexing "
}
] |
[
{
"msg_contents": "I seem to have several email addresses subscribed to this list. I have\ntried 4 times to have these extra email addresses removed without success.\nWould the owner of this list please unsubscribe or remove the following\nemail addresses from this list. \n\[email protected]\[email protected]\[email protected]\[email protected]\[email protected]\[email protected]\n\nThis is a real problem form because one of these email addresses belongs to\nan other individual in my company and he gets lots of unwanted emails. Your\nimmediate attention is greatly appreciated.\n\nThanks, Michael\n\n",
"msg_date": "Wed, 5 May 1999 11:25:17 -0500 ",
"msg_from": "Michael J Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "unsubscribe problems"
}
] |
[
{
"msg_contents": "This is a suggestion that came back from the java-linux mailing list.\n\nPeter\n\n-- \n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n---------- Forwarded message ----------\nDate: Tue, 04 May 1999 00:21:57 -0500\nFrom: Chris Abbey <[email protected]>\nTo: Peter T Mount <[email protected]>\nCc: Java Linux Mailing List <[email protected]>\nSubject: Re: SIGBUS in AllocSetAlloc & jdbc (fwd)\n\nThis isn't a fix, but it'll get you around the problem for\nnow... I kid you not, it works with some of the code I run\nhere where people did the same switch logic around rmi. -=Chris\n\njava -Djava.version=1.1.7 your.class.here\n\n o o\n\\___/\n\n\nAt 12:05 PM 5/3/99 +0100, Peter T Mount wrote:\n>\n>[ I'm cc'ing this to java-linux as this seems to be a problem with the\n>Linux PPC port - peter ]\n>\n>On Sun, 2 May 1999, Tatsuo Ishii wrote:\n>\n>[snip]\n>\n>> This morning I started to look into this. First, JDBC driver coming\n>> with 6.5b did not compile. The reason was my JDK (JDK 1.1.7 v1 on\n>> LinuxPPC) returns version string as \"root:10/14/98-13:50\" and\n>> makeVersion expected it started with \"1.1\". This was easy to fix. So I\n>> went on and tried the ImageViewer sample. It gave me SQL an exception:\n>\n>[snip]\n>\n>> P.S. Peter, do you have any suggestion to make JDBC driver under JDK\n>> 1.1.7?\n>\n>Ah, the first problem I've seen with the JVM version detection. the\n>postgresql.Driver class does the same thing as makeVersion, and checks the\n>version string, and when it sees that it starts with 1.1 it sets the base\n>package to postgresql.j1 otherwise it sets it to postgresql.j2.\n>\n>The exceptions you are seeing is the JVM complaining it cannot find the\n>JDK1.2 classes.\n>\n>As how to fix this, this is tricky. It seems that the version string isn't\n>that helpful. 
The JDK documentation says it returns the version of the\n>JVM, but there seems to be no set format for this. ie, with your version,\n>it seems to give a date and time that VM was built.\n>\n>Java-Linux: Is there a way to ensure that the version string is similar to\n>the ones that Sun produces? At least having the JVM version first, then\n>reform after that?\n>\n>The PostgreSQL JDBC driver is developed and tested under Linux (intel)\n>using 1.1.6 and 1.2b1 JVM's (both blackdown). I use Sun's Win32 1.2 JVM\n>for testing. The current driver works fine on all three JVM's, so it seems\n>to be the PPC port that has this problem.\n>\n>Peter\n>\n>-- \n> Peter T Mount [email protected]\n> Main Homepage: http://www.retep.org.uk\n>PostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n> Java PDF Generator: http://www.retep.org.uk/pdf\n>\n>\n>\n>----------------------------------------------------------------------\n>To UNSUBSCRIBE, email to [email protected]\n>with a subject of \"unsubscribe\". Trouble? Contact [email protected]\n>\n>\n\n!NEW!-=> <*> cabbey at home dot net http://members.home.net/cabbey/ <*>\n\"What can Microsoft do? They certainly can't program around us.\" - Linus\n\n-----BEGIN GEEK CODE BLOCK----- Version:3.12 http://www.geekcode.com\nGCS$/IT/PA$ d(-) s++:+ a-- C+++$ UL++++ UA++$ P++ L++ E- W++ N+ o? K? !P\nw---(+)$ O- M-- V-- Y+ PGP+ t--- 5++ X+ R tv b+ DI+++ D G e++ h(+) r@ y?\n------END GEEK CODE BLOCK------\n\n",
"msg_date": "Wed, 5 May 1999 19:28:36 +0100 (GMT)",
"msg_from": "Peter T Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SIGBUS in AllocSetAlloc & jdbc (fwd)"
}
] |
[
{
"msg_contents": "Try:\n\n\t#include <pgsql/postgres.h>\n\n\tint plus2(int a,int b)\n\t{\n\treturn(a+b);\n\t}\n\n\n\tcc -c plus2.c /* I get the object file *.o */\ngcc -shared -o plus2.so plus2.o /* you need a library file */\n\nfrom psql: \n\n\tcreate function plus2(int4,int4) returns int4 as\n\t'/var/lib/pgsql/plus2.so' language 'c'; -- assuming /var/lib/pgsql\nis the location of plus2.so\n\n\t-----Original Message-----\n\tFrom:\tCarlos Peralta Ramirez [SMTP:[email protected]]\n\tSent:\tWednesday, May 05, 1999 2:44 AM\n\tTo:\tpgsql-novice; pgsql-hackers; pgsql-sql; pgsql-general\n\tSubject:\t[GENERAL] which guru know this ??\n\n\n\n\tPrimary, Sorry for my english !!!\n\n\n\tHi,\n\n\tI'm developing a several test database, for study. But I can't\ncreate\n\tany function, every time ERROR appears.\n\tI write a C code in a plus2.c file\n\n\t#include <pgsql/postgres.h>\n\n\tint4 plus2(int4 a,int4 b)\n\t{\n\treturn(a+b);\n\t}\n\n\tmain() /* For getting *.out*/\n\t{\n\t}\n\n\t, then I compile it (I made this of two ways)\n\n\tcc -c plus2.c /* I get the object file *.o */\n\n\tor\n\n\tcc plus2.c plus2.out /*I get the *.out file */\n\n\tWhen I create the function on postgresql, occurs this\n\n\tdb=> create function plus2(int4,int4) returns int4 as\n\t'/var/lib/pgsql/plus2.o' language 'c';\n\tCREATE\n\n\tor\n\n\tdb=> create function plus2(int4,int4) returns int4 as\n\t'/var/lib/pgsql/plus2.out' language 'c';\n\tCREATE\n\n\tWherever, when I call the function plus2 from Sql sentence ...\n\n\tdb=> select plus2(6,6);\n\tERROR: Load of file /var/lib/pgsql/plus2.o failed:\n\t(�@(�@/pgsql/plus2.o: ELF file's phentsize not the expected size\n\n\tor, with plus2.out\n\n\tdb=> select plus2(6,6);\n\tPQexec() -- Request was sent to backend, but backend closed the\nchannel\n\tbefore responding.\n\tThis probably means the backend terminated abnormally before or\nwhile\n\tprocessing the request.\n\n\tIf anybody can help me, thank !!!\n\n\tPD: I don't disturb you, if I don't need it 
really.\n\n\n\t\n",
"msg_date": "Wed, 5 May 1999 14:52:25 -0500 ",
"msg_from": "Michael J Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [GENERAL] which guru know this ??"
}
] |
[
{
"msg_contents": "Hi,\n\nwe are using Postgresql 6.4.2 on FreeBSD 2.2.8 and have a lot of\nproblems with suddenly dying backend processes. We have already\nchanged kernel parameters to get more shared memory and start the\npostmaster with -B 1024. The postmaster's virtual memory limit is\ncurrently 300MByte and our largest tables contain about 12000\nrecords. vacuum runs nightly and reports no errors. The problem\narises mostly after a couple of INSERT or SELECT INTO statements,\nbut also a 'COPY mytable FROM stdin' fails when I try to load\na file with 13000 datasets. It works when I split it up into\nseveral files of not more than 5000 datasets each and load them\nseparately.\n\nTurning on debugging for the postmaster as well as the backends\nis not very helpful for us because the server seems to behave\ndifferently. For Example, with debugging turned on we can't\ncreate a certain view, which works without the debugging options.\nAnyway, debug level 3 produced the following message:\n\npostmaster: reaping dead processes...\npostmaster: CleanupProc: pid 15608 exited with status 139\npostmaster: CleanupProc: reinitializing shared memory and semaphores\nshmem_exit(0) [#0]\nbinding ShmemCreate(key=52e389, size=8852184)\n\nDoes anyone know what 'status 139' means or where we can find it\nin the source?\nAny further ideas how we can track down the cause of our dying\nbackends?\n\nThanks in advance\n\nMirko\n\n\n\n",
"msg_date": "Wed, 5 May 1999 22:11:01 +0200 (MET DST)",
"msg_from": "postgres admin <[email protected]>",
"msg_from_op": true,
"msg_subject": "dying backend processes"
}
] |
[
{
"msg_contents": "> Date: Tue, 4 May 1999 11:33:34 +0200 (CEST)\n> From: Dirk Lutzebaeck <[email protected]>\n> Subject: INSERT/UPDATE waiting\n> \n> Hello,\n> \n> somehow the backend is hanging on my system (6.5beta1, Linux 2.2.6):\n> \n> postgres 29957 0.0 1.5 4124 2048 ? S 18:23 0:00 /usr/local/pgsql/bin/postgres localhost lutzeb cs UPDATE waiting \n> postgres 29980 0.0 1.6 4124 2064 ? S 18:25 0:00 /usr/local/pgsql/bin/postgres localhost lutzeb cs UPDATE waiting \n> postgres 30005 0.0 1.6 4124 2088 ? S 18:27 0:00 /usr/local/pgsql/bin/postgres localhost lutzeb cs UPDATE waiting \n> postgres 30012 0.0 2.1 4532 2696 ? S 18:28 0:00 /usr/local/pgsql/bin/postgres localhost lutzeb cs INSERT waiting \n> postgres 30030 0.0 3.0 5780 3916 ? S 18:28 0:00 /usr/local/pgsql/bin/postgres localhost lutzeb cs idle \n> \n> [about 34 processes]\n> \n> What is happening here? Can't find anything in the documentation.\n\nHi everyone,\n\nWe just deployed a large system a few weeks ago which involves using\nPostgreSQL 6.4.2 and CGI based interfaces, and we usually have around\n10-20 connections always running, the system is very busy during 9-5\nhours. 
The machine is running FreeBSD 2.2.7 and has 256 mb of RAM,\nlots of file descriptors.\n\nWe are experiencing exactly the same problem as above - during the day,\nall of a sudden Postgres will completely jam up, with all processes in\none of the following states: (from ps -axwwwwww)\n\nSELECT waiting\nDELETE waiting\nINSERT waiting\nidle waiting\nidle\n\nSometimes the ps output will also return postgres backends with garbage\nhigh-ascii characters in their name, got no idea why here either....\n\nOriginally, we thought it was a bad query somewhere with lock statements\nin the wrong order, causing the deadlock, and so we reduced one of the\nconstants compiled into the backend which controls how often deadlocks\nwere checked for and set it to something like ten seconds.\n\nI then forced the database to go into a real deadlock by doing:\n\n1: BEGIN\n2: BEGIN\n1: LOCK table a\n2: LOCK table b\n1: LOCK table b\n2: LOCK table a\n\nTen seconds later, one is aborted due to the deadlock checking - this is\ngreat, as everything clears up and continues no problems.\n\nHowever, during operation, every so often for no apparent reason, all the\nbackends doing work all jam up with \"waiting\" in their name, otherwise\nthey are idle - and nothing happens. I let it sit for about 5 minutes one\ntime, and nothing happened. I had to kill everything so the staff could\nactually use the system.\n\nFor a while, we went hunting through our code looking for improperly\nordered lock statements and things like that. 
We found one which was fixed\nbut the problem still happens - there may be more of these in the code,\nthere probably are in fact, but I'm under the impression the deadlock\nchecking is supposed to get around that and kill backends nicely to\nresolve the conflict?\n\nAlso, one other thing we discovered was making the following deadlock:\n\nS1: BEGIN\nS1: LOCK table a\nS2: SELECT * FROM a, b where a.id = b.id\nS1: LOCK table b\n\nIf we did explain on the SELECT, and it chose to scan A first, it would work\nbut if we used an index or rearranged the select statement, and B was scanned first\nwe would get a deadlock, since the select couldn't complete. Now I would have\nhoped that locks for the join would be acquired either all at once, or none\nat all. I don't want to have to wrap lock statements around the select because\nall I want to do is read some data, I don't want to make updates!\n\n\n\n\nAnyways, back to the real problem, I enabled a constant in the code and\nwhenever the deadlock checking occurs during this lockup problem, and it\nprints the following out to the log file: \n\nDEBUG: DumpAllLocks: xidLook->tag.lock = NULL\nDEBUG: DumpAllLocks: xidLook->tag.lock = NULL\nDEBUG: DumpAllLocks: xidLook->tag.lock = NULL\n\nNot sure what this means - I'd really like it to show what kind of locks\nthe postgres processes are waiting for, as it could help me resolve the\nproblem. Is there anything else I can compile in to do this? I notice\nthere are quite a few symbols you can define for locks.\n\n\nSo right now I am at a loss on what to do - whenever everything jams up, I\ndo a ps -axwwww and kill off all postgres processes which are idle\nwaiting, then SELECT waiting, and sometimes it clears up. 
Failing that\nI then kill the ones making changes like delete, insert, update - although\nI hate doing that because I found that when I went around ruthlessly\nkilling backends to resolve conflicts, tables got corrupted and we\nexperienced problems like BTP_CHAIN, and vacuums would start failing,\nrequiring us to dump the tables and reload, which was pretty bad :(\n\nI've had other problems as well, but I'll save them for another email.\n\nSo if anyone can offer any advice, I'd be eternally grateful! Right now\nI'm getting lots of flak from people, saying I should have used Oracle, or\nmSQL, etc, but Postgres works great and allows me to embed code into the\nbackend, and all kinds of other cool features. I really am looking forward\nto 6.5 as the MVCC stuff sounds great, but right now I need to get this\nworking reliably until then.\n\nthanks,\nWayne\n\n------------------------------------------------------------------------------\nWayne Piekarski Tel: (08) 8221 5221\nResearch & Development Manager Fax: (08) 8221 5220\nSE Network Access Pty Ltd Mob: 0407 395 889\n222 Grote Street Email: [email protected]\nAdelaide SA 5000 WWW: http://www.senet.com.au\n\n",
"msg_date": "Thu, 6 May 1999 13:01:36 +0930 (CST)",
"msg_from": "Wayne Piekarski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: INSERT/UPDATE waiting (another example)"
},
{
"msg_contents": "Wayne Piekarski <[email protected]> writes:\n> We are experiencing exactly the same problem as above - during the day,\n> all of a sudden Postgres will completely jam up, with all processing in\n> one of the following states: (from ps -axwwwwww)\n\nIt seems possible that the hashtable bugs I fixed a couple months ago\nare rising up to bite you. (Basically, the shared hashtables that\ncontain things like locks and buffers would go nuts if there got to be\nmore than 256 entries ... and it sure sounds like your installation is\nbig enough that it could have, eg, more than 256 active locks when\nunder load.) One quick thing you might try to test this is to reduce\nthe postmaster's -B setting to less than 256 (if you have it set that\nhigh) and see if stability improves.\n\nThese bugs are fixed in 6.5-beta1, but it has enough other bugs that\nI don't think Wayne would be wise to try moving to 6.5 just yet.\nI have a patch for 6.4.2 that I believe also fixes the problems, but\nit hasn't gotten quite as much testing as I would like so I haven't\ncommitted it into the REL6_4 tree. (There's not going to be a 6.4.3\nrelease, according to current plans, so it's hardly worth doing anyway.)\n\nWhat I will do is send the patch to Wayne in a separate message, and\nalso cc: it to the PATCHES list --- anyone else who needs it can get it\nfrom there. Please let us know if this helps, Wayne.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 06 May 1999 10:04:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: INSERT/UPDATE waiting (another example) "
},
{
"msg_contents": "Tom Lane wrote:\n> Wayne Piekarski <[email protected]> writes:\n> > We are experiencing exactly the same problem as above - during the day,\n> > all of a sudden Postgres will completely jam up, with all processing in\n> > one of the following states: (from ps -axwwwwww)\n> \n> It seems possible that the hashtable bugs I fixed a couple months ago\n> are rising up to bite you. (Basically, the shared hashtables that\n> contain things like locks and buffers would go nuts if there got to be\n> more than 256 entries ... and it sure sounds like your installation is\n> big enough that it could have, eg, more than 256 active locks when\n> under load.) One quick thing you might try to test this is to reduce\n> the postmaster's -B setting to less than 256 (if you have it set that\n> high) and see if stability improves.\n\nCurrently, I start up postmaster with -B 192, which I guess puts it below\nthe value of 256 which causes problems. Apart from when I got past 256\nbuffers, does the patch fix anything else that might be causing problems?\n\nJust for everyones information, the system contains about 80 tables and\n129 indexes. There is about 700 mb of data sprayed over all the tables,\nalthough some have more rows than others. At any one time during the day,\nwe have about 8 to 10 active postgres connections, half of them are\nconnected to daemons which continuously sent updates and inserts into the\nsystem, the rest of them are very quick queries from CGI programs. The\nproblems we experience are always during the day, when the CGI programs\nare hammering the database - we don't ever have a problem at night when\nthe staff go home. \n\nThe whole thing runs 24 hours a day, 7 days a week. 
Most of the tables\nrarely get vacuumed (they have tens of thousands of rows and only inserts\nget done to them - the optimiser makes good choices for most of these) -\nhowever we have 5 tables which get vacuumed at midnight each day, we drop\nall the indexes, vacuum, then recreate. If we don't do the index thing,\nthe vacuum can take tens of minutes, which is not acceptable - the tables\ncontain about 20000 rows, each of which gets updated about 3 times during \nthe day. I sent an email a while back about vacuum performance, and this\nhack is the only way around it.\n\nIf any other programs try to query the four tables getting vacuumed then I\nget into real trouble. I wish I could do something like:\n\nBEGIN;\nLOCK TABLE x;\nDROP INDEX x_idx;\nVACUUM ANALYZE x;\nCREATE INDEX x_idx;\nEND;\n\nI've seen a #define which looked like it enabled this kind of thing, but\nI'm not sure if it is safe to use.\n\n\n> What I will do is send the patch to Wayne in a separate message, and\n> also cc: it to the PATCHES list --- anyone else who needs it can get it\n> from there. Please let us know if this helps, Wayne.\n\nDuring the week when I get a chance I will trial the patch and see if it\nhas any effect on the problems we are having. It is very weird and\nimpossible to reproduce on demand as it is related to the number of\nqueries and the load of the machine at the time.\n\nHopefully I will have some results for this by the end of the week.\n\n\n\nWhile I'm asking some questions here, I should tell you about some of the\nother weird things I've encountered, many of them are related to shared\nmemory and hash tables, which is making me think more and more that all\nthe problems I am having are somehow related.\n\nFor large tables, when I perform joins, I repeatedly get hash table out of\nmemory errors. 
So I have two tables, one called unix, with 20000 rows, and\nanother called services, with 80000 rows - I am producing a result which\ncontains about 20000 rows in it as well, so there is lots of data moving\naround.\n\nIn most cases, the problem occurs when the optimiser mistakenly choses to\nuse seq scan rather than index scan. To get around these problems, we\ninitially tried increasing the -B value to larger values (This was a long\ntime ago but we had problems, it may have been more than 256 which fits in\nwith what Tom Lane said). Every time we kept increasing the number of\nbuffers but it got to the point where I was annoyed that the optimiser was\nmaking bad decisions, and I was at a loss on what to do. So I then\ndiscovered the COST_INDEX and COST_HEAP variables, which I set to:\n\nset COST_INDEX = '0'; set COST_HEAP = '99999999';\n\nThe optimiser then used index scan for almost anything where possible, the\nexplain output looked really expensive, but the queries actually executed\nproperly even with small -B values. So this is what I do to make these big\nqueries work. There are a few cases where the above set statements\nactually cause hash table out of memory as well, so you set them back to\nthe defaults and then it usually works ok :)\n\nI know the above is a hack but I needed to get out of a jam and that was\nthe only way I could think of doing it. Are there any other join methods\nbesides hash join? 
I thought that lets say I have two tables, A and B,\nboth with a column called ID which is indexed, and i do a join on A.id and\nB.id it can use a more efficient means of joining using indexes rather\nthan reading both tables into memory and join there?\n\nHere are some explain statements for a big join:\n\n\nreactor=> explain select unix.username from unix where unix.snum =\nservices.snum\nNOTICE: QUERY PLAN:\n\nHash Join (cost=6191.62 size=43361 width=20)\n -> Seq Scan on services (cost=2204.91 size=43361 width=4)\n -> Hash (cost=0.00 size=0 width=0)\n -> Seq Scan on unix (cost=1212.26 size=20311 width=16)\n\n\n\nreactor=> set COST_INDEX = '0';\nSET VARIABLE\nreactor=> set COST_HEAP = '999999999';\nSET VARIABLE\nreactor=> explain select unix.username from unix where unix.snum =\nservices.snum;\nNOTICE: QUERY PLAN:\n\nHash Join (cost=30000000.00 size=43361 width=20)\n -> Index Scan using unix_snum_inv_index on unix\n(cost=20311001006080.00 size=20311 width=16)\n -> Hash (cost=0.00 size=0 width=0)\n -> Index Scan using services_snum_inv_index on services\n(cost=43360999964672.00 size=43361 width=4)\n \n\nI would assume that the above one which uses indexes would be a lot\nbetter, but why did the optimiser chose the seq scan - do the indexes help\nwhen doing joins and at the same time all rows are being returned back? I\nunderstand that the optimiser will choose not to use indexes if it feels\nthat it will return most of the rows anyway and so a seq scan is better.\n\n\n------\n\n\nOne other problem related to the shared memory buffers is every so often,\nthe postmaster will die with shared memory errors, and device full. This\nhappens very rarely (once every one to two weeks) but it happens, and I\nfigured that it might be related to the number of buffers I've started up\nwith. 
Note that this problem is not varied by changing the -B value, so I\ndon't think its my FreeBSD setup.\n\n\n\n\nSo I hope someone finds the above useful, I've been reading the mailing\nlists a lot and I've heard about developers discovering bugs in locking,\nindexes, and vacuum in 6.5, but I wasn't sure if they were applicable to\n6.4.2 as well, so I figured I should tell someone just in case.\n\n\nSorry about the length of this email, but I had a lot of things to cover. \nThanks for your help everyone, I look forward to hearing from you ...\n\nWayne\n\n------------------------------------------------------------------------------\nWayne Piekarski Tel: (08) 8221 5221\nResearch & Development Manager Fax: (08) 8221 5220\nSE Network Access Pty Ltd Mob: 0407 395 889\n222 Grote Street Email: [email protected]\nAdelaide SA 5000 WWW: http://www.senet.com.au\n",
"msg_date": "Sun, 9 May 1999 17:38:42 +0930 (CST)",
"msg_from": "Wayne Piekarski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: INSERT/UPDATE waiting (another example)"
},
{
"msg_contents": "> The whole thing runs 24 hours a day, 7 days a week. Most of the tables\n> rarely get vacuumed (they have tens of thousands of rows and only inserts\n> get done to them - the optimiser makes good choices for most of these) -\n> however we have 5 tables which get vacuum at midnight each day, we drop\n> all the indexes, vacuum, then recreate. If we don't do the index thing,\n> the vacuum can take tens of minutes, which is not acceptable - the tables\n> contain about 20000 rows, each of which gets updated about 3 times during \n> the day. I sent an email a while back about vacuum performance, and this\n> hack is the only way around it.\n\n6.5 beta speeds up vacuuming with existing indexes, thanks to Vadim.\nAlso, accessing during vacuuming may be better too.\n\n> While I'm asking some questions here, I should tell you about some of the\n> other wierd things I've encountered, many of them are related to shared\n> memory and hash tables, which is making me think more and more that all\n> the problems I am having are somehow related.\n\n6.5 beta has some _major_ hash fixes. We always knew there were hash\nproblems, but now Tom has fixed many of them.\n\n> I would assume that the above one which uses indexes would be a lot\n> better, but why did the optimiser chose the seq scan - do the indexes help\n> when doing joins and at the same time all rows are being returned back? I\n> understand that the optimiser will choose not to use indexes if it feels\n> that it will return most of the rows anyway and so a seq scan is better.\n\n6.5 beta also has a faster and smarter optimizer.\n\nIt may be wise for you to test 6.5beta to see how many problems we fix.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 9 May 1999 07:14:17 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] Re: [HACKERS] Re: INSERT/UPDATE waiting (another example)"
},
{
"msg_contents": "Wayne Piekarski <[email protected]> writes:\n> Currently, I start up postmaster with -B 192, which I guess puts it below\n> the value of 256 which causes problems. Apart from when I got past 256\n> buffers, does the patch fix anything else that might be causing problems?\n\nYes: if you have more than 256 active locks then you also will have\nproblems with the unpatched code. I don't know how to relate that to\nany easily-measured external information, but since you have a bunch of\nconcurrently running backends it seems possible that you are running\ninto locktable problems. Anyway I do urge you to apply the patch and\nsee if things get better.\n\n> however we have 5 tables which get vacuum at midnight each day, we drop\n> all the indexes, vacuum, then recreate. If we don't do the index thing,\n> the vacuum can take tens of minutes, which is not acceptable\n\nYup, it takes an unreasonable amount of time to vacuum an index for a\ntable in which a lot of rows have been deleted. The drop/recreate hack\nis a good workaround for now. (Has anyone looked into why vacuum is so\nslow in this case?)\n\n> For large tables, when I perform joins, I repeatedly get hash table out of\n> memory errors. So I have two tables, one called unix, with 20000 rows, and\n> another called services, with 80000 rows - I am producing a result which\n> contains about 20000 rows in it as well, so there is lots of data moving\n> around.\n\nYes, the hashtable code needs work. As a short-term workaround, you\nmight try disabling hashjoins entirely, which you can do by passing\nthe debug option \"-fh\" (\"forbid hash\"). 
For example,\n\tsetenv PGOPTIONS \"-fh\"\n\tpsql dbase\nThe optimizer will then usually choose mergejoins, which are reasonable\nin performance if you have indexes on the columns being joined by.\n(There ought to be a SET variable that controls this, but there isn't.)\n\n> In most cases, the problem occurs when the optimiser mistakenly choses to\n> use seq scan rather than index scan. To get around these problems, we\n> initially tried increasing the -B value to larger values \n\n-B doesn't have any direct influence on the optimizer's choices, AFAIR.\n\n> set COST_INDEX = '0'; set COST_HEAP = '99999999';\n\n> The optimiser then used index scan for almost anything where possible, the\n> explain output looked really expensive, but the queries actually executed\n> properly even with small -B values. So this is what I do to make these big\n> queries work. There are a few cases where the above set statements\n> actually cause hash table out of memory as well, so you set them back to\n> the defaults and then it usually works ok :)\n\nAgain, that doesn't directly prevent the optimizer from using hash\njoins, it just skews the cost estimates so that index scans will be used\nin preference to sequential scans, whereupon you get silly plans like\nthe one you quoted:\n\n> Hash Join (cost=30000000.00 size=43361 width=20)\n> -> Index Scan using unix_snum_inv_index on unix\n> (cost=20311001006080.00 size=20311 width=16)\n> -> Hash (cost=0.00 size=0 width=0)\n> -> Index Scan using services_snum_inv_index on services\n> (cost=43360999964672.00 size=43361 width=4)\n\nThis is silly because hash join doesn't care whether its inputs are\nsorted or not --- the extra cost of scanning the index is just being\nwasted here. (Unless you have WHERE conditions that can be combined\nwith the index to allow not scanning the whole table. 
From the\nsize estimates it looks like that might be happening for services,\nbut not for the unix table, so the index scan on unix is definitely\na waste of time.)\n\nUsing an index scan *is* a good idea for merge join, on the other hand,\nbecause merge join requires sorted inputs. (So if you don't have a\nsuitable index, the plan will have to include an explicit sort step\nbefore the join.)\n\n> One other problem related to the shared memory buffers is every so often,\n> the postmaster will die with shared memory errors, and device full.\n\nEr, could you quote the exact error reports? This is too vague to allow\nany conclusions.\n\n> Quick question: when you do a query with a join why does the hash table\n> code need to use shared memory? Can't it do the join within its own memory\n> space?\n\nIt doesn't use shared memory. It's just that for historical reasons,\nthe amount of private memory allocated for a hashjoin table is the same\nas the amount of shared memory allocated for buffers (ie, the -B\nswitch). I've been thinking of changing it to be driven by the -S\nswitch instead, since that seems to make more sense.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 09 May 1999 12:58:13 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: INSERT/UPDATE waiting (another example) "
},
{
"msg_contents": "Hi,\n\n> 6.5 beta speeds up vacuuming with existing indexes, thanks to Vadim.\n> Also, accessing during vacuuming may be better too.\n\nThat is good news :) When I first heard about MVCC I remember someone\nsuggested it would be possible to still do SELECT on tables being\nvacuumed, is this right or not in the current 6.5?\n\nWhen we were developing the system we spent a lot of time working out\nways of getting around vacuum, and I've learned a lot from it. I am going\nto try it out on a full dump of our current database and test some\nexamples to see what kind of improvement there is. \n\n> 6.5 beta also has a faster and smarter optimizer.\n> \n> It may be wise for you to test 6.5beta to see how many problems we fix.\n\nThis week I intend to test out the patches I've received, and hopefully\nthey will fix up my big problems (the one with the backend locking up)\nthen I will grab the latest 6.5 and try that out with some test data to\nsee what happens.\n\nUnfortunately, I can't test 6.5 like I would the real thing because many\nof my problems only occur when everyone is busy firing off queries and the\nbox is running an unusually high load and things start waiting on locks.\nI'll see what I can do here although the only true way is to go live with\nit - but I'm not ready for that yet :)\n\nI should be able to check the optimiser improvements though, I've got a\nlot of code which does the SET COST_HEAP/COST_INDEX hack to make things\nwork :)\n\nthanks,\nWayne\n\n------------------------------------------------------------------------------\nWayne Piekarski Tel: (08) 8221 5221\nResearch & Development Manager Fax: (08) 8221 5220\nSE Network Access Pty Ltd Mob: 0407 395 889\n222 Grote Street Email: [email protected]\nAdelaide SA 5000 WWW: http://www.senet.com.au\n",
"msg_date": "Mon, 10 May 1999 20:07:47 +0930 (CST)",
"msg_from": "Wayne Piekarski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [SQL] Re: [HACKERS] Re: INSERT/UPDATE waiting (another example)"
},
{
"msg_contents": "Hi everyone!\n\n\nTom Lane Writes:\n\n> Wayne Piekarski <[email protected]> writes:\n> > Currently, I start up postmaster with -B 192, which I guess puts it below\n> > the value of 256 which causes problems. Apart from when I got past 256\n> > buffers, does the patch fix anything else that might be causing problems?\n> \n> Yes: if you have more than 256 active locks then you also will have\n> problems with the unpatched code. I don't know how to relate that to\n> any easily-measured external information, but since you have a bunch of\n> concurrently running backends it seems possible that you are running\n> into locktable problems. Anyway I do urge you to apply the patch and\n> see if things get better.\n\nOk, that is cool. I will try the patch out in the next few days and get\nback with my results. I'll try to stress the machine by bogging it down a\nbit with I/O, to see if I can cause the problem to happen on demand, so I\nshould hopefully be able to give you a definite answer on if it is the\nsolution or not.\n\n> > however we have 5 tables which get vacuum at midnight each day, we drop\n> > all the indexes, vacuum, then recreate. If we don't do the index thing,\n> > the vacuum can take tens of minutes, which is not acceptable\n> \n> Yup, it takes an unreasonable amount of time to vacuum an index for a\n> table in which a lot of rows have been deleted. The drop/recreate hack\n> is a good workaround for now. (Has anyone looked into why vacuum is so\n> slow in this case?)\n\nI don't understand the source code, but I've done some experiments and\nfound out the following (with 6.4.2 - no comment about 6.5):\n\nI have this table where each row gets updated about 2-3 times. So my\npicture of the disk file is this massive file with 80000 rows, of which\nonly 20000 of them are the real ones, the rest of them being holes that\ncan't get used. 
This takes *ages* to vacuum (with indexes that is) - BTW,\nmy impression of ages is a couple of minutes, so this isn't taking an hour\nto run. However, I do have 5 tables to vacuum, so it ends up taking near\n10 minutes! \n\nThe performance of vacuum was a real worry as sometimes things were\ntaking ages, so I added some debugging code to\nsrc/backend/commands/vacuum.c to do the following: \n\n* I had problems with the elapsed times from getrusage() not truly\nindicating how long the vacuum took so I added calls to time(NULL) to\nproduce stats for real life seconds.\n\n* I wanted to know where it was spending most of its time, so I added elog\nstatements each time I entered or left the scan, index, etc functions in\nvacuum.c as well. I used this to work out where I was losing all my time.\n\nHere is the output:\n\nDEBUG: ===> VACUUM non-verbose analyze test_table STARTS <===\nDEBUG: Table test_table - Vacuum analyze begins\nDEBUG: Table test_table - Initial stats complete at 0 secs\nDEBUG: Table test_table - vc_scanheap starts\nDEBUG: Table test_table - vc_scanheap ends - Pages 791: Changed 82, Reapped 787, Empty 0, New 0; Tup 20111: Vac 64425, Crash 0,\nUnUsed 0, MinLen 72, Max\nLen 72; Re-using: Free/Avail. Space 4679524/4679524; EndEmpty/Avail. Pages 0/787. Elapsed 0/0 sec. Real 1 sec.\nDEBUG: Table test_table - Heap scan complete at 1 secs\nDEBUG: Index test_table_seid_index: vc_vaconeind begins\nDEBUG: Index test_table_seid_index: vc_vaconeind ends - Pages 252; Tuples 20111: Deleted 64425. Elapsed 0/26 sec. Real 37 sec.\nDEBUG: Index test_table_id_index: vc_vaconeind begins\nDEBUG: DumpAllLocks: xidLook->tag.lock = NULL\nDEBUG: Index test_table_id_index: vc_vaconeind ends - Pages 252; Tuples 20111: Deleted 64425. Elapsed 0/27 sec. Real 31 sec.\nDEBUG: Index test_table_bid_seid_index: vc_vaconeind begins\nDEBUG: Index test_table_bid_seid_index: vc_vaconeind ends - Pages 332; Tuples 20111: Deleted 64425. Elapsed 0/2 sec. 
Real 5 sec.\nDEBUG: Table test_table - Index clean/scan complete at 74 secs\nDEBUG: Rel test_table: Pages: 791 --> 188; Tuple(s) moved: 19655. Elapsed 2/5 sec. Real 36 sec.\nDEBUG: Index test_table_seid_index: vc_vaconeind begins\nDEBUG: Index test_table_seid_index: vc_vaconeind ends - Pages 253; Tuples 20111: Deleted 19655. Elapsed 0/1 sec. Real 1 sec.\nDEBUG: Index test_table_id_index: vc_vaconeind begins\nDEBUG: Index test_table_id_index: vc_vaconeind ends - Pages 252; Tuples 20111: Deleted 19655. Elapsed 0/0 sec. Real 1 sec.\nDEBUG: Index test_table_bid_seid_index: vc_vaconeind begins\nDEBUG: Index test_table_bid_seid_index: vc_vaconeind ends - Pages 332; Tuples 20111: Deleted 19655. Elapsed 0/1 sec. Real 1 sec.\nDEBUG: Table test_table - Processing complete at 114 secs\nDEBUG: ===> VACUUM non-verbose analyze test_table COMPLETE (115 sec) <===\n\nSo vc_vaconeind is taking a while to run, but the wierd part is when\nprocessing test_table_bid_seid_index, it took only 5 seconds compared to\ntest_table_seid_index or test_table_id_index. These indexes are all normal\nones except for id_index, which is a unique index.\n\nSo I don't know why they take differing amounts of time, I don't think its\na caching thing, because it would have affected the second call for\ntest_table_id_index as well. I've seen cases of where the first index\nvaconeind is fast and the remaining ones are slower. \n\nThe wierd part for me is that it processes each index one by one, then the\nrelation, and then it is done. I would have thought it did everything row\nby row, updating the index as required.\n\nHmmm, this is way beyond me, I thought the above might be useful for\nsomeone, although you already may know this so please ignore it then.\n \n\n> Yes, the hashtable code needs work. As a short-term workaround, you\n> might try disabling hashjoins entirely, which you can do by passing\n> the debug option \"-fh\" (\"forbid hash\"). 
For example,\n> \tsetenv PGOPTIONS \"-fh\"\n> \tpsql dbase\n> The optimizer will then usually choose mergejoins, which are reasonable\n> in performance if you have indexes on the columns being joined by.\n> (There ought to be a SET variable that controls this, but there isn't.)\n\nI have indexes on almost every one of my columns, but in many cases the\noptimiser always chooses hash joins. Just today, I found a query which\nwouldn't run with the SET cost hacks and lo and behold it works perfectly\nwith -fh. Do the values for the cost of merge join need changing?\n\nThe merge join is great, does 6.5 use it more or less than 6.4 for joins\nwith indexes? From what I've read, 6.5 is using more hash joins. I'm going\nto put the -fh switch into my little toolbox of hacks to try for special\noccasions when all else fails :) \n\n\n> Again, that doesn't directly prevent the optimizer from using hash\n> joins, it just skews the cost estimates so that index scans will be used\n> in preference to sequential scans, whereupon you get silly plans like\n> the one you quoted:\n> \n> > Hash Join (cost=30000000.00 size=43361 width=20)\n> > -> Index Scan using unix_snum_inv_index on unix\n> > (cost=20311001006080.00 size=20311 width=16)\n> > -> Hash (cost=0.00 size=0 width=0)\n> > -> Index Scan using services_snum_inv_index on services\n> > (cost=43360999964672.00 size=43361 width=4)\n> \n> This is silly because hash join doesn't care whether its inputs are\n> sorted or not --- the extra cost of scanning the index is just being\n> wasted here. (Unless you have WHERE conditions that can be combined\n> with the index to allow not scanning the whole table. From the\n> size estimates it looks like that might be happening for services,\n> but not for the unix table, so the index scan on unix is definitely\n> a waste of time.)\n\nI did it as a way of tricking the optimiser into doing something that made\nit work. 
When I first tried it, I didn't really understand what I was\ndoing, but when it made my queries run that previously wouldn't work, I\njust started using it without question because I needed to get out of a\njam :)\n\nHmmm, I'll have to go through some more examples to check on this, maybe\nthe above example I gave was wrong ....\n\nI did a more complicated query which does a join plus some extra\nconditions:\n\n*** Hash table out of memory\nHash Join (cost=4662.25 size=4317 width=320)\n -> Seq Scan on unix (cost=1212.26 size=20310 width=204)\n -> Hash (cost=0.00 size=0 width=0)\n -> Seq Scan on services (cost=2204.91 size=4317 width=116)\n\n*** Works with set cost hack\nHash Join (cost=30000000.00 size=4317 width=320)\n -> Seq Scan on services (cost=4336099929358336.00 size=4317 width=116)\n -> Hash (cost=0.00 size=0 width=0)\n -> Seq Scan on unix (cost=2031099966390272.00 size=20310 width=204)\n\n*** Works with forbid hash \"-fh\"\nNested Loop (cost=11054.74 size=4317 width=320)\n -> Seq Scan on services (cost=2204.91 size=4317 width=116)\n -> Index Scan using unix_snum_inv_index on unix (cost=2.05 size=20310 width=204)\n\n\nThe extra conditions is that services.snum = services.id and\nservices.snum = unix.snum and services.invalid = 0 and unix.invalid = 0\n\nSo the only difference is that it is scanning them in a different order,\nwhich is making it work ok. It could be caused by the size/cost not being\ncorrect for the data actually there?\n\nI guess with the hash table fixes this problem should go away, although I\nam thinking about disabling hash joins in a few places to make things more\nreliable though for now .... Hmmm, something to think about :) Others\nwho are having trouble might want to try this as well?\n\n\n> Using an index scan *is* a good idea for merge join, on the other hand,\n> because merge join requires sorted inputs. 
(So if you don't have a\n> suitable index, the plan will have to include an explicit sort step\n> before the join.)\n\nIs merge sort immune from the hash table out of memory thing? If so, then\nI can just pay the cost of the sort and live with it, knowing that my\nqueries will always work?\n\n> > One other problem related to the shared memory buffers is every so often,\n> > the postmaster will die with shared memory errors, and device full.\n> \n> Er, could you quote the exact error reports? This is too vague to allow\n> any conclusions.\n\nI couldn't find any log files when I wrote the last email, but I had a bad\nday today, (postmaster failed 3 times in quick succession) - I had waiting\nbackends, then I restarted, then had the problem again, then restarted,\nthen another restart :) and the output was:\n\nDEBUG: DumpAllLocks: xidLook->tag.lock = NULL\nDEBUG: DumpAllLocks: xidLook->tag.lock = NULL\nDEBUG: DumpAllLocks: xidLook->tag.lock = NULL\nDEBUG: DumpAllLocks: xidLook->tag.lock = NULL\nDEBUG: DumpAllLocks: xidLook->tag.lock = NULL\nDEBUG: DumpAllLocks: xidLook->tag.lock = NULL\nIpcSemaphoreCreate: semget failed (No space left on device) key=5432017, num=16, permission=600\nNOTICE: Message from PostgreSQL backend:\n The Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\n I have rolled back the current transaction and am going to terminate your database system connection and exit.\n Please reconnect to the database system and repeat your query.\nNOTICE: Message from PostgreSQL backend:\n The Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\n I have rolled back the current transaction and am going to terminate your database system connection and exit.\n Please reconnect to the database system and repeat your query.\nNOTICE: Message from PostgreSQL backend:\n The Postmaster has informed me that some other backend died abnormally and possibly corrupted 
shared memory.\n I have rolled back the current transaction and am going to terminate your database system connection and exit.\n Please reconnect to the database system and repeat your query.\nDEBUG: DumpAllLocks: xidLook->tag.lock = NULL\nDEBUG: DumpAllLocks: xidLook->tag.lock = NULL\nDEBUG: DumpAllLocks: xidLook->tag.lock = NULL\nDEBUG: DumpAllLocks: xidLook->tag.lock = NULL\nDEBUG: DumpAllLocks: xidLook->tag.lock = NULL\nDEBUG: DumpAllLocks: xidLook->tag.lock = NULL\nDEBUG: DumpAllLocks: xidLook->tag.lock = NULL\nDEBUG: DumpAllLocks: xidLook->tag.lock = NULL\nDEBUG: DumpAllLocks: xidLook->tag.lock = NULL\n\nFATAL: s_lock(18001065) at spin.c:125, stuck spinlock. Aborting.\nFATAL: s_lock(18001065) at spin.c:125, stuck spinlock. Aborting.\nDEBUG: DumpAllLocks: xidLook->tag.lock = NULL\n\n\n\nAt this point, the postmaster is dead, and there are lots of postgres\nchildren laying around, all still happy to process queries by the way,\nwhich is pretty neat. But bad because there is no postmaster to control\nthem.\n\nSo I kill everything off, and restart, and everything is ok again (The\nindexes on a table got messed up, but I fixed that by dropping them and\nrecreating them).\n\nThe DumpAllLocks implies that things are pretty busy and the deadlock code\nis performing checks .... Any reason why it prints NULL, I would have\nthought it would print something there like the name of a lock?\n\n\n\nAnd another one:\n\n\nIpcSemaphoreCreate: semget failed (No space left on device) key=5432015,\nnum=16, permission=600\nNOTICE: Message from PostgreSQL backend:\n The Postmaster has informed me that some other backend died\nabnormally and possibly corrupted shared memory.\n I have rolled back the current transaction and am going to\nterminate your database system connection and exit.\n Please reconnect to the database system and repeat your query.\n\nFATAL: s_lock(18001065) at spin.c:125, stuck spinlock. Aborting.\n\nFATAL: s_lock(18001065) at spin.c:125, stuck spinlock. 
Aborting.\n\n\n\n\n\nThen another one after restarting everything:\n\nERROR: cannot open segment 1 of relation sessions_done_id_index\n\nFATAL: s_lock(1800d37c) at bufmgr.c:657, stuck spinlock. Aborting.\n\nFATAL: s_lock(1800d37c) at bufmgr.c:657, stuck spinlock. Aborting.\nNOTICE: Message from PostgreSQL backend:\n The Postmaster has informed me that some other backend died\nabnormally and possibly corrupted shared memory.\n I have rolled back the current transaction and am going to\nterminate your database system connection and exit.\n Please reconnect to the database system and repeat your query.\n\n\nThis is where I dropped the index sessions_done_id_index.\n\nI didn't run the ipcclean program, but I've never had to run it before,\nand everything always starts up ok. After the third restart, everything\nwas ok. I think the dying was caused by too many things trying to happen\nat once.\n\n\nNote that the above errors are a normal 6.4.2 with no patches - I am not\nrunning Tom's patch yet. The whole affair was cause by me running a\nprogram which moves 20000 or so tuples from one table to another with\n500000 tuples. It takes a few minutes and thrashes the I/O on the machine\n- postgres died during this process.\n\n\n\n> > Quick question: when you do a query with a join why does the hash table\n> > code need to use shared memory? Can't it do the join within its own memory\n> > space?\n> \n> It doesn't use shared memory. It's just that for historical reasons,\n> the amount of private memory allocated for a hashjoin table is the same\n> as the amount of shared memory allocated for buffers (ie, the -B\n> switch). I've been thinking of changing it to be driven by the -S\n> switch instead, since that seems to make more sense.\n\nAhhh, cool ... Ok, that explains it then :)\n\n\nThankyou very much for you help again, I hope the above is useful for\neveryone to look at. 
I'm not sure how much has been fixed in 6.5 but I\nwant to make sure that if they still exist in 6.5beta we can flush them\nout.\n\nbye,\nWayne\n\n------------------------------------------------------------------------------\nWayne Piekarski Tel: (08) 8221 5221\nResearch & Development Manager Fax: (08) 8221 5220\nSE Network Access Pty Ltd Mob: 0407 395 889\n222 Grote Street Email: [email protected]\nAdelaide SA 5000 WWW: http://www.senet.com.au\n\n",
"msg_date": "Mon, 10 May 1999 22:00:44 +0930 (CST)",
"msg_from": "Wayne Piekarski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: INSERT/UPDATE waiting (another example)"
},
{
"msg_contents": "Hello all,\n\n>\n> Hi everyone!\n>\n>\n> Tom Lane Writes:\n>\n> > Wayne Piekarski <[email protected]> writes:\n> > > Currently, I start up postmaster with -B 192, which I guess\n> puts it below\n> > > the value of 256 which causes problems. Apart from when I got past 256\n> > > buffers, does the patch fix anything else that might be\n> causing problems?\n> >\n\n[snip]\n\n>\n> Then another one after restarting everything:\n>\n> ERROR: cannot open segment 1 of relation sessions_done_id_index\n>\n\nI got the same error in my test cases.\nI don't understand the cause of this error.\n\nBut it seems I found another problem instead.\n\n spinlock io_in_progress_lock of a buffer page is not\n released by operations called by elog() such as\n ProcReleaseSpins(),ResetBufferPool() etc.\n\n For example,the error we have encountered probably occured\n in ReadBufferWithBufferLock().\n When elog(ERROR/FATAL) occurs in smgrread/extend() which\n is called from ReadBufferWithBufferLock(),smgrread/extend()\n don't release the io_in_progress_lock spinlock of the page.\n If other transactions get that page as a free Buffer page,those\n transactions wait the release of io_in_progress_lock spinlock\n and would abort with message such as\n\n> FATAL: s_lock(1800d37c) at bufmgr.c:657, stuck spinlock. Aborting.\n>\n> FATAL: s_lock(1800d37c) at bufmgr.c:657, stuck spinlock. Aborting.\n\nComments ?\n\nI don't know details about spinlock stuff.\nSorry,if my thought is off the point.\n\nAnd I have another question.\n\nIt seems elog(FATAL) doesn't release allocated buffer pages.\nIt's OK ?\nAFAIC elog(FATAL) causes proc_exit(0) and proc_exit() doesn't\ncall ResetBufferPool().\n\nThanks.\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Thu, 13 May 1999 19:28:49 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "[HACKERS] spinlock freeze ?(Re: INSERT/UPDATE waiting (another\n\texample))"
},
{
"msg_contents": "Hi all,\nThis mail is about the original cause of [HACKERS] spinlock freeze ?(Re:\nINSERT/UPDATE waiting (another example)).\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Hiroshi Inoue\n> Sent: Thursday, May 13, 1999 7:29 PM\n> To: Tom Lane; Wayne Piekarski\n> Cc: [email protected]\n> Subject: [HACKERS] spinlock freeze ?(Re: INSERT/UPDATE waiting (another\n> example))\n>\n>\n\n[snip]\n\n> >\n> > Then another one after restarting everything:\n> >\n> > ERROR: cannot open segment 1 of relation sessions_done_id_index\n> >\n>\n> I got the same error in my test cases.\n> I don't understand the cause of this error.\n>\n\nI got this error message by dropping a table while concurrent transactions\ninserting rows to the same table.\n\nI think other transactions should abort with message \"Table does not\nexist\". But in most cases the result is not so.\n\nIt seems that other transactions could proceed before DROP TABLE\ncommand is completed.\n\nAFAIC heap_destroy_with_catalog() acquires AccessExclusiveLock and\nreleases the lock inside the function.\n\nI think that heap_destroy_with_catalog() (or upper level function) should\nnot\nrelease the lock.\n\nComments ?\n\nHiroshi Inoue\[email protected]\n\n",
"msg_date": "Mon, 17 May 1999 11:30:45 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "DROP TABLE does not drop a table completely"
},
{
"msg_contents": "Hiroshi Inoue wrote:\n> \n> > >\n> > > ERROR: cannot open segment 1 of relation sessions_done_id_index\n> > >\n> >\n> > I got the same error in my test cases.\n> > I don't understand the cause of this error.\n> >\n> \n> I got this error message by dropping a table while concurrent transactions\n> inserting rows to the same table.\n> \n> I think other transactions should abort with message \"Table does not\n> exist\". But in most cases the result is not so.\n> \n> It seems that other transactions could proceed before DROP TABLE\n> command is completed.\n> \n> AFAIC heap_destroy_with_catalog() acquires AccessExclusiveLock and\n> releases the lock inside the function.\n> \n> I think that heap_destroy_with_catalog() (or upper level function) should\n> not\n> release the lock.\n\nYou're right - this should be done keeping in mind that DROP is allowed\ninside BEGIN/END (long transactions), but I'm not sure that this\nwill help generally: does it matter when unlock dropped relation -\nin heap_destroy_with_catalog() or in commit? The real problem is\nthat heap/index_open open file _before_ acquiring any locks and\ndoesn't check t_xmax of relation/index tuple. I believe that\nthis is old problem. There are another ones, sure. \nCatalog cache code must be re-designed.\n\nVadim\n",
"msg_date": "Mon, 17 May 1999 15:01:19 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DROP TABLE does not drop a table completely"
},
{
"msg_contents": "\n\n> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On Behalf Of Vadim\n> Mikheev\n> Sent: Monday, May 17, 1999 4:01 PM\n> To: Hiroshi Inoue\n> Cc: pgsql-hackers\n> Subject: Re: [HACKERS] DROP TABLE does not drop a table completely\n> \n> \n> Hiroshi Inoue wrote:\n> > \n> > > >\n> > > > ERROR: cannot open segment 1 of relation sessions_done_id_index\n> > > >\n> > >\n> > > I got the same error in my test cases.\n> > > I don't understand the cause of this error.\n> > >\n> > \n> > I got this error message by dropping a table while concurrent \n> transactions\n> > inserting rows to the same table.\n> > \n> > I think other transactions should abort with message \"Table does not\n> > exist\". But in most cases the result is not so.\n> > \n> > It seems that other transactions could proceed before DROP TABLE\n> > command is completed.\n> > \n> > AFAIC heap_destroy_with_catalog() acquires AccessExclusiveLock and\n> > releases the lock inside the function.\n> > \n> > I think that heap_destroy_with_catalog() (or upper level \n> function) should\n> > not\n> > release the lock.\n> \n> You're right - this should be done keeping in mind that DROP is allowed\n> inside BEGIN/END (long transactions), but I'm not sure that this\n> will help generally: does it matter when unlock dropped relation -\n> in heap_destroy_with_catalog() or in commit?\n\nUnlocking dropped relation before commit enables other transactions \nproceed and the transactions regard the relation as still alive before the\n\t\t\t\t\t\t ^^^^^^^^^ \ncommit of DROP TABLE command(It's right,I think). As a result,those \ntransactions behave strangely,though I don't know more about the \nreason.\n\nThanks.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Mon, 17 May 1999 16:40:56 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] DROP TABLE does not drop a table completely"
},
{
"msg_contents": "Hi all,\n\nProcReleaseSpins() does nothing unless MyProc is set.\nSo both elog(ERROR/FATAL) and proc_exit(0) before \nInitProcess() don't release spinlocks.\n\nComments ?\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Tue, 18 May 1999 19:36:40 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "spinlock freeze again"
},
{
"msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> Hi all,\n> \n> ProcReleaseSpins() does nothing unless MyProc is set.\n> So both elog(ERROR/FATAL) and proc_exit(0) before \n> InitProcess() don't release spinlocks.\n\nAre their any locks acquired before InitProcess()?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 7 Jul 1999 14:20:35 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] spinlock freeze again"
},
{
"msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n> Sent: Thursday, July 08, 1999 3:21 AM\n> To: Hiroshi Inoue\n> Cc: pgsql-hackers\n> Subject: Re: [HACKERS] spinlock freeze again\n> \n> \n> [Charset iso-8859-1 unsupported, filtering to ASCII...]\n> > Hi all,\n> > \n> > ProcReleaseSpins() does nothing unless MyProc is set.\n> > So both elog(ERROR/FATAL) and proc_exit(0) before \n> > InitProcess() don't release spinlocks.\n> \n> Are their any locks acquired before InitProcess()?\n>\n\nOidGenLockId spinlock is acquired in InitTransactionLog().\n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Thu, 8 Jul 1999 08:50:53 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] spinlock freeze again"
},
{
"msg_contents": "> > Are their any locks acquired before InitProcess()?\n> >\n> \n> OidGenLockId spinlock is acquired in InitTransactionLog().\n> \n> Regards.\n> \n\nWell, seems we have a Proc queue that holds locks, but for these other\ncases, we don't. We could use the on_shmexit queue to add an cleanup\nhandler once we get the lock, and remove it from the queue once we\nrelease the lock. We don't currently have the ability to remove\nspecific queue entries, but we could easily do that.\n\nIs the lock failure a problem that happens a lot?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 7 Jul 1999 21:09:12 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] spinlock freeze again"
},
{
"msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n> Sent: Thursday, July 08, 1999 10:09 AM\n> To: Hiroshi Inoue\n> Cc: pgsql-hackers\n> Subject: Re: [HACKERS] spinlock freeze again\n> \n> \n> > > Are their any locks acquired before InitProcess()?\n> > >\n> > \n> > OidGenLockId spinlock is acquired in InitTransactionLog().\n> > \n> > Regards.\n> > \n> \n> Well, seems we have a Proc queue that holds locks, but for these other\n> cases, we don't. We could use the on_shmexit queue to add an cleanup\n> handler once we get the lock, and remove it from the queue once we\n> release the lock. We don't currently have the ability to remove\n> specific queue entries, but we could easily do that.\n> \n> Is the lock failure a problem that happens a lot?\n>\n\nIt doesn't happen oridinarily.\nI don't remember well how it happend.\nProbably it was caused by other spinlock(io_in_progress_lock ?)\nfreeze while testing 6.5-beta.\n \nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Thu, 8 Jul 1999 11:07:10 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] spinlock freeze again"
},
{
"msg_contents": "> It doesn't happen oridinarily.\n> I don't remember well how it happend.\n> Probably it was caused by other spinlock(io_in_progress_lock ?)\n> freeze while testing 6.5-beta.\n\nOK, let's see if it happens again.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 7 Jul 1999 23:01:16 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] spinlock freeze again"
}
] |
[
{
"msg_contents": "> Does your chanage in LockResolveConflicts() work fine ?\n> \n> if (SHMQueueEmpty(&MyProc->lockQueue) && waitQueue->size &&\n> topproc->prio > myprio)\n> {\n> \n> First, LockResolveConflicts() is called not only from LockAcquire() but also\n> from ProcLockWakeup(). ProcLockWakeup() is called from a lock releasing\n> process. Does it make sense to check MyProc->lockQueue ?\n> \n> Second,when LockAcquire() calls LockResolveConflicts(),MyProc->lockQueue\n> is always not empty. So does it make sense too ?\n\nSeems it does not work, as Vadim has pointed out. Seems he wants to\nwork on fixing this.\n\nI am curious what lock is in the lockQueue when it is called from\nLockAcquire()?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 6 May 1999 01:25:28 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] can't compile"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Seems it does not work, as Vadim has pointed out. Seems he wants to\n> work on fixing this.\n\nI just updated CVS but can't commit my changes -:(\ncvs commit run with no output for long time.\n\n?\n\nVadim\n",
"msg_date": "Thu, 06 May 1999 15:27:29 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] can't compile"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> > \n> > Seems it does not work, as Vadim has pointed out. Seems he wants to\n> > work on fixing this.\n> \n> I just updated CVS but can't commit my changes -:(\n> cvs commit run with no output for long time.\n\nThat happens to me sometimes. I just restart the cvs and it works.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 6 May 1999 03:38:24 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] can't compile"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > Bruce Momjian wrote:\n> > >\n> > > Seems it does not work, as Vadim has pointed out. Seems he wants to\n> > > work on fixing this.\n> >\n> > I just updated CVS but can't commit my changes -:(\n> > cvs commit run with no output for long time.\n> \n> That happens to me sometimes. I just restart the cvs and it works.\n\nIt doesn't help -:(\n\nVadim\n",
"msg_date": "Thu, 06 May 1999 16:56:11 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] can't compile"
},
{
"msg_contents": "On Thu, 6 May 1999, Vadim Mikheev wrote:\n\n> Bruce Momjian wrote:\n> > \n> > > Bruce Momjian wrote:\n> > > >\n> > > > Seems it does not work, as Vadim has pointed out. Seems he wants to\n> > > > work on fixing this.\n> > >\n> > > I just updated CVS but can't commit my changes -:(\n> > > cvs commit run with no output for long time.\n> > \n> > That happens to me sometimes. I just restart the cvs and it works.\n> \n> It doesn't help -:(\n\nDoes a 'truss' on the process show anything unusual? ping to hub.org?\nJust trying to run it here now just to make sure its not a 'local'\nproblem...\n\nYup, all *appears* fine over here...network latency?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 6 May 1999 09:15:49 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] can't compile"
},
{
"msg_contents": "The Hermit Hacker wrote:\n> \n> Does a 'truss' on the process show anything unusual? ping to hub.org?\n> Just trying to run it here now just to make sure its not a 'local'\n> problem...\n> \n> Yup, all *appears* fine over here...network latency?\n\nSeems yes. Yesterday 60% of packets was lost, today - 0% and\ncommitted :)\n\nVadim\n",
"msg_date": "Fri, 07 May 1999 09:54:31 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] can't compile"
}
] |
[
{
"msg_contents": "Hi,\nthis is a good day to ask this, or NOT??\n\nHow I can use the SET type, for to simulate the array behaivor????\n\nThis type can be used for to represent a multivalued field in a table\n????\n\nCarlos Peralta Ramirez Thanks you !!!!\n\n\n",
"msg_date": "Thu, 06 May 1999 09:24:01 -0500",
"msg_from": "Carlos Peralta Ramirez <[email protected]>",
"msg_from_op": true,
"msg_subject": "the today question !!!"
}
] |
[
{
"msg_contents": "Here is the background on this email (and other like it). My corporate\noffice changed my email address 4 times in a 3 week period (company mergers\nand such). In order to send mail, I was forced to subscribe to each list\nusing my new email addresses. I then had three email addresses registered\nto each list. The corporate office then changed my email address again\nbecause another individual in the company was assigned my current email\naddress. Apparently he had been with the company longer. Now I am getting\ntwo copies of every email and he gets a copy of every email. I had been\ntrying for weeks to unsubscribe the extra emails from the lists but either\nthe unsubscribe was not supported, would fail, or just would not work. I\nfinally emailed the owner of the lists and he has corrected the problem.\nThe individual getting these messages apparently had enough of these email\nand started returning them back to the sender. Sorry for the inconvenience.\nIt should not happen again. I don't think this was an intended spam.\n\nThanks, Michael\n\n\t-----Original Message-----\n\tFrom:\tThe Hermit Hacker [SMTP:[email protected]]\n\tSent:\tThursday, May 06, 1999 8:40 AM\n\tTo:\tMichael Davis\n\tCc:\tHiroshi Inoue; Vadim Mikheev; Postgresql Developers List\n\tSubject:\tRE: [HACKERS] I'm planning some changes in lmgr...\n\n\n\tYa know, its almost tempting to send /kernel to ppl that spam lists\nlike\n\tthis :(\n\n\n\tOn Wed, 5 May 1999, Michael Davis wrote:\n\n\t> \tYour e-mail did not arrive at its intended destination. You\nneed to\n\t> send it to Michael J. 
Davis, not Michael Davis.\n\t> \n\t> \n\t> \tFrom:\tHiroshi Inoue <Inoue @ tpf.co.jp> on 05/04/99 10:17\nPM\n\t> \tTo:\tVadim Mikheev <vadim @ krs.ru>@SMTP@EXCHANGE,\nPostgreSQL\n\t> Developers List <hackers @ postgreSQL.org>@SMTP@EXCHANGE\n\t> \tcc:\t \n\t> \tSubject:\tRE: [HACKERS] I'm planning some changes in\nlmgr...\n\t> \n\t> \t> -----Original Message-----\n\t> \t> From: [email protected]\n\t> \t> [mailto:[email protected]]On Behalf Of\nVadim\n\t> Mikheev\n\t> \t> Sent: Sunday, May 02, 1999 12:23 AM\n\t> \t> To: PostgreSQL Developers List\n\t> \t> Subject: [HACKERS] I'm planning some changes in lmgr...\n\t> \t> \n\t> \t> \n\t> \t> but have no time to do them today and tomorrow -:(.\n\t> \t> \n\t> \t> 1. Add int waitMask to LOCK to speedup checking in\n\t> LockResolveConflicts:\n\t> \t> if lock requested conflicts with lock requested by any\nwaiter \n\t> \t> (and we haven't any lock on this object) -> sleep\n\t> \t> \n\t> \t> 2. Add int holdLock (or use prio) to PROC to let other\nknow\n\t> \t> what locks we hold on object (described by\nPROC->waitLock)\n\t> \t> while we're waiting for lock of PROC->token type on\n\t> \t> this object.\n\t> \t> \n\t> \t> I assume that holdLock & token will let us properly \n\t> \t> and efficiently order waiters in LOCK->waitProcs queue\n\t> \t> (if we don't hold any lock on object -> go after\n\t> \t> all waiters with holdLock > 0, etc etc etc).\n\t> \t> \n\t> \t> Comments?\n\t> \t>\n\t> \n\t> \tFirst, I agree to check conflicts for ( total - own )\nhodling lock\n\t> of \n\t> \tthe target object if transaction has already hold some lock\non the \n\t> \tobject and when some conflicts are detected,the transaction \n\t> \tshould be queued with higher priority than transactions\nwhich hold \n\t> \tno lock on the object.\n\t> \n\t> \tSecondly, if a transaction holds no lock on the object, we\nshould \n\t> \tcheck conflicts for ( holding + waiting ) lock of the\nobject.\n\t> \n\t> \tAnd I have a question as to the priority of 
queueing.\n\t> \tDoes the current definition of priority mean the urgency \n\t> \tof lock ?\n\t> \n\t> \tIt may prevent lock escalation in some cases.\n\t> \tBut is it effective to avoid deadlocks ? \n\t> \tIt's difficult for me to find such a case.\n\t> \n\t> \tThanks.\n\t> \n\t> \tHiroshi Inoue\n\t> \[email protected]\n\t> \n\t> \n\t> \n\t> \n\t> \n\n\tMarc G. Fournier ICQ#7615664 IRC\nNick: Scrappy\n\tSystems Administrator @ hub.org \n\tprimary: [email protected] secondary:\nscrappy@{freebsd|postgresql}.org \n\t\n",
"msg_date": "Thu, 6 May 1999 10:20:02 -0500 ",
"msg_from": "Michael J Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] I'm planning some changes in lmgr..."
}
] |
[
{
"msg_contents": "I've committed fixes that deal with all of the coredump problems\nI could find in nodeHash.c (there were several :-().\n\nBut the code still has a fundamental design flaw: it uses a fixed-size\noverflow area to hold tuples that don't fit into the hashbuckets they\nare assigned to. This means you get \"hashtable out of memory\" errors\nif the distribution of tuples is skewed enough, or if the number of\nhashbuckets is too small because the system underestimated the number\nof tuples in the relation. Better than a coredump I suppose, but still\nvery bad, especially since the optimizer likes to use hashjoins more\nthan it used to.\n\nWhat I would like to do to fix this is to store the tuples in a Portal\ninstead of in a fixed-size palloc block. While at it, I'd rip out the\n\"relative vs. absolute address\" cruft that is in the code now.\n(Apparently there was once some thought of using a shared memory block\nso that multiple processes could share the work of a hashjoin. All that\nremains is ugly, bug-prone code ...)\n\nThe reason I bring all this up is that it'd be a nontrivial revision\nto nodeHash.c, and I'm uncomfortable with the notion of making such a\nchange this late in a beta cycle. On the other hand it *is* a bug fix,\nand a fairly important one IMHO.\n\nOpinions? Should I plow ahead, or leave this to fix after 6.5 release?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 06 May 1999 12:12:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Hashjoin status report"
},
{
"msg_contents": "On Thu, 6 May 1999, Tom Lane wrote:\n\n> I've committed fixes that deal with all of the coredump problems\n> I could find in nodeHash.c (there were several :-().\n> \n> But the code still has a fundamental design flaw: it uses a fixed-size\n> overflow area to hold tuples that don't fit into the hashbuckets they\n> are assigned to. This means you get \"hashtable out of memory\" errors\n> if the distribution of tuples is skewed enough, or if the number of\n> hashbuckets is too small because the system underestimated the number\n> of tuples in the relation. Better than a coredump I suppose, but still\n> very bad, especially since the optimizer likes to use hashjoins more\n> than it used to.\n> \n> What I would like to do to fix this is to store the tuples in a Portal\n> instead of in a fixed-size palloc block. While at it, I'd rip out the\n> \"relative vs. absolute address\" cruft that is in the code now.\n> (Apparently there was once some thought of using a shared memory block\n> so that multiple processes could share the work of a hashjoin. All that\n> remains is ugly, bug-prone code ...)\n> \n> The reason I bring all this up is that it'd be a nontrivial revision\n> to nodeHash.c, and I'm uncomfortable with the notion of making such a\n> change this late in a beta cycle. On the other hand it *is* a bug fix,\n> and a fairly important one IMHO.\n> \n> Opinions? Should I plow ahead, or leave this to fix after 6.5 release?\n\nEstimate of time involved to fix this? vs likelihood of someone\ntriggering the bug in production?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 6 May 1999 14:23:29 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Hashjoin status report"
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n>> Opinions? Should I plow ahead, or leave this to fix after 6.5 release?\n\n> Estimate of time involved to fix this? vs likelihood of someone\n> triggering the bug in production?\n\nI could probably get the coding done this weekend, unless something else\ncomes up to distract me. It's the question of how much testing it'd\nreceive before release that worries me...\n\nAs for the likelihood, that's hard to say. It's very easy to trigger\nthe bug as a test case. (Arrange for a hashjoin where the inner table\nhas a lot of identical rows, or at least many sets of more-than-10-\nrows-with-the-same-value-in-the-field-being-hashed-on.) In real life\nyou'd like to think that that's pretty improbable.\n\nWhat started this go-round was Contzen's report of seeing the\n\"hash table out of memory. Use -B parameter to increase buffers\"\nmessage in what was evidently a real-life scenario. So it can happen.\nDo you recall having seen many complaints about that error before?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 06 May 1999 17:03:27 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Hashjoin status report "
},
{
"msg_contents": "I said:\n> What started this go-round was Contzen's report of seeing the\n> \"hash table out of memory. Use -B parameter to increase buffers\"\n> message in what was evidently a real-life scenario. So it can happen.\n> Do you recall having seen many complaints about that error before?\n\nA little bit of poking through the mailing list archives found four\nother complaints in the past six months (scattered through the sql,\nadmin, and novices lists). There might have been more that my search\nquery missed.\n\nWhat do you think, does that reach your threshold of pain or not?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 06 May 1999 17:33:04 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Hashjoin status report "
},
{
"msg_contents": "On Thu, 6 May 1999, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> >> Opinions? Should I plow ahead, or leave this to fix after 6.5 release?\n> \n> > Estimate of time involved to fix this? vs likelihood of someone\n> > triggering the bug in production?\n> \n> I could probably get the coding done this weekend, unless something else\n> comes up to distract me. It's the question of how much testing it'd\n> receive before release that worries me...\n\nWe're looking at 3 weeks till release...I'll let you call it on this one.\nIf you feel confident about getting the bug fixed before release, with\nenough time for testing, go for it. If it makes more sense to make it an\n'untested patch', go for that one.\n\nI believe you can fix this bug, I'm just kinda nervous about adding a\ndifferent one, which I believe is your worry also, with the limited amount\nof testing possible :(\n\nMy preference would be a \"use at own risk\" patch...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 6 May 1999 22:33:38 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Hashjoin status report "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> I've committed fixes that deal with all of the coredump problems\n> I could find in nodeHash.c (there were several :-().\n> \n> But the code still has a fundamental design flaw: it uses a fixed-size\n> overflow area to hold tuples that don't fit into the hashbuckets they\n> are assigned to. This means you get \"hashtable out of memory\" errors\n> if the distribution of tuples is skewed enough, or if the number of\n> hashbuckets is too small because the system underestimated the number\n> of tuples in the relation. Better than a coredump I suppose, but still\n> very bad, especially since the optimizer likes to use hashjoins more\n> than it used to.\n> \n> What I would like to do to fix this is to store the tuples in a Portal\n> instead of in a fixed-size palloc block. While at it, I'd rip out the\n> \"relative vs. absolute address\" cruft that is in the code now.\n> (Apparently there was once some thought of using a shared memory block\n> so that multiple processes could share the work of a hashjoin. All that\n> remains is ugly, bug-prone code ...)\n\nFix it! Testing is easy...\nThough, I would use chain of big blocks for overflow area,\nnot Portal - it's too big thing to be directly used in join method.\n\nVadim\n",
"msg_date": "Fri, 07 May 1999 09:51:58 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Hashjoin status report"
},
{
    "msg_contents": "On Thu, 6 May 1999, Tom Lane wrote:\n\n> I said:\n> > What started this go-round was Contzen's report of seeing the\n> > \"hash table out of memory. Use -B parameter to increase buffers\"\n> > message in what was evidently a real-life scenario. So it can happen.\n> > Do you recall having seen many complaints about that error before?\n> \n> A little bit of poking through the mailing list archives found four\n> other complaints in the past six months (scattered through the sql,\n> admin, and novices lists). There might have been more that my search\n> query missed.\n> \n> What do you think, does that reach your threshold of pain or not?\n\nConsidering the number of instances we have out there, it sounds like a\nvery rare 'tweak' to the bug...I'd say keep it as an 'untested patch' for\nthe release, for those that wish to try it if they have a problem, and\ninclude it for v6.6 and v6.5.1 ...\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 6 May 1999 22:54:45 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Hashjoin status report "
},
{
"msg_contents": "> The Hermit Hacker <[email protected]> writes:\n> >> Opinions? Should I plow ahead, or leave this to fix after 6.5 release?\n> \n> > Estimate of time involved to fix this? vs likelihood of someone\n> > triggering the bug in production?\n> \n> I could probably get the coding done this weekend, unless something else\n> comes up to distract me. It's the question of how much testing it'd\n> receive before release that worries me...\n> \n> As for the likelihood, that's hard to say. It's very easy to trigger\n> the bug as a test case. (Arrange for a hashjoin where the inner table\n> has a lot of identical rows, or at least many sets of more-than-10-\n> rows-with-the-same-value-in-the-field-being-hashed-on.) In real life\n> you'd like to think that that's pretty improbable.\n> \n> What started this go-round was Contzen's report of seeing the\n> \"hash table out of memory. Use -B parameter to increase buffers\"\n> message in what was evidently a real-life scenario. So it can happen.\n> Do you recall having seen many complaints about that error before?\n\nWe already have a good example for this \"hash table out of memory. Use\n-B parameter to increase buffers\" syndrome in our source tree. Go\nsrc/test/bench, remove \"-B 256\" from the last line of runwisc.sh then\nrun the test. The \"-B 256\" used to not be in there. That was added by\nme while fixing the test suit and elog() (see included posting). I\ndon't see the error message in 6.4.2. I guess this is due to the\nchange of the optimizer.\n\nIMHO, we should fix this before 6.5 is out, or should change the\ndefault settings of -B to 256 or so, this may cause short of shmem,\nhowever.\n\nP.S. 
At that time I misunderstood in that I didn't have enough sort\nmemory :-<\n\n>Message-Id: <[email protected]>\n>From: Tatsuo Ishii <[email protected]>\n>To: [email protected]\n>Subject: [HACKERS] elog() and wisconsin bench test fix\n>Date: Fri, 16 Apr 1999 15:54:16 +0900\n>\n>I have modified elog() so that it uses its own pid(using getpid()) as\n>the first parameter for kill() in some cases. It used to get its own\n>pid from MyProcPid global variable. This was fine until I ran the\n>wisconsin benchmark test suit (test/bench/). In the test, postgres is\n>run as a command and MyProcPid is set to 0. As a result elog() calls\n>kill() with the first parameter being set to 0 and SIGQUIT was issued\n>to the process group, not the postgres process itself! This was why\n>/bin/sh got core dumped whenever I ran the bench test.\n>\n>Also, I fixed several bugs in the test quries.\n>\n>One thing still remains is some queries fail due to insufficient sort\n>memory. I modified the test script adding -B option. But is this\n>normal? I think not. I thought postgres should use disk files instead\n>of memory if there's enough sort buffer.\n>\n>Comments?\n>--\n>Tatsuo Ishii\n",
"msg_date": "Fri, 07 May 1999 11:31:43 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Hashjoin status report "
},
{
"msg_contents": "The Hermit Hacker <[email protected]> writes:\n> On Thu, 6 May 1999, Tom Lane wrote:\n>> What do you think, does that reach your threshold of pain or not?\n\n> Considering the number of instances we have out there, it sounds like a\n> very rare 'tweak' to the bug...I'd say keep it as an 'untested patch' for\n> the release, for those that wish to try it if they ahve a problem, and\n> include it for v6.6 and v6.5.1 ...\n\nOK, we'll deal with it as a post-release patch then. It's not like\nI haven't got anything else to do before the end of the month :-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 06 May 1999 22:36:29 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Hashjoin status report "
},
{
"msg_contents": "> The Hermit Hacker <[email protected]> writes:\n> >> Opinions? Should I plow ahead, or leave this to fix after 6.5 release?\n> \n> > Estimate of time involved to fix this? vs likelihood of someone\n> > triggering the bug in production?\n> \n> I could probably get the coding done this weekend, unless something else\n> comes up to distract me. It's the question of how much testing it'd\n> receive before release that worries me...\n> \n> As for the likelihood, that's hard to say. It's very easy to trigger\n> the bug as a test case. (Arrange for a hashjoin where the inner table\n> has a lot of identical rows, or at least many sets of more-than-10-\n> rows-with-the-same-value-in-the-field-being-hashed-on.) In real life\n> you'd like to think that that's pretty improbable.\n> \n> What started this go-round was Contzen's report of seeing the\n> \"hash table out of memory. Use -B parameter to increase buffers\"\n> message in what was evidently a real-life scenario. So it can happen.\n> Do you recall having seen many complaints about that error before?\n\nNew optimizer does more hashjoins, so we will see it more often.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 6 May 1999 23:49:20 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Hashjoin status report"
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> We already have a good example for this \"hash table out of memory. Use\n> -B parameter to increase buffers\" syndrome in our source tree. Go\n> src/test/bench, remove \"-B 256\" from the last line of runwisc.sh then\n> run the test. The \"-B 256\" used to not be in there. That was added by\n> me while fixing the test suit and elog() (see included posting). I\n> don't see the error message in 6.4.2. I guess this is due to the\n> change of the optimizer.\n\n> IMHO, we should fix this before 6.5 is out, or should change the\n> default settings of -B to 256 or so, this may cause short of shmem,\n> however.\n\nIt's fixed --- do you want to remove -B 256 from the test suite?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 23 May 1999 14:06:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Hashjoin status report "
}
] |
[
{
    "msg_contents": "\nAm I right in saying that the -o and -D arguments to pg_dump cannot work\ntogether? Any chance of this getting fixed?\n\nOtherwise is there any other way of deleting a column from a table\nwhilst retaining oids? In general there seem to be problems with\nvarious schema changes that you may want to do if you need to retain\noids. Various SELECT INTO options don't work any more unless there is\nsome way to set the oid in conjunction with named fields (like the -D\noption).\n",
"msg_date": "Fri, 07 May 1999 03:36:59 +0000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_dump problem?"
},
{
"msg_contents": "Hi!\n\nI'm trying to dump and restore my database which is a 6.5 May 2nd\nsnapshot, but psql is barfing on pg_dump's output. Naturally I find that\nquite disturbing! I'd like to find out how I can salvage my data,\nbecause right now I havn't got a way of backing it up properly. pg_dump\n-D |psql can re-insert my data, but with the loss of oids, and my schema\nrelies on oids. If anyone wants the full pg_dump data let me know.\npg_dump -o |psql results in the errors.....\n\nThe first one, it looks\n\nCOPY \"urllink\" WITH OIDS FROM stdin;\nERROR: pg_atoi: error in \"http://www.photogs.com/bwworld/f5.html\":\ncan't parse\n\"http://www.photogs.com/bwworld/f5.html\"\nPQendcopy: resetting connection\n\nThis was caused by the following input\nCOPY \"urllink\" WITH OIDS FROM stdin;\n24265 \\N Review of Nikon F5 \\N \\N \\N 24065 \nhttp://www.photogs.com/bwworld/f5.html t \n\n\nIt looks like maybe postgres is expecting an integer and getting a\nstring maybe?\n\nOne thing I did which was a little unusual is that I did an ALTER TABLE\nfoo ADD COLUMN, but I should have said ALTER TABLE foo* ADD COLUMN to\nget the column on inherited attributes. The only solution I could think\nof was to go and add the attribute to all the sub-classes too. This\nseemed to work (is this what I should have done?), but I don't know if\nthis might be related to this problem. Maybe postgres is confused now\nabout column orders?? So I wanted desperately to do a pg_dump -D -o, but\n-D stops -o from working (Yuk! This really need to be fixed!)\n\n(Please give us DROP COLUMN soon! 
:-)\n\n\nThe other error looks to be something to do with views...\n\nCREATE RULE \"_RETproductv\" AS ON SELECT TO \"productv\" WHERE DO INSTEAD\nSELECT \"\noid\" AS \"oidv\", \"type\", \"title\", \"summary\", \"body\", \"image\", \"category\",\n\"mfrcod\ne\", \"mfr\", \"costprice\", \"taxrate\", \"profit\", \"rrprice\", \"taxrate\" *\n\"costprice\"\nAS \"tax\", \"costprice\" + \"profit\" AS \"exsaleprice\", \"costprice\" +\n\"profit\" + \"tax\nrate\" * \"costprice\" AS \"saleprice\" FROM \"product\";\nERROR: parser: parse error at or near \"do\"\nCREATE RULE \"_RETorderitemv\" AS ON SELECT TO \"orderitemv\" WHERE DO\nINSTEAD SELE\nCT \"oid\" AS \"oidv\", \"product\", \"webuser\", \"quantity\", \"price\",\n\"taxfree\", \"order\nstatus\", \"orderdatetime\", \"shipdatetime\", \"price\" * \"quantity\" AS\n\"totalprice\" F\nROM \"orderitem\";\nERROR: parser: parse error at or near \"do\"\n",
"msg_date": "Fri, 07 May 1999 04:54:51 +0000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re:pg_dump barfs?"
},
{
"msg_contents": "\nAs a follow-up to this, I tried creating a new database from the\noriginal CREATE TABLE statements, with the additional field added to the\nCREATE TABLE which I had previously used an ALTER TABLE to add.\n\nI found that the fields came out in a different order when I do a SELECT\n* FROM urllink.\n\nThis re-enforces my theory that postgres is confused about field orders,\nand that there is a bad interaction between ALTER TABLE ADD COLUMN and\nany database use which assumes a particular column ordering. In my\nopinion, any useful SQL must specify columns in order to be reliable\n(even COPY). Unfortunately, COPY does not allow you to specify column\nnames, and INSERT does not allow you to retain oids, thus I am screwed\nright now. Any suggestions on how to salvage my data still welcome :-).\n\n\nChris Bitmead wrote:\n> \n> Hi!\n> \n> I'm trying to dump and restore my database which is a 6.5 May 2nd\n> snapshot, but psql is barfing on pg_dump's output. Naturally I find that\n> quite disturbing! I'd like to find out how I can salvage my data,\n> because right now I havn't got a way of backing it up properly. pg_dump\n> -D |psql can re-insert my data, but with the loss of oids, and my schema\n> relies on oids. 
If anyone wants the full pg_dump data let me know.\n> pg_dump -o |psql results in the errors.....\n> \n> The first one, it looks\n> \n> COPY \"urllink\" WITH OIDS FROM stdin;\n> ERROR: pg_atoi: error in \"http://www.photogs.com/bwworld/f5.html\":\n> can't parse\n> \"http://www.photogs.com/bwworld/f5.html\"\n> PQendcopy: resetting connection\n> \n> This was caused by the following input\n> COPY \"urllink\" WITH OIDS FROM stdin;\n> 24265 \\N Review of Nikon F5 \\N \\N \\N 24065\n> http://www.photogs.com/bwworld/f5.html t\n> \n> It looks like maybe postgres is expecting an integer and getting a\n> string maybe?\n> \n> One thing I did which was a little unusual is that I did an ALTER TABLE\n> foo ADD COLUMN, but I should have said ALTER TABLE foo* ADD COLUMN to\n> get the column on inherited attributes. The only solution I could think\n> of was to go and add the attribute to all the sub-classes too. This\n> seemed to work (is this what I should have done?), but I don't know if\n> this might be related to this problem. Maybe postgres is confused now\n> about column orders?? So I wanted desperately to do a pg_dump -D -o, but\n> -D stops -o from working (Yuk! This really need to be fixed!)\n> \n> (Please give us DROP COLUMN soon! 
:-)\n> \n> The other error looks to be something to do with views...\n> \n> CREATE RULE \"_RETproductv\" AS ON SELECT TO \"productv\" WHERE DO INSTEAD\n> SELECT \"\n> oid\" AS \"oidv\", \"type\", \"title\", \"summary\", \"body\", \"image\", \"category\",\n> \"mfrcod\n> e\", \"mfr\", \"costprice\", \"taxrate\", \"profit\", \"rrprice\", \"taxrate\" *\n> \"costprice\"\n> AS \"tax\", \"costprice\" + \"profit\" AS \"exsaleprice\", \"costprice\" +\n> \"profit\" + \"tax\n> rate\" * \"costprice\" AS \"saleprice\" FROM \"product\";\n> ERROR: parser: parse error at or near \"do\"\n> CREATE RULE \"_RETorderitemv\" AS ON SELECT TO \"orderitemv\" WHERE DO\n> INSTEAD SELE\n> CT \"oid\" AS \"oidv\", \"product\", \"webuser\", \"quantity\", \"price\",\n> \"taxfree\", \"order\n> status\", \"orderdatetime\", \"shipdatetime\", \"price\" * \"quantity\" AS\n> \"totalprice\" F\n> ROM \"orderitem\";\n> ERROR: parser: parse error at or near \"do\"\n",
"msg_date": "Fri, 07 May 1999 05:14:45 +0000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re:pg_dump barfs?"
},
{
"msg_contents": "\nOh yeah, I'm using a fairly complex inheritance hierarchy, so it may be\nrelated to a difference between the order COPY may output fields and the\norder fields may be deemed when re-created via a CREATE TABLE,\nespecially with regard to inheritance and possibly ALTER TABLE ADD\nCOLUMN.\n\nBecause of the complex inheritance, I can't just reorder the columns in\nthe CREATE TABLE of the pg_dump, because it is mostly postgresql which\nis determining field order somehow according to inheritance. In general,\nthe anonymous field nature of COPY seems particularly bad in conjunction\nwith inheritance where field order is determined by the database rather\nthan the user, especially since it seems postgresql doesn't necessarily\nre-create the same order after a pg_dump.\n\nI'm pretty sure that the ALTER TABLE ADD COLUMN is still part of the\nproblem though, because if I re-create the schema from scratch I can\ndump and restore properly. It seems to be my use of ADD COLUMN which has\nmade postgres inconsistent in its column orderings.\n\nChris Bitmead wrote:\n> \n> As a follow-up to this, I tried creating a new database from the\n> original CREATE TABLE statements, with the additional field added to the\n> CREATE TABLE which I had previously used an ALTER TABLE to add.\n> \n> I found that the fields came out in a different order when I do a SELECT\n> * FROM urllink.\n> \n> This re-enforces my theory that postgres is confused about field orders,\n> and that there is a bad interaction between ALTER TABLE ADD COLUMN and\n> any database use which assumes a particular column ordering. In my\n> opinion, any useful SQL must specify columns in order to be reliable\n> (even COPY). Unfortunately, COPY does not allow you to specify column\n> names, and INSERT does not allow you to retain oids, thus I am screwed\n> right now. 
Any suggestions on how to salvage my data still welcome :-).\n> \n> Chris Bitmead wrote:\n> >\n> > Hi!\n> >\n> > I'm trying to dump and restore my database which is a 6.5 May 2nd\n> > snapshot, but psql is barfing on pg_dump's output. Naturally I find that\n> > quite disturbing! I'd like to find out how I can salvage my data,\n> > because right now I havn't got a way of backing it up properly. pg_dump\n> > -D |psql can re-insert my data, but with the loss of oids, and my schema\n> > relies on oids. If anyone wants the full pg_dump data let me know.\n> > pg_dump -o |psql results in the errors.....\n> >\n> > The first one, it looks\n> >\n> > COPY \"urllink\" WITH OIDS FROM stdin;\n> > ERROR: pg_atoi: error in \"http://www.photogs.com/bwworld/f5.html\":\n> > can't parse\n> > \"http://www.photogs.com/bwworld/f5.html\"\n> > PQendcopy: resetting connection\n> >\n> > This was caused by the following input\n> > COPY \"urllink\" WITH OIDS FROM stdin;\n> > 24265 \\N Review of Nikon F5 \\N \\N \\N 24065\n> > http://www.photogs.com/bwworld/f5.html t\n> >\n> > It looks like maybe postgres is expecting an integer and getting a\n> > string maybe?\n> >\n> > One thing I did which was a little unusual is that I did an ALTER TABLE\n> > foo ADD COLUMN, but I should have said ALTER TABLE foo* ADD COLUMN to\n> > get the column on inherited attributes. The only solution I could think\n> > of was to go and add the attribute to all the sub-classes too. This\n> > seemed to work (is this what I should have done?), but I don't know if\n> > this might be related to this problem. Maybe postgres is confused now\n> > about column orders?? So I wanted desperately to do a pg_dump -D -o, but\n> > -D stops -o from working (Yuk! This really need to be fixed!)\n> >\n> > (Please give us DROP COLUMN soon! 
:-)\n> >\n> > The other error looks to be something to do with views...\n> >\n> > CREATE RULE \"_RETproductv\" AS ON SELECT TO \"productv\" WHERE DO INSTEAD\n> > SELECT \"\n> > oid\" AS \"oidv\", \"type\", \"title\", \"summary\", \"body\", \"image\", \"category\",\n> > \"mfrcod\n> > e\", \"mfr\", \"costprice\", \"taxrate\", \"profit\", \"rrprice\", \"taxrate\" *\n> > \"costprice\"\n> > AS \"tax\", \"costprice\" + \"profit\" AS \"exsaleprice\", \"costprice\" +\n> > \"profit\" + \"tax\n> > rate\" * \"costprice\" AS \"saleprice\" FROM \"product\";\n> > ERROR: parser: parse error at or near \"do\"\n> > CREATE RULE \"_RETorderitemv\" AS ON SELECT TO \"orderitemv\" WHERE DO\n> > INSTEAD SELE\n> > CT \"oid\" AS \"oidv\", \"product\", \"webuser\", \"quantity\", \"price\",\n> > \"taxfree\", \"order\n> > status\", \"orderdatetime\", \"shipdatetime\", \"price\" * \"quantity\" AS\n> > \"totalprice\" F\n> > ROM \"orderitem\";\n> > ERROR: parser: parse error at or near \"do\"\n",
"msg_date": "Fri, 07 May 1999 07:20:19 +0000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re:pg_dump barfs?"
},
{
"msg_contents": "Thus spake Chris Bitmead\n> \n> Am I right in saying that the -o and -D arguments to pg_dump cannot work\n> together? Any chance of this getting fixed?\n\nI suspect that the problem is that you can't insert an OID into the\nsystem using standard SQL statements but I'm not sure about that. I\ndo know that the following crashed the backend.\n\ndarcy=> insert into x (oid, n) values (1234567, 123.456);\n\n> Otherwise is there any other way of deleting a column from a table\n> whilst retaining oids? In general there seems there are problems with\n> various scheme changes that you may want to do if you need to retain\n> oids. Various SELECT INTO options don't work any more unless there is\n> some way to set the oid in conjunction with named fields (like the -D\n> option).\n\nUltimately I think you need to get away from using OIDs in your top\nlevel applications. Depending on them causes these kinds of problems\nand moves you farther from standard SQL in your app. Use of the OID\n(IMNSHO) should be limited to temporary tracking of rows and even then\nit should be in middle level code, not the top level application. I\noffer the use of OIDs in pg.py in the Python interface as an example\nof middle code.\n\nI suggest that you replace the use of OID in your database with a serial\ntype primary key. That allows you to dump and reload without losing\nthe information and it performs the same function as OID in your code.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Fri, 7 May 1999 07:48:32 -0400 (EDT)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump problem?"
},
{
"msg_contents": "> Ultimately I think you need to get away from using OIDs in your \n> top level applications.\n\nI don't give a rip about standard SQL. What I care about is real object\ndatabases. A fundamental principle of object theory is that objects have\na unique identity. In C++ it is a pointer. In other languages it is a\nreference. In an object database it is an oid. In the NSHO of a fellow\ncalled Stonebraker, you should be using oids for everything.\n\nBTW, I was looking through the original 4.2 docs, and I noted that in\nPostgres 4.2 every class had not only an oid, but an implicit classoid,\nallowing you to identify the type of an object. What happened to this?\nIt would solve just a ton of problems I have, because I'm using a very\nOO data model. It sounds like Postgres used to be a real object\ndatabase. Now everybody seems to want to use it as yet another sucky rdb\nand a lot of essential OO features have undergone bit-rot. What happened\nto building a better mouse trap? \n\nHave a read of shared_object_hierarchy.ps in the original postgres doco\nto see how things should be done. Sorry for the flames, but I used to\nwork for an ODBMS company and I'm passionate about the benefits of\nproperly supporting objects.\n\n Depending on them causes these kinds of problems\n> and moves you farther from standard SQL in your app. Use of the OID\n> (IMNSHO) should be limited to temporary tracking of rows and even then\n> it should be in middle level code, not the top level application. I\n> offer the use of OIDs in pg.py in the Python interface as an example\n> of middle code.\n> \n> I suggest that you replace the use of OID in your database with a serial\n> type primary key. That allows you to dump and reload without losing\n> the information and it performs the same function as OID in your code.\n> \n> --\n> D'Arcy J.M. 
Cain <darcy@{druid|vex}.net> | Democracy is three wolves\n> http://www.druid.net/darcy/ | and a sheep voting on\n> +1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n\n-- \nChris Bitmead\nhttp://www.bigfoot.com/~chris.bitmead\nmailto:[email protected]\n",
"msg_date": "Fri, 07 May 1999 12:53:20 +0000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] pg_dump problem?"
},
{
"msg_contents": "Then <[email protected]> spoke up and said:\n> I don't give a rip about standard SQL. What I care about is real object\n> databases. A fundamental principle of object theory is that objects have\n> a unique identity. In C++ it is a pointer. In other languages it is a\n> reference. In an object database it is an oid. In the NSHO of a fellow\n> called Stonebraker, you should be using oids for everything.\n\nUnfortunately, the implementation within PostgreSQL suffered from both\nbugs and severe logic errors. Further there was no facility for\nmanipulating OIDs (can you say dump/reload?). Thanks to the efforts\nof the PostgreSQL community, many of these items have been fixed, but\nsometimes at a cost to OO.\n\n> BTW, I was looking through the original 4.2 docs, and I noted that in\n> Postgres 4.2 every class had not only an oid, but an implicit classoid,\n> allowing you to identify the type of an object. What happened to this?\n> It would solve just a ton of problems I have, because I'm using a very\n> OO data model. It sounds like Postgres used to be a real object\n> database. Now everybody seems to want to use it as yet another sucky rdb\n> and a lot of essential OO features have undergone bit-rot. What happened\n> to building a better mouse trap? \n\nWe (not really me, but the others who are actually writing code) are\nworking very hard to make PostgreSQL SQL92 compliant and stable.\nFurther, more features are being added all the time. If you want a\nparticular feature set, then get off your butt and contribute some\ncode. When I wanted PostgreSQL to work on my AViiON, I did the\nnecessary work and contributed it back to the community.\n\n> Have a read of shared_object_hierarchy.ps in the original postgres doco\n> to see how things should be done. Sorry for the flames, but I used to\n> work for an ODBMS company and I'm passionate about the benefits of\n> properly supporting objects.\n\nCool. Take your experience and write some code. 
BTW, you might want\nto notice that document was never a description of how things *really*\nworked in PostgreSQL, only how it was *supposed* to work. We\ninherited some seriously broken, dysfunctional code and have done some\nbeautiful work with it (again, not actually me here). It's a work in\nprogress, and therefore should be looked at by the users as \na) needing work, and\nb) an opportunity to excel, by showing off your talents as you submit\nnew code.\n\n-- \n=====================================================================\n| JAVA must have been developed in the wilds of West Virginia. |\n| After all, why else would it support only single inheritance?? |\n=====================================================================\n| Finger [email protected] for my public key. |\n=====================================================================",
"msg_date": "7 May 1999 09:47:54 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump problem?"
},
{
"msg_contents": "\"D'Arcy\" \"J.M.\" Cain <[email protected]> writes:\n> Thus spake Chris Bitmead\n>> Am I right in saying that the -o and -D arguments to pg_dump cannot work\n>> together? Any chance of this getting fixed?\n\n> I suspect that the problem is that you can't insert an OID into the\n> system using standard SQL statements but I'm not sure about that.\n\nSince COPY WITH OIDS works, I think there's no fundamental reason why\nan INSERT couldn't specify a value for the OID field. Certainly,\npersuading pg_dump to do this would be pretty trivial --- the only\nquestion is whether the backend will accept the resulting script.\nUnfortunately you say:\n\n> I do know that the following crashed the backend.\n> darcy=> insert into x (oid, n) values (1234567, 123.456);\n\nThis is definitely a bug --- it should either do it or give an\nerror message...\n\n> Ultimately I think you need to get away from using OIDs in your top\n> level applications.\n\nI concur fully with this advice. I think it's OK to use an OID as\na working identifier for a record; for example, my apps do lots\nof this:\n\tSELECT oid,* FROM table WHERE ...;\n\tUPDATE table SET ... WHERE oid = 12345;\nBut the OID will be forgotten at app shutdown. I never ever use an\nOID as a key referred to by another database entry (I use serial columns\nfor unique keys). So, I don't have to worry about preserving OIDs\nacross database reloads.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 07 May 1999 10:18:53 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump problem? "
},
{
"msg_contents": "Chris Bitmead <[email protected]> writes:\n> As a follow-up to this, I tried creating a new database from the\n> original CREATE TABLE statements, with the additional field added to the\n> CREATE TABLE which I had previously used an ALTER TABLE to add.\n\n> I found that the fields came out in a different order when I do a SELECT\n> * FROM urllink.\n\n> This re-enforces my theory that postgres is confused about field orders,\n\nI'm actually a tad surprised that ALTER TABLE ADD COLUMN works at all in\nan inheritance context (or maybe the true meaning of your report is that\nit doesn't work). See, ADD COLUMN always wants to *add* the column, at\nthe end of the list of columns for your table. What you had was\nsomething like this:\n\n\tTable\t\tColumns\n\n\tParent\t\tA B C\n\tChild\t\tA B C D E\n\nThen you did ALTER Parent ADD COLUMN F:\n\n\tParent\t\tA B C F\n\tChild\t\tA B C D E\n\nOoops, you should have done ALTER Parent*, so you tried to recover by\naltering the child separately with ALTER Child ADD COLUMN F:\n\n\tParent\t\tA B C F\n\tChild\t\tA B C D E F\n\nDo you see the problem here? Column F is not correctly inherited,\nbecause it is not in the same position in parent and child. If you\ndo something like \"SELECT F FROM Parent*\" you will get D data out of\nthe child table (or possibly even a coredump, if F and D are of\ndifferent datatypes) because the inheritance code presumes that F's\ndefinition in Parent applies to all its children as well. And the\ncolumn's position is part of its definition.\n\nI'd say it is a bug that ALTER TABLE allowed you to do an ADD COLUMN\n(or any other mod for that matter) on Parent without also changing its\nchildren to match. I am not sure whether ADD COLUMN is capable of\nreally working right in an inheritance scenario; it'd have to put the\nnew column in the middle of the existing columns for child tables,\nand I don't know how hard that is. 
But the system should not accept\na command that makes the parent and child tables inconsistent.\n\nAnyway, to get back to your immediate problem of rebuilding your\ndatabase, the trouble is that once you recreate Parent and Child\nusing correct declarations, they will look like\n\n\tParent\t\tA B C F\n\tChild\t\tA B C F D E\n\nand since the column order of Child is different from before,\na plain COPY won't reload it correctly (neither will an INSERT\nwithout explicit column labels). What I'd suggest doing is\ndumping the old DB with pg_dump -o and then using a sed script\nor a quick little perl program to reorder the fields in the COPY\ndata before you reload.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 07 May 1999 10:39:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re:pg_dump barfs? "
},
{
"msg_contents": "[email protected] wrote:\n\n> Cool. Take your experience and write some code. BTW, you might want\n> to notice that document was never a description of how things *really*\n> worked in PostgreSQL, only how it was *supposed* to work. \n\nYeah, sorry I didn't want to be critical. I'm grateful of all the great\nwork that's been done to make it a working stable product. I just wanted\nto raise some awareness of what Postgres was originally meant to be.\nI've been following the research being done at Berkeley in early times\nalways hoping that some of the OO features would mature more.\n\nI will try and come to terms with the code to try and add some of these\nfeatures myself, I've just spent a few hours browsing the code, but\nthere is certainly a big learning curve there, especially as the doco is\nminimal. But I'll see what I can do.\n\n> We\n> inherited some seriously broken, dysfunctional code and have done some\n> beautiful work with it (again, not actually me here). It's a work in\n> progress, and therefore should be looked at by the users as\n> a) needing work, and\n> b) an opportunity to excell, by showing off your talents as you submit\n> new code.\n",
"msg_date": "Fri, 07 May 1999 15:12:04 +0000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] pg_dump problem?"
},
{
"msg_contents": "Tom Lane wrote:\n\n> Ooops, you should have done ALTER Parent*, so you tried to recover by\n> altering the child separately with ALTER Child ADD COLUMN F:\n> \n> Parent A B C F\n> Child A B C D E F\n> \n> Do you see the problem here? Column F is not correctly inherited,\n> because it is not in the same position in parent and child. If you\n> do something like \"SELECT F FROM Parent*\" you will get D data out of\n> the child table (or possibly even a coredump, if F and D are of\n> different datatypes) because the inheritance code presumes that F's\n> definition in Parent applies to all its children as well. \n\nWell, in my brief testing, it appears as if what I did actually works as\nfar as having a working database is concerned. It seemed as if SELECT F\nFROM Parent* actually did the right thing. Sort-of anyway. If I didn't\nadd F to the child, then F seemed to be some random number on a SELECT.\n\n> And the\n> column's position is part of its definition.\n> \n> I'd say it is a bug that ALTER TABLE allowed you to do an ADD COLUMN\n> (or any other mod for that matter) on Parent without also changing its\n> children to match. \n\nI tend to agree. I'd say that you should say table* if table has\nchildren.\n\n> I am not sure whether ADD COLUMN is capable of\n> really working right in an inheritance scenario; it'd have to put the\n> new column in the middle of the existing columns for child tables,\n> and I don't know how hard that is. \n\nI'm pretty sure it does the right thing already, but I havn't done much\ntesting.\n\n> What I'd suggest doing is\n> dumping the old DB with pg_dump -o and then using a sed script\n> or a quick little perl program to reorder the fields in the \n> COPY data before you reload.\n\nOk, I tried that and it worked.\n\nAny thoughts on the other error mesg I had that seemed to be about\nviews? I doesn't seem to have caused any problem.\n\n-- \nChris Bitmead\nhttp://www.bigfoot.com/~chris.bitmead\nmailto:[email protected]\n",
"msg_date": "Fri, 07 May 1999 15:21:22 +0000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re:pg_dump barfs?"
},
{
"msg_contents": "Then <[email protected]> spoke up and said:\n> I will try and come to terms with the code to try and add some of these\n> features myself, I've just spent a few hours browsing the code, but\n> there is certainly a big learning curve there, especially as the doco is\n> minimal. But I'll see what I can do.\n\nGreat! It's wonderful to see new talent coming on board!\n\n-- \n=====================================================================\n| JAVA must have been developed in the wilds of West Virginia. |\n| After all, why else would it support only single inheritance?? |\n=====================================================================\n| Finger [email protected] for my public key. |\n=====================================================================",
"msg_date": "7 May 1999 11:25:35 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump problem?"
},
{
"msg_contents": "I wrote:\n> \"D'Arcy\" \"J.M.\" Cain <[email protected]> writes:\n>> I do know that the following crashed the backend.\n>> darcy=> insert into x (oid, n) values (1234567, 123.456);\n\n> This is definitely a bug --- it should either do it or give an\n> error message...\n\nActually, with recent sources you get:\n\nregression=> insert into x (oid, n) values (1234567, 123.456);\nERROR: Cannot assign to system attribute 'oid'\n\nI had put in a patch to defend against \"UPDATE table SET oid = ...\",\nand it evidently catches the INSERT case too.\n\nI am not sure how much work it would take to actually accept an INSERT/\nUPDATE that sets the OID field. There is a coredump in the parser if\nyou take out the above check; it wouldn't be hard to fix that coredump\nbut I haven't looked to see what else may lurk beyond it.\n(preprocess_targetlist is a danger zone that comes to mind.)\n\nAnyway, this definitely looks like a \"new feature\" that is not going to\nget done for 6.5. Perhaps someone will get interested in making it work\nfor 6.6 or later.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 07 May 1999 17:16:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump problem? "
},
{
"msg_contents": "> and since the column order of Child is different from before,\n> a plain COPY won't reload it correctly (neither will an INSERT\n> without explicit column labels). What I'd suggest doing is\n> dumping the old DB with pg_dump -o and then using a sed script\n> or a quick little perl program to reorder the fields in the COPY\n> data before you reload.\n\nGood summary. Another idea is to create temp uninherited copies of the\ntables using SELECT A,B INTO TABLE new FROM ... and make the orderings\nmatch, delete the old tables, recreate with inheritance, and do INSERT\n.. SELECT, except you say you can't load oids. Oops, that doesn't help.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 7 May 1999 18:58:08 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re:pg_dump barfs?"
},
{
"msg_contents": "> I will try and come to terms with the code to try and add some of these\n> features myself, I've just spent a few hours browsing the code, but\n> there is certainly a big learning curve there, especially as the doco is\n> minimal. But I'll see what I can do.\n> \n> > We\n> > inherited some seriously broken, dysfunctional code and have done some\n> > beautiful work with it (again, not actually me here). It's a work in\n> > progress, and therefore should be looked at by the users as\n> > a) needing work, and\n> > b) an opportunity to excell, by showing off your talents as you submit\n> > new code.\n\nMost of us are not walking away from OID's. We want them to work 100%\nof the time. Also, make sure you read the backend flowchard and\ndevelopers FAQ on the docs page.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 7 May 1999 18:59:34 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump problem?"
},
{
"msg_contents": "Using May 2nd snapshot...\n\nIf I do a pg_dump <database> | psql <newdatabase>\n\nAny datetime fields are different. I think it's a timezone problem. I\nthink pg_dump is dumping in local time, and psql is interpreting it as\nGMT.\n\nThe dump includes the timezone as part of the dump, so I'm guessing that\nthe problem is on the part of psql not noticing that. I'm using the\nAustralian \"EST\" zone if that's useful.\n\nIs there an immediate work-around?\n",
"msg_date": "Sat, 08 May 1999 09:12:20 +0000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Date/Time Flaw in pg_dump ?"
},
{
"msg_contents": "\nI guess one thing I'm frustrated about is that I'm ready willing and\nable to write an ODMG compliant interface, which is chiefly a client\nside exercise, but I've been kind of hanging out looking for postgres to\nget one or two backend features necessary to make that happen. Ok, I'm\ngoing to try and figure out how to do it myself.\n\nQ1. I need to have a virtual field which describes the class membership. \n\nSo I want to be able to find the class name of various objects by doing\nsomething like\nSELECT relname FROM person*, pg_class where person.classoid =\npg_class.oid;\nrelname \n-------------------------------\nperson\nemployee\nstudent\nempstudent\nperson\nstudent\n(6 rows)\n\nSo the critical thing I need here is the imaginary field \"classoid\".\nPostgres knows obviously which relation a particular object belongs to.\nThe question is how to turn this knowledge into an imaginary field that\ncan be queried.\n\nCan anybody point me to which areas of the backend I need to be looking\nto implement this? I see that there is a data structure called\n\"Relation\" which has an oid field which is the thing I think I need to\nbe grabbing, but I'm not sure how to make this all come together. \n\n-- \nChris Bitmead\nhttp://www.bigfoot.com/~chris.bitmead\nmailto:[email protected]\n",
"msg_date": "Sat, 08 May 1999 09:32:22 +0000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "ODMG interface"
},
{
"msg_contents": "Chris Bitmead <[email protected]> writes:\n> Q1. I need to have a virtual field which describes the class membership. \n\n> So I want to be able to find the class name of various objects by doing\n> something like\n> SELECT relname FROM person*, pg_class where person.classoid =\n> pg_class.oid;\n\nI am not sure what you mean by \"class membership\" here. There is type\ninformation for each column of every relation in pg_attribute and\npg_type. There is also a pg_type entry for each relation, which can be\nthought of as the type of the rows of the relation. The query you show\nabove looks like maybe what you really want to get at is the inheritance\nhierarchy between relations --- if so see pg_inherits.\n\nI suspect that whatever you are looking for is already available in the\nsystem tables, but I'm not quite certain about what semantics you want.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 08 May 1999 11:26:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ODMG interface "
},
{
"msg_contents": "\nWhat I want is that when I get objects back from multiple relations\n(usually because of inheritance using \"*\" although I guess it could be a\nunion too), is to know the name of the relation (or class) from which\nthat object came.\n\nSo if I do a select * from person*, some of the resulting rows will have\ncome from person objects, but some may have come from employee objects,\nothers from the student relation.\n\nSo the query...\nSELECT relname FROM person*, pg_class where person.classoid =\npg_class.oid;\n\ndoes a join between a particular inheritance hierarchy (person in this\ncase), and the pg_class system table which contains a string name for\neach relation.\n\nIn an ODMG interface library, what would really happen is at startup I\nwould find all the classes available from the system tables and cache\ntheir structure. Then some application using the ODMG library would, \nlet's say it's C++, execute something like...\n\nList<Person> = query(\"SELECT oid, classoid, * FROM person*\");\nand get a C++ array of objects, some of which may be Student objects\nsome of which may Employee objects etc. The internals of the ODMG\nlibrary would figure out which results were students and which were\nemployees by the classoid attribute of each resulting row and\ninstantiate the appropriate type of class.\n\nThe way I think this should probably be done is by having each row in\nthe entire database have an imaginary attribute called classoid which is\nthe oid of the class to which that object belongs.\n\nIn my own application right now, I actually have a real attribute called\n(class oid) in a common base class, which is a foreign key into the\npg_class system table. This is wasteful and potentially error prone\nthough, since postgres knows which tables the rows came from (since each\nrelation is stored in a different file).\n\nI don't think this can be done now within postgresql. 
Do you see what I\nmean?\n\nTom Lane wrote:\n> I am not sure what you mean by \"class membership\" here. There is type\n> information for each column of every relation in pg_attribute and\n> pg_type. There is also a pg_type entry for each relation, which can be\n> thought of as the type of the rows of the relation. The query you show\n> above looks like maybe what you really want to get at is the inheritance\n> hierarchy between relations --- if so see pg_inherits.\n> \n> I suspect that whatever you are looking for is already available in the\n> system tables, but I'm not quite certain about what semantics you want.\n> \n> regards, tom lane\n\n-- \nChris Bitmead\nhttp://www.bigfoot.com/~chris.bitmead\nmailto:[email protected]\n",
"msg_date": "Sat, 08 May 1999 16:03:26 +0000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] ODMG interface"
},
{
"msg_contents": "> Any datetime fields are different. I think it's a timezone problem.\n> The dump includes the timezone as part of the dump, so I'm guessing that\n> the problem is on the part of psql not noticing that. I'm using the\n> Australian \"EST\" zone if that's useful.\n> Is there an immediate work-around?\n\nYeah, move to the east coast of the US :)\n\nEST is the US-standard designation for \"Eastern Standard Time\" (5\nhours off of GMT). If you compile your backend with the flag\n-DUSE_AUSTRALIAN_RULES=1 you will instead get this to match the\nAustralian convention, but will no longer handle the US timezone of\ncourse.\n\nThis is used in backend/utils/adt/dt.c, and is done with an #if rather\nthan an #ifdef. Perhaps I should change that...\n\nbtw, Australia has by far the largest \"timezone space\" I've ever seen!\nThere are 17 Australia-specific timezones supported by the Postgres\nbackend. I know it's a big place, but the \"timezone per capita\" leads\nthe world ;)\n\n - Tom\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Sat, 08 May 1999 16:55:43 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Date/Time Flaw in pg_dump ?"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Chris Bitmead <[email protected]> writes:\n> > Q1. I need to have a virtual field which describes the class membership.\n> \n> > So I want to be able to find the class name of various objects by doing\n> > something like\n> > SELECT relname FROM person*, pg_class where person.classoid =\n> > pg_class.oid;\n> \n> I am not sure what you mean by \"class membership\" here. There is type\n> information for each column of every relation in pg_attribute and\n> pg_type. There is also a pg_type entry for each relation, which can be\n> thought of as the type of the rows of the relation. The query you show\n> above looks like maybe what you really want to get at is the inheritance\n> hierarchy between relations --- if so see pg_inherits.\n> \n> I suspect that whatever you are looking for is already available in the\n> system tables, but I'm not quite certain about what semantics you want.\n\nThere is currently no (fast) way to go from oid to the relation\ncontaining \nthat oid.\n\nthe only way seems to find all relations that inherit from the base and\ndo\n\nselect * from base_or_derived_relation where oid=the_oid_i_search_for;\n\nuntil you get back the row.\n\nI would propose a pseudo column (or funtion) so that one could do:\n\nselect rowrelname() as class_name, * from person*;\n\nand then work from there on.\nUnfortunately I am too ignorant on the internals to implement it ;(\n\n-------------\nHannu\n",
"msg_date": "Sun, 09 May 1999 13:56:24 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ODMG interface"
},
{
"msg_contents": "\n> until you get back the row.\n> \n> I would propose a pseudo column (or funtion) so that one could do:\n> \n> select rowrelname() as class_name, * from person*;\n> \n> and then work from there on.\n\nBasicly that's what I want to implement, except that instead of\nreturning the relname() I think the rel_classoid (oid of pg_class) is a\nbetter choice. Then obtaining the relname a simple join with pg_class.\n\n-- \nChris Bitmead\nhttp://www.bigfoot.com/~chris.bitmead\nmailto:[email protected]\n",
"msg_date": "Sun, 09 May 1999 12:03:16 +0000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] ODMG interface"
},
{
"msg_contents": "Chris Bitmead <[email protected]> writes:\n>> I would propose a pseudo column (or funtion) so that one could do:\n>> select rowrelname() as class_name, * from person*;\n>> and then work from there on.\n\n> Basicly that's what I want to implement, except that instead of\n> returning the relname() I think the rel_classoid (oid of pg_class) is a\n> better choice. Then obtaining the relname a simple join with pg_class.\n\nOK, I'm starting to get the picture, and I agree there's no way to get\nthe system to give you this info now. (You could store a user field\nthat provides the same info, of course, but that's kind of ugly.)\n\nI think you'd have to implement it as a system attribute (like oid,\nxid, etc) rather than as a function, because in a join scenario you\nneed to be able to indicate which tables you are talking about.\nFor example, to find men with wives named Sheila in your database:\n\nselect p1.classoid, p1.firstname, p1.lastname\nfrom person* as p1, person* as p2\nwhere p1.spouse = p2.oid and p2.firstname = 'Sheila';\n\nIf it were \"select classoid(), ...\" then you'd have no way to indicate\nwhich person's classoid you wanted.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 09 May 1999 13:21:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ODMG interface "
},
{
"msg_contents": "\nAdded to TODO list.\n\n\n\n> \n> Am I right in saying that the -o and -D arguments to pg_dump cannot work\n> together? Any chance of this getting fixed?\n> \n> Otherwise is there any other way of deleting a column from a table\n> whilst retaining oids? In general there seems there are problems with\n> various scheme changes that you may want to do if you need to retain\n> oids. Various SELECT INTO options don't work any more unless there is\n> some way to set the oid in conjunction with named fields (like the -D\n> option).\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 10 May 1999 13:24:02 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump problem?"
},
{
"msg_contents": "\nHi guys,\n\nI've had a long discussion with the timezone people about this time zone\nabbreviation issue.\n\nIn their words, the way Postgres works is broken :-(\n\nWhile to us mere mortals it may appear sensible that zone designations\nare unique, this is apparently not the case, and this is not unique to\nAustralia. Any code which relies on them being unique is designated\n\"broken\".\n\nI argued strongly that timezones abbreviations should be changed to be\nunique, but without a great deal of success, partly because (a) that's\njust the way it is (b) it's based on official government of local areas\nand (c) there's no reason to change them.\n\nI personally disagree, but I wouldn't be holding my breath for anything\nto change on that front.\n\nSo according to them, the way postgres should work is that it should\ndump times with a time and a specific UT offset, as in 10:00am UT-10 for\nexample.\n\nI'm not 100% sure why Postgres has a lot of code for timezone issues\ncurrently. I'm guessing that Postgres is trying to work around this\nzoneinfo ``problem'' by recognising say \"AEST\" in lieu of australia's\nEST zone. But unless you're going to do a proper job of it and also\noutput \"AEST\" on postgres dumps, it seems like a futile thing.\n\nThe other option would be to dump the full locale name, like instead of\noutputing \"EST\", output \"Australia/Sydney\" which is the full name for\nthat locale. Unfortunately I don't think there's a portable way of\ngetting that information on different systems, and also it's rather\nwordy output.\n\nSo basicly the timezone experts are saying that the time zone abbrevs\nare useless and this problem is not just limited to Australia. It looks\nto me then like Postgres should stop outputting timezone abbrevs and\nstart outputting UT offsets. The argument is that without any timezone -\nwell that just means local time. 
If you do specify a timezone it should\nbe the full locale name - as in Australia/Sydney.\n\nThere are several other arguments. For example some areas sometimes\nchange their zone. Apparently the state of Georgia (?) once changed the\nzone they are in. In such a case Georgia would need their own locale\nfile. To output dates using the generic abbreviation could be incorrect.\n\nThe other thing that occurs to me is that I don't know what would happen\nin that phantom hour once a year when you change over to summer time (or\nwas it when you change back). UT offsets solve this, I'm not sure if\nanybody has solved it for abbrevs.\n\nTimezones are a lot more complex than they look, and I'd like to\nunderstand more about how Postgres regards them. Does anybody else have\nany thoughts on this?\n\n\n\nThomas Lockhart wrote:\n> \n> > Any datetime fields are different. I think it's a timezone problem.\n> > The dump includes the timezone as part of the dump, so I'm guessing that\n> > the problem is on the part of psql not noticing that. I'm using the\n> > Australian \"EST\" zone if that's useful.\n> > Is there an immediate work-around?\n> \n> Yeah, move to the east coast of the US :)\n> \n> EST is the US-standard designation for \"Eastern Standard Time\" (5\n> hours off of GMT). If you compile your backend with the flag\n> -DUSE_AUSTRALIAN_RULES=1 you will instead get this to match the\n> Australian convention, but will no longer handle the US timezone of\n> course.\n> \n> This is used in backend/utils/adt/dt.c, and is done with an #if rather\n> than an #ifdef. Perhaps I should change that...\n> \n> btw, Australia has by far the largest \"timezone space\" I've ever seen!\n> There are 17 Australia-specific timezones supported by the Postgres\n> backend. 
I know it's a big place, but the \"timezone per capita\" leads\n> the world ;)\n> \n> - Tom\n> \n> --\n> Thomas Lockhart [email protected]\n> South Pasadena, California\n\n-- \nChris Bitmead\nhttp://www.bigfoot.com/~chris.bitmead\nmailto:[email protected]\n",
"msg_date": "Tue, 11 May 1999 11:19:41 +0000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Date/Time Flaw in pg_dump ?"
},
{
"msg_contents": "> In their words, the way Postgres works is broken :-(\n\n... as is the rest of the world :)\n\n> So according to them, the way postgres should work is that it should\n> dump times with a time and a specific UT offset, as in 10:00am UT-10 for\n> example.\n\nUse the ISO format setting, and you'll be a happy camper:\n\npostgres=> set datestyle='iso';\nSET VARIABLE\npostgres=> select datetime 'now';\n?column? \n----------------------\n1999-05-11 07:20:30-07\n(1 row)\n\npostgres=> show time zone;\nNOTICE: Time zone is PST8PDT\nSHOW VARIABLE\n\n\n> I'm not 100% sure why Postgres has a lot of code for timezone issues\n> currently. I'm guessing that Postgres is trying to work around this\n> zoneinfo ``problem'' by recognising say \"AEST\" in lieu of australia's\n> EST zone. But unless you're going to do a proper job of it and also\n> output \"AEST\" on postgres dumps, it seems like a futile thing.\n\nWe rely on the OS to provide timezone offsets for *output*, so we\ndon't have to figure out how to do daylight savings time (and for\nother reasons). There is no standard interface to do the same thing\nfor input outside of Unix system time, so we do it ourself for input.\nAnd there is no standard interface to get direct access to the\ntimezone database itself. If'n you don't like the output conventions\nfor your system, do your own timezone database or learn to like it ;)\n\n> The other thing that occurs to me is that I don't know what would happen\n> in that phantom hour once a year when you change over to summer time (or\n> was it when you change back). UT offsets solve this, I'm not sure if\n> anybody has solved it for abbrevs.\n\n? Since you would be relying on a timezone database for interpretation\nof the abbrevs, you might run the risk of dissimilar systems doing\nthings inconsistantly. 
And we've seen lots of differences on Unix\nboxes once you start dealing with times before 1960 or so (those damn\nkids doing development nowadays :) Sun does a great job (you can learn\na bit of history looking at their timezone database) while some other\nsystems don't bother trying. The zic utilities used by Linux and some\nother systems do a pretty good job, but are not as rigorous as Sun's\ndatabase.\n\n> Timezones are a lot more complex than they look, and I'd like to\n> understand more about how Postgres regards them. Does anybody else have\n> any thoughts on this?\n\nUh, sure!\n\nAnyway, your observations are correct, but we are trying to work in\nthe real world, which doesn't seem much interested in going\nexclusively toward the ISO-8601 date/time representation. But we do\nsupport it, and I've toyed with making it the default format. Maybe\nfor postgres-7.0. In the meantime you can build your server to use it\nby default, you can fire up your server with PGDATESTYLE defined, or\nyou can set PGDATESTYLE for any client using libpq.\n\n - Tom\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 11 May 1999 14:42:13 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Date/Time Flaw in pg_dump ?"
},
{
"msg_contents": "Chris Bitmead <[email protected]> writes:\n> In their words, the way Postgres works is broken :-(\n\nThomas will have to provide the detailed response to this, but as far\nas I've seen there is nothing \"broken\" about Postgres' handling of\ntimezones. You're assuming that portability of dump scripts across\nlocales is more important than showing dates in the style(s) people\nwant to read ... in the real world that isn't so.\n\n> So according to them, the way postgres should work is that it should\n> dump times with a time and a specific UT offset, as in 10:00am UT-10 for\n> example.\n\nSET DATESTYLE = 'ISO'.\n\n(It might be a worthwhile idea for pg_dump to use this datestyle always,\nsince indeed some of the other ones are locale-dependent. Comments?)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 11 May 1999 10:43:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Date/Time Flaw in pg_dump ? "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Chris Bitmead <[email protected]> writes:\n> > In their words, the way Postgres works is broken :-(\n> \n> Thomas will have to provide the detailed response to this, but as far\n> as I've seen there is nothing \"broken\" about Postgres' handling of\n> timezones. You're assuming that portability of dump scripts across\n> locales \n\nnot across locales, within the same locale!\n\n> is more important than showing dates in the style(s) people\n> want to read ... in the real world that isn't so.\n\nWell I'm not assuming it, it is the timezone database which assumes it.\nAlso the problem is not \"across locales\", but rather within a single\nlocale. Like if someone installs a standard RedHat system with Postgres\nand starts using it, depending on where they are in the world it may not\nfunction correctly.\n\nAs far as people seeing dates in the \"style they want to read\", the\ntimezone people made the not-unreasonable observation that if you just\nwant to see local-time, you shouldn't show any zone at all. Only when\nyou are not talking about the current zone should you show something\nspecific. Given that zone ids are not unique that sounds reasonable. As\nI said, I think they should be unique, but they're not.\n\nOk, you have the AUSTRALIAN_RULES compilation option, so people over\nhere have to rebuild the whole of postgres from scratch. Doesn't worry\nme, but a lot of people don't want to have to bother with that.\n\nAlso there are probably some other locales in the world with the same\nproblem that you havn't considered yet.\n\n\n> \n> > So according to them, the way postgres should work is that it should\n> > dump times with a time and a specific UT offset, as in 10:00am UT-10 for\n> > example.\n> \n> SET DATESTYLE = 'ISO'.\n> \n> (It might be a worthwhile idea for pg_dump to use this datestyle always,\n> since indeed some of the other ones are locale-dependent. Comments?)\n> \n> regards, tom lane\n",
"msg_date": "Tue, 11 May 1999 23:53:21 +0000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Date/Time Flaw in pg_dump ?"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> \n> > In their words, the way Postgres works is broken :-(\n> \n> ... as is the rest of the world :)\n\nYep :-)\n\n> Use the ISO format setting, and you'll be a happy camper:\n> \n> postgres=> set datestyle='iso';\n\nOk. I think though that you should consider making it the default,\nsimply because something that always works is a good default. Something\nthat only sometimes works is not a very good default.\n\n> We rely on the OS to provide timezone offsets for *output*, \n> so we\n> don't have to figure out how to do daylight savings time \n> (and for\n> other reasons). There is no standard interface to do the same thing\n> for input outside of Unix system time, so we do it ourself > for input.\n\nThat might be ok if what comes out of the database works when you stick\nit back in. Like you accept AEST as australian eastern standard time as\ninput. But if you don't print AEST on output then it's inconsistent. I\nthink the output should be either no time zone info, the full locale\n(\"Australia/Sydney\") or UT offset since they will always work.\n\nI'm not sure what you mean when you say there is no standard interface\nto input times. Various combinations of setenv(\"TZ=\"), mktime() etc etc\nseem to be able to do everything one would need in my experience.\n\n> And there is no standard interface to get direct access to > the timezone database itself. If'n you don't like the \n> output conventions for your system, do your own timezone \n> database or learn to like it ;)\n\nI'm not sure why you would require any more interface than\nmktime(),localtime() and friends. The only thing I can think of is to\nhave a list of the valid locales but that's a different problem.\n\n> > The other thing that occurs to me is that I don't know what would happen\n> > in that phantom hour once a year when you change over to summer time (or\n> > was it when you change back). 
UT offsets solve this, I'm not sure if\n> > anybody has solved it for abbrevs.\n> \n> ? Since you would be relying on a timezone database for interpretation\n> of the abbrevs, you might run the risk of dissimilar systems doing\n> things inconsistantly. \n\nWhat happens for those times that occur twice? Like if the clocks go\nback 1 hour at 3:00am on a particular day, then that time happens twice.\nIn other words 3/3/1999 2:30am EST may be an ambigous time because that\ntime occurs twice. How is that handled?\n",
"msg_date": "Wed, 12 May 1999 00:12:26 +0000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Date/Time Flaw in pg_dump ?"
},
{
"msg_contents": "> \n> What happens for those times that occur twice? Like if the clocks go\n> back 1 hour at 3:00am on a particular day, then that time happens twice.\n> In other words 3/3/1999 2:30am EST may be an ambigous time because that\n> time occurs twice. How is that handled?\n\nActually, not. The first time 2:30am occurs, it's EST, the second time, its\nEDT. Ambiguity only occurs if you present local time without a timezone. :-(\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Wed, 12 May 1999 10:19:45 -0500 (CDT)",
"msg_from": "[email protected] (Ross J. Reedstrom)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Date/Time Flaw in pg_dump ?"
},
{
"msg_contents": "\nI want to stay up to date with all the latest changes. Is it possible to\nget read CVS access?\n\n-- \nChris Bitmead\nhttp://www.bigfoot.com/~chris.bitmead\nmailto:[email protected]\n",
"msg_date": "Fri, 14 May 1999 11:31:57 +0000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "CVS"
},
{
"msg_contents": "On Fri, 14 May 1999, Chris Bitmead wrote:\n\n> I want to stay up to date with all the latest changes. Is it possible to\n> get read CVS access?\n\nexport CVSROOT=\":pserver:[email protected]:/cvs/gnome\"\necho \"Just press <enter>:\"\ncvs login\n\n--\nTodd Graham Lewis Postmaster, MindSpring Enterprises\[email protected] (800) 719-4664, x22804\n\n\"A pint of sweat will save a gallon of blood.\" -- George S. Patton\n\n",
"msg_date": "Fri, 14 May 1999 07:38:05 -0400 (EDT)",
"msg_from": "Todd Graham Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] CVS"
},
{
"msg_contents": "On Fri, 14 May 1999, Todd Graham Lewis wrote:\n\n> On Fri, 14 May 1999, Chris Bitmead wrote:\n> \n> > I want to stay up to date with all the latest changes. Is it possible to\n> > get read CVS access?\n> \n> export CVSROOT=\":pserver:[email protected]:/cvs/gnome\"\n> echo \"Just press <enter>:\"\n> cvs login\n\nWoops! Wrong list! Hang on a sec...\n\n--\nTodd Graham Lewis Postmaster, MindSpring Enterprises\[email protected] (800) 719-4664, x22804\n\n\"A pint of sweat will save a gallon of blood.\" -- George S. Patton\n\n",
"msg_date": "Fri, 14 May 1999 07:38:26 -0400 (EDT)",
"msg_from": "Todd Graham Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] CVS"
},
{
"msg_contents": "On Fri, 14 May 1999, Todd Graham Lewis wrote:\n\n> On Fri, 14 May 1999, Chris Bitmead wrote:\n> \n> > I want to stay up to date with all the latest changes. Is it possible to\n> > get read CVS access?\n> \n> export CVSROOT=\":pserver:[email protected]:/cvs/gnome\"\n> echo \"Just press <enter>:\"\n> cvs login\n\nexport CVSROOT=\":pserver:[email protected]:/usr/local/cvsroot\"\necho \"Password is \\\"postgresql\\\" \"\ncvs -d :pserver:[email protected]:/usr/local/cvsroot login\n\nThis was supposed to have been put on the web page, as I recall...\n\n--\nTodd Graham Lewis Postmaster, MindSpring Enterprises\[email protected] (800) 719-4664, x22804\n\n\"A pint of sweat will save a gallon of blood.\" -- George S. Patton\n\n",
"msg_date": "Fri, 14 May 1999 07:43:23 -0400 (EDT)",
"msg_from": "Todd Graham Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] CVS"
},
{
"msg_contents": "\nI want to try and really really understand the system catalogs as a\nprelude to figuring out how to make some enhancements.\n\nI've read everything in the doco about them (which isn't much that I can\nsee). Is there anything else? Does it say somewhere what all the fields\nmean? I'm particularly interested in the basic catalogs - classes,\nattributes, types etc.\n\n\n-- \nChris Bitmead\nhttp://www.bigfoot.com/~chris.bitmead\nmailto:[email protected]\n",
"msg_date": "Fri, 14 May 1999 12:22:42 +0000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "System Catalogs"
},
{
"msg_contents": "\nCan somebody explain briefly what happens when you do an ALTER TABLE ADD\nCOLUMN? Obviously it doesn't seem to go through the database and update\nevery record with a new attribute there and then. Does it get updated\nthe next time the record is retrieved or what is the story there?\n\n-- \nChris Bitmead\nhttp://www.bigfoot.com/~chris.bitmead\nmailto:[email protected]\n",
"msg_date": "Fri, 14 May 1999 12:34:15 +0000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "ADD COLUMN"
},
{
"msg_contents": "I found that the best way to figure out the system catalogs was to do\nthe following steps\n\n(i) Stare at the diagram in the html web pages until you are cross-eyed.\n\n(ii) Look through the .h files in src/include/catalogs/ realising of\ncourse that many of the fields/attributes that are defined are not used.\n\n(iii) Use the \\t command in a test database to inspect the actual\ntables,\nand try doing a bunch of SELECT queries with joins across catalogs to\nfigure out the relational structure (Schema).\n\nSeriously, its not that bad once you get into the groove. \n\nOne interesting feature that I stumbled on was that at least one of the\nmethods that is required for the definition of indices requires more\nthan \n8 arguments, the maximum number for a poastgres function if it is\nentered with a CREATE FUNCTION command. This means that if you wish to\ndynamically load a new type of index you have to use INSERT INTO\npg_proc commands to enter the index methods straight into the catalog\ntable.\n\nBernie\n",
"msg_date": "Fri, 14 May 1999 13:37:44 +0000",
"msg_from": "Bernard Frankpitt <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] System Catalogs"
},
{
"msg_contents": "Todd Graham Lewis <[email protected]> writes:\n> [ CVS access info ]\n> This was supposed to have been put on the web page, as I recall...\n\nIt *is* on the webpage --- I put it there myself. You can find this and\nother FAQ documents off http://www.postgresql.org/docs/. (I do need to\nupdate the CVS page, which still recommends cvs 1.9...)\n\n<rant>\nThe \"new improved\" website design has made it a lot harder to find\nanything useful, IMHO. For instance, it is not an improvement that\nthe FAQ docs are two levels down in a non-obvious place. The way\nthat the frames-based design makes it impossible to bookmark anything\nonce you have managed to find it just adds insult to injury.\n</rant>\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 14 May 1999 10:38:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] CVS "
},
{
"msg_contents": "On Fri, 14 May 1999, Tom Lane wrote:\n\n> Todd Graham Lewis <[email protected]> writes:\n> > [ CVS access info ]\n> > This was supposed to have been put on the web page, as I recall...\n> \n> It *is* on the webpage --- I put it there myself. You can find this and\n> other FAQ documents off http://www.postgresql.org/docs/. (I do need to\n> update the CVS page, which still recommends cvs 1.9...)\n> \n> <rant>\n> The \"new improved\" website design has made it a lot harder to find\n> anything useful, IMHO. For instance, it is not an improvement that\n> the FAQ docs are two levels down in a non-obvious place. The way\n> that the frames-based design makes it impossible to bookmark anything\n> once you have managed to find it just adds insult to injury.\n> </rant>\n\nDmitry and Vince are working on the new one that Dmitry prototyped...not\nsure what the escheduale is for getting that up though...\n\nIts still a work in progress, but it can be seen at\nhttp://www.postgresql.org/proto ... submit comments on what you do/dont'\nlike... let them know while they are still working on it ...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Fri, 14 May 1999 13:09:47 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] CVS "
},
{
"msg_contents": "> \n> I want to try and really really understand the system catalogs as a\n> prelude to figuring out how to make some enhancements.\n> \n> I've read everything in the doco about them (which isn't much that I can\n> see). Is there anything else? Does it say somewhere what all the fields\n> mean? I'm particularly interested in the basic catalogs - classes,\n> attributes, types etc.\n\nSee src/include/catalog. There is a doc/src/graphics/catalog.gif, and\ncontrib/pginterface has a utility to find all joins between tables using\noids.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 15 May 1999 04:54:08 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] System Catalogs"
},
{
"msg_contents": "> \n> Can somebody explain briefly what happens when you do an ALTER TABLE ADD\n> COLUMN? Obviously it doesn't seem to go through the database and update\n> every record with a new attribute there and then. Does it get updated\n> the next time the record is retrieved or what is the story there?\n\nNULL fields take up no space in rows, so adding NULL to the end of a row\nreally doesn't change the row, you just tell the catalog the column\nexists, and the system sees a NULL there by default.\n\nOn updates, it remains the same unless you put something in the column.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 15 May 1999 04:55:50 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ADD COLUMN"
},
{
"msg_contents": "\nHi guys. I was trying to add a column to a class again. The class is low\ndown in an inheritance hierarchy.\n\nThis time, I remembered to add the * after the table name, so I thought\nthat I was ok. Everything seemed ok, and the database went on working as\nexpected for ages.\n\nThen one day I had to restore my database and I found again that pg_dump\ndoesn't work with\nERROR: pg_atoi: error in \"1999-05-10 16:27:40+10\": can't parse \"-05-10\n16:27:40+10\"\n\nbecause I think it dumps columns in the wrong order.\n\nFortunately I was able to restore the database by abandoning that column\nand removing it from the table definition. Fortunately I didn't have\nmuch data in that column that was too much loss to lose (yet).\n\nI know I mentioned this problem before, but I thought it was because I\nhad forgotten the \"*\" on the ALTER TABLE ADD COLUMN statement. Now I\nrealise that even when you remember it, you can be bitten. Worse, you\ncan be bitten much later after you've forgotten what was the cause.\n\nI'm not sure what to do now. I really do need to add that extra column.\nIf I thought really really hard, I might be able to figure out how to do\nit with Perl, re-arrangement of columns etc. But I've got a lot of\ntables and it sounds all too hard. The frustrating thing is that adding\nthe columns actually works. It's just that it can't be restored properly\nafter a catastrophy.\n\n\n\n-- \nChris Bitmead\nhttp://www.bigfoot.com/~chris.bitmead\nmailto:[email protected]\n",
"msg_date": "Tue, 01 Jun 1999 21:10:17 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "ALTER TABLE ADD COLUMN"
},
{
"msg_contents": "\nI'm convinced that pg_dump / psql restore doesn't seem to restore VIEWs\nproperly. Anybody else seen this?\n\n-- \nChris Bitmead\nhttp://www.bigfoot.com/~chris.bitmead\nmailto:[email protected]\n",
"msg_date": "Tue, 01 Jun 1999 21:13:13 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_dump"
},
{
"msg_contents": "Chris Bitmead wrote:\n\n>\n>\n> I'm convinced that pg_dump / psql restore doesn't seem to restore VIEWs\n> properly. Anybody else seen this?\n\n More details please!\n\n There must be something wrong in the rule utilities when\n backparsing the views CREATE RULE statement. I need the\n definition of the view, the underlying tables and the\n (schema) output of pg_dump to track it down.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 1 Jun 1999 14:20:35 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump"
},
{
"msg_contents": "Jan Wieck wrote:\n\n> > I'm convinced that pg_dump / psql restore doesn't seem to restore VIEWs\n> > properly. Anybody else seen this?\n> \n> More details please!\n\nIt seems to be extremely easy to reproduce...\n\nchris=> create table foo(a int4, b int4);\nCREATE\nchris=> insert into foo values(3, 4);\nINSERT 1484426 1\nchris=> create view bar as SELECT a + b FROM foo;\nCREATE\nchris=> select * from bar;\n?column?\n--------\n 7\n(1 row)\n\nEOFis=> \nchris@tech!26!bash:~$ pg_dump chris -o >foo\nchris@tech!27!bash:~$ createdb foobar\nchris@tech!28!bash:~$ psql !$ <foo\npsql foobar <foo\nCREATE TABLE pgdump_oid (dummy int4);\nCREATE\nCOPY pgdump_oid WITH OIDS FROM stdin;\nDROP TABLE pgdump_oid;\nDROP\nCREATE TABLE \"foo\" (\n \"a\" int4,\n \"b\" int4);\nCREATE\nCREATE TABLE \"bar\" (\n \"?column?\" int4);\nCREATE\nCOPY \"foo\" WITH OIDS FROM stdin;\nCREATE RULE \"_RETbar\" AS ON SELECT TO \"bar\" WHERE DO INSTEAD SELECT \"a\"\n+ \"b\" F\nROM \"foo\";\nERROR: parser: parse error at or near \"do\"\nEOF\nchris@tech!29!bash:~$ psql foobar\nWelcome to the POSTGRESQL interactive sql monitor:\n Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n[PostgreSQL 6.5.0 on i686-pc-linux-gnu, compiled by gcc 2.7.2.3]\n\n type \\? for help on slash commands\n type \\q to quit\n type \\g or terminate with semicolon to execute query\n You are currently connected to the database: foobar\n\nfoobar=> select * from foo;\na|b\n-+-\n3|4\n(1 row)\n\nfoobar=> select * from bar;\n?column?\n--------\n(0 rows)\n\nfoobar=>\n",
"msg_date": "Tue, 01 Jun 1999 22:37:13 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] pg_dump"
},
{
"msg_contents": "> Fortunately I was able to restore the database by abandoning that column\n> and removing it from the table definition. Fortunately I didn't have\n> much data in that column that was too much loss to lose (yet).\n> \n> I know I mentioned this problem before, but I thought it was because I\n> had forgotten the \"*\" on the ALTER TABLE ADD COLUMN statement. Now I\n> realise that even when you remember it, you can be bitten. Worse, you\n> can be bitten much later after you've forgotten what was the cause.\n> \n> I'm not sure what to do now. I really do need to add that extra column.\n> If I thought really really hard, I might be able to figure out how to do\n> it with Perl, re-arrangement of columns etc. But I've got a lot of\n> tables and it sounds all too hard. The frustrating thing is that adding\n> the columns actually works. It's just that it can't be restored properly\n> after a catastrophy.\n\nOur TODO now has: \n\n\t* ALTER TABLE ADD COLUMN to inherited table put column in wrong place\n\nI don't think any of us understand the issues on this one.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 1 Jun 1999 10:36:33 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ALTER TABLE ADD COLUMN"
},
{
"msg_contents": "Chris Bitmead wrote:\n\n>\n> Jan Wieck wrote:\n>\n> > > I'm convinced that pg_dump / psql restore doesn't seem to restore VIEWs\n> > > properly. Anybody else seen this?\n> >\n> > More details please!\n>\n> It seems to be extremely easy to reproduce...\n> [...]\n> CREATE RULE \"_RETbar\" AS ON SELECT TO \"bar\" WHERE DO INSTEAD SELECT \"a\"\n ^^^^^^^\n\n I've fixed that at 1999/05/25 08:49:33. Update your sources\n and do a clean build.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 1 Jun 1999 16:40:47 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n> Our TODO now has:\n> \n> * ALTER TABLE ADD COLUMN to inherited table put column in wrong place\n> \n> I don't think any of us understand the issues on this one.\n\nLet me guess at the problem. When you add a column, it doesn't change\nall the records, therefore the column must be added at the end. This\nmeans that the columns will not be in the same order as if you had\ncreated them from scratch.\n\nThere seem to be three solutions:\na) Go to a much more sophisticated schema system, with versions and\nversion numbers (fairly hard but desirable to fix other schema change\nproblems). Then insert the column in the position it is supposed to be\nin.\n\nb) Fix the copy command to input and output the columns, not in the\norder they are in, but in the order they would be in on re-creation.\n\nc) make the copy command take arguments specifying the field names, like\nINSERT can do.\n\nI think it would be good if Postgres had all 3 features. Probably (b) is\nthe least work.\n",
"msg_date": "Wed, 02 Jun 1999 10:50:11 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] ALTER TABLE ADD COLUMN"
},
{
"msg_contents": "\nDoes the following indicate a bug in LIKE ? Using CVS from about a week\nago.\n\n=>select oid,title from category* where title like 'Sigma%';\noid|title\n---+-----\n(0 rows)\n\n=>select oid,title from category* where title like 'Sigma';\n oid|title\n-----+-----\n21211|Sigma\n(1 row)\n",
"msg_date": "Mon, 07 Jun 1999 17:22:44 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bug in LIKE ?"
},
{
"msg_contents": "On Mon, 7 Jun 1999, Chris Bitmead wrote:\n\n> \n> Does the following indicate a bug in LIKE ? Using CVS from about a week\n> ago.\n> \n> =>select oid,title from category* where title like 'Sigma%';\n\nIf I understand this correctly, IMHO, this would be asking for '^Sigma'\nwith at least one character after the 'a' ...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 7 Jun 1999 04:35:37 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug in LIKE ?"
},
{
"msg_contents": "\n> If I understand this correctly, IMHO, this would be asking for '^Sigma'\n> with at least one character after the 'a' ...\n\nUhm.... I think the problem is a little worse:\n\ncreate table a ( b varchar(32) );\ninsert into a values ( 'foo' );\ninsert into a values ( 'bar' );\ninsert into a values ( 'foobar' );\ninsert into a values ( 'foobar2' );\n\nPostgreSQL 6.4.2\n\ntacacs=> select * from a where b like 'foo%';\nb\n-------\nfoo\nfoobar\nfoobar2\n(3 rows)\n\nPostgreSQL 6.5beta2\n\ntacacs=> select * from a where b like 'foo%';\nb\n-\n(0 rows)\n\ntacacs=> select * from a where b like '%foo';\nb\n---\nfoo\n(1 row)\n\ntacacs=> select * from a where b ~ '^foo';\nb\n-------\nfoo\nfoobar\nfoobar2\n(3 rows)\n\nBye.\n \n-- \n Daniele\n\n-------------------------------------------------------------------------------\n Daniele Orlandi - Utility Line Italia - http://www.orlandi.com\n Via Mezzera 29/A - 20030 - Seveso (MI) - Italy\n-------------------------------------------------------------------------------\n\n\n",
"msg_date": "Mon, 07 Jun 1999 14:27:46 +0200",
"msg_from": "Daniele Orlandi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug in LIKE ?"
},
{
"msg_contents": "On Mon, 7 Jun 1999, Daniele Orlandi wrote:\n\n> Date: Mon, 07 Jun 1999 14:27:46 +0200\n> From: Daniele Orlandi <[email protected]>\n> To: The Hermit Hacker <[email protected]>\n> Subject: Re: [HACKERS] Bug in LIKE ?\n> \n> \n> > If I understand this correctly, IMHO, this would be asking for '^Sigma'\n> > with at least one character after the 'a' ...\n> \n> Uhm.... I think the problem is a little worse:\n> \n> create table a ( b varchar(32) );\n> insert into a values ( 'foo' );\n> insert into a values ( 'bar' );\n> insert into a values ( 'foobar' );\n> insert into a values ( 'foobar2' );\n> \n> PostgreSQL 6.4.2\n> \n> tacacs=> select * from a where b like 'foo%';\n> b\n> -------\n> foo\n> foobar\n> foobar2\n> (3 rows)\n> \n> PostgreSQL 6.5beta2\n> \n> tacacs=> select * from a where b like 'foo%';\n> b\n> -\n> (0 rows)\n> \n> tacacs=> select * from a where b like '%foo';\n> b\n> ---\n> foo\n> (1 row)\n> \n> tacacs=> select * from a where b ~ '^foo';\n> b\n> -------\n> foo\n> foobar\n> foobar2\n> (3 rows)\n> \n\nHmm, just tried on current 6.5 from cvs:\ntest=> select version();\nversion \n------------------------------------------------------------------------\nPostgreSQL 6.5.0 on i586-pc-linux-gnulibc1, compiled by gcc egcs-2.91.66\n(1 row)\n\ntest=> select * from a where b like 'foo%';\nb \n-------\nfoo \nfoobar \nfoobar2\n(3 rows)\n\ntest=> select * from a where b like '%foo';\nb \n---\nfoo\n(1 row)\n\ntest=> select * from a where b ~ '^foo';\nb \n-------\nfoo \nfoobar \nfoobar2\n(3 rows)\n\n\n\tRegards,\n\t\tOleg\n\n\n\n> Bye.\n> \n> -- \n> Daniele\n> \n> -------------------------------------------------------------------------------\n> Daniele Orlandi - Utility Line Italia - http://www.orlandi.com\n> Via Mezzera 29/A - 20030 - Seveso (MI) - Italy\n> -------------------------------------------------------------------------------\n> \n> \n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n",
"msg_date": "Mon, 7 Jun 1999 17:00:11 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug in LIKE ?"
},
{
"msg_contents": "Daniele Orlandi <[email protected]> writes:\n> Uhm.... I think the problem is a little worse:\n\nIt's a real bug, and I see the problem: someone changed the handling of\nLIKE prefixes in gram.y, without understanding quite what they were\ndoing. 6.4.2 has:\n\n if (n->val.val.str[pos] == '\\\\' ||\n n->val.val.str[pos] == '%')\n pos++;\n\nwhere 6.5 has:\n\n if (n->val.val.str[pos] == '\\\\' ||\n n->val.val.str[pos+1] == '%')\n pos++;\n\nThe first one is right and the second is not.\n\nUnless we fix this, LIKE will be completely busted for any string\ncontaining non-leading %. Shall I ... ?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 07 Jun 1999 09:44:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug in LIKE ? "
},
{
"msg_contents": "On Mon, 7 Jun 1999, Tom Lane wrote:\n\n> Daniele Orlandi <[email protected]> writes:\n> > Uhm.... I think the problem is a little worse:\n> \n> It's a real bug, and I see the problem: someone changed the handling of\n> LIKE prefixes in gram.y, without understanding quite what they were\n> doing. 6.4.2 has:\n> \n> if (n->val.val.str[pos] == '\\\\' ||\n> n->val.val.str[pos] == '%')\n> pos++;\n> \n> where 6.5 has:\n> \n> if (n->val.val.str[pos] == '\\\\' ||\n> n->val.val.str[pos+1] == '%')\n> pos++;\n> \n> The first one is right and the second is not.\n> \n> Unless we fix this, LIKE will be completely busted for any string\n> containing non-leading %. Shall I ... ?\n\nPlease do...looking through the logs, any idea who changed this one? *gets\nout billy club* *grin*\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Mon, 7 Jun 1999 10:50:47 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug in LIKE ? "
},
{
"msg_contents": "> Daniele Orlandi <[email protected]> writes:\n> > Uhm.... I think the problem is a little worse:\n> \n> It's a real bug, and I see the problem: someone changed the handling of\n> LIKE prefixes in gram.y, without understanding quite what they were\n> doing. 6.4.2 has:\n> \n> if (n->val.val.str[pos] == '\\\\' ||\n> n->val.val.str[pos] == '%')\n> pos++;\n> \n> where 6.5 has:\n> \n> if (n->val.val.str[pos] == '\\\\' ||\n> n->val.val.str[pos+1] == '%')\n> pos++;\n> \n> The first one is right and the second is not.\n> \n> Unless we fix this, LIKE will be completely busted for any string\n> containing non-leading %. Shall I ... ?\n\nYes, please. It was me that introduced the bug.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 7 Jun 1999 10:26:30 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug in LIKE ?"
},
{
"msg_contents": "> > if (n->val.val.str[pos] == '\\\\' ||\n> > n->val.val.str[pos] == '%')\n> > pos++;\n> > \n> > where 6.5 has:\n> > \n> > if (n->val.val.str[pos] == '\\\\' ||\n> > n->val.val.str[pos+1] == '%')\n> > pos++;\n> > \n> > The first one is right and the second is not.\n> > \n> > Unless we fix this, LIKE will be completely busted for any string\n> > containing non-leading %. Shall I ... ?\n> \n> Please do...looking through the logs, any idea who changed this one? *gets\n> out billy club* *grin*\n\nMe, but months ago. Put down the club...\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 7 Jun 1999 10:28:52 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug in LIKE ?"
},
{
"msg_contents": "Oleg Bartunov <[email protected]> writes:\n> [ doesn't see a problem ]\n\nI think the particular test case Daniele gave would only fail if you\ndo not have USE_LOCALE defined. But it's definitely busted: the parser\nwas transforming\n\tb LIKE 'foo%'\ninto\n\tb LIKE 'foo%' AND b >= 'fo%' AND b <= 'fo%\\377'\n\nwith the third clause not present if USE_LOCALE is defined.\n\nAnyway, it's fixed now. I also cleaned up some confusion about whether\n\"%%\" in a LIKE pattern means a literal % (the SQL spec says not, and\nsome parts of the code knew it, but other parts didn't...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 07 Jun 1999 10:33:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug in LIKE ? "
},
{
"msg_contents": "> > > Uhm.... I think the problem is a little worse:\n> > It's a real bug, and I see the problem: someone changed \n> > the handling of LIKE prefixes in gram.y,\n> > Unless we fix this, LIKE will be completely busted for \n> > any string containing non-leading %. Shall I ... ?\n> Yes, please. It was me that introduced the bug.\n\nHow about adding some regression test queries to catch this kind of\nthing? Looks like we don't have *anything* in this area at all except\nfor tests in the multi-byte string handling, from Tatsuo.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Mon, 07 Jun 1999 14:47:51 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug in LIKE ?"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> How about adding some regression test queries to catch this kind of\n> thing? Looks like we don't have *anything* in this area at all except\n> for tests in the multi-byte string handling, from Tatsuo.\n\nYeah, I was thinking the same thing. I'll bet the MB tests don't catch\nthis bug either, because it's substantially less likely to get noticed\nif USE_LOCALE is on...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 07 Jun 1999 11:07:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug in LIKE ? "
},
{
"msg_contents": "> Anyway, it's fixed now. I also cleaned up some confusion about whether\n> \"%%\" in a LIKE pattern means a literal % (the SQL spec says not, and\n> some parts of the code knew it, but other parts didn't...)\n\nYeah, but until we have support for the ESCAPE clause on the LIKE\nexpression then there isn't a way to get a literal \"%\" into the query\n:( \n\nI would suggest we *do* allow \"%%\" to represent a literal \"%\" until we\nget the full syntax.\n\nimho we will eventually need to move all of this out of gram.y and put\nit deeper into the parser code, since it is munging the query so early\nit is difficult to know what was done for later stages.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Mon, 07 Jun 1999 15:27:26 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug in LIKE ?"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> Anyway, it's fixed now. I also cleaned up some confusion about whether\n>> \"%%\" in a LIKE pattern means a literal % (the SQL spec says not, and\n>> some parts of the code knew it, but other parts didn't...)\n\n> Yeah, but until we have support for the ESCAPE clause on the LIKE\n> expression then there isn't a way to get a literal \"%\" into the query\n> :( \n\nSure there is: \\%. Of course, defaulting to ESCAPE \\ rather than no\nescape is not standards-compliant either, but it's a lot closer than\ninventing a meaning for %% ...\n\nMore to the point, %% has not worked like gram.y thought it did for\na long time, if ever, and no one's complained ...\n\n> imho we will eventually need to move all of this out of gram.y and put\n> it deeper into the parser code, since it is munging the query so early\n> it is difficult to know what was done for later stages.\n\nAgreed. At the very least it should be postponed until we know that the\noperator in question *is* textlike(), and not something else that\nhappens to be named ~~ ... but that's a job for another day.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 07 Jun 1999 11:33:04 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug in LIKE ? "
},
{
"msg_contents": "> Sure there is: \\%. Of course, defaulting to ESCAPE \\ rather than no\n> escape is not standards-compliant either, but it's a lot closer than\n> inventing a meaning for %% ...\n\nOK.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Mon, 07 Jun 1999 15:47:48 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug in LIKE ?"
},
{
"msg_contents": "On Mon, 7 Jun 1999, Thomas Lockhart wrote:\n\n> > Sure there is: \\%. Of course, defaulting to ESCAPE \\ rather than no\n> > escape is not standards-compliant either, but it's a lot closer than\n> > inventing a meaning for %% ...\n> \n> OK.\n\nI thought I had seen something before about this. In the Sybase 4.9 \nquick reference on page 21 it says:\n\n\nTo use %,_,[], or [^] as literal characters in a like match string rather\nthan as wildcards, use square brackets as escape characters for the\npercent sign, the underscore and the open bracket. Use the close bracket\nby itself. Use the dash as the first character inside a set of brackets.\n\nlike \"5%\"\t5 followed by any string of 0 or more characters\nlike \"5[%]\"\t5%\nlike \"_n\"\tan, in, on, etc.\nlike \"[_]n\"\t_n\nlike \"[a-cdf]\"\ta, b, c, d, or f\nlike \"[-acdf]\"\t-, a, c, d, or f\nlike \"[[]\"\t[\nlike \"]\"\t]\n\nWildcards without like have no special meaning.\n\n\nThat help any?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Mon, 7 Jun 1999 12:11:57 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug in LIKE ?"
},
{
"msg_contents": "> That help any?\n\nYes, it makes us feel better that we are not the only system with a\n\"non-standard\" implementation :)\n\nSince SQL92 has such limited pattern matching, almost everyone has\nsome extensions. Ours are pretty compatible with Sybase's, and with\nanyone else who has full regular expressions...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Mon, 07 Jun 1999 16:31:23 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug in LIKE ?"
},
{
"msg_contents": "> > Sure there is: \\%. Of course, defaulting to ESCAPE \\ rather than no\n> > escape is not standards-compliant either, but it's a lot closer than\n> > inventing a meaning for %% ...\n> \n> OK.\n\nBut we have code in DoMatching that does %% to % already. Can we just\nleave it alone and put it back. I promise to implement ESCAPE for 6.6.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 7 Jun 1999 13:20:52 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug in LIKE ?"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> But we have code in DoMatching that does %% to % already.\n\nNo, we don't --- take another look at what it's doing.\n\nIf we did make %% mean a literal %, it would be new behavior as far as\nDoMatch is concerned. I have been playing with this issue using 6.4.2,\nand find that its behavior is extremely inconsistent (ie buggy):\n\nGiven\n\nplay=> select * from a;\nb\n-------\nfoo\nbar\nfoobar\nfoobar2\nfoo%bar\nfooxbar\nfoo.bar\n(7 rows)\n\n6.4.2 produces\n\nplay=> select * from a where b like 'foo%%bar';\nb\n-------\nfoo%bar\n(1 row)\n\nwhich sure looks like it is treating %% as literal %, doesn't it? But\nthe selectivity comes from the parser's inserted conditions\n\tb >= 'foo%bar' AND b <= 'foo%bar\\377'\nwhich eliminate things that DoMatch would take. With a little more\npoking we find\n\nplay=> select * from a where b not like 'foo%%bar';\nb\n-------\nfoo\nbar\nfoobar2\n(3 rows)\n\nand\n\nplay=> select * from a where b like 'foo%%';\nb\n-------\nfoo%bar\n(1 row)\n\nand\n\nplay=> create table pat (p text);\nCREATE\nplay=> insert into pat values ('foo%%bar');\nINSERT 1194153 1\nplay=> select * from a, pat where b like p;\nb |p\n-------+--------\nfoobar |foo%%bar\nfoo%bar|foo%%bar\nfooxbar|foo%%bar\nfoo.bar|foo%%bar\n(4 rows)\n\nIn these cases, the parser's range conditions don't mask the underlying\nbehavior of DoMatch.\n\nSince 6.4.2's behavior with %% is clearly broken and in need of some\nkind of fix, I think we should make it work like the standard says,\nrather than paint ourselves into a corner we'll want to get out of\nsomeday. If %% actually worked reliably, people would start relying\non it. Bad enough that we'll have to keep defaulting to ESCAPE \\\nfor backwards-compatibility reasons; let's not add another deviation\nfrom the spec.\n\nBTW, this is not to discourage you from adding ESCAPE in 6.6 ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 07 Jun 1999 13:59:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug in LIKE ? "
}
] |
[
{
"msg_contents": "6.5 cvs (today), Linux x86 2.2.7, egcs 1.12 release\n\nIt seems that int2 and int4 regression tests failed because of\nchanges in error messages. But I don't know what's going on with the\ntriggers test. Is it ok ?\n\n\tOleg\n\ndiff results/int2.out expected/int2.out\n10c10\n< ERROR: pg_atoi: error reading \"100000\": Math result not representable\n---\n> ERROR: pg_atoi: error reading \"100000\": Numerical result out of range\n\ndiff results/int4.out expected/int4.out\n10c10\n< ERROR: pg_atoi: error reading \"1000000000000\": Math result not representable\n---\n> ERROR: pg_atoi: error reading \"1000000000000\": Numerical result out of range\n\ndiff results/triggers.out expected/triggers.out\n39d38\n< ERROR: check_primary_key: even number of arguments should be specified\n41d39\n< ERROR: check_primary_key: even number of arguments should be specified\n43d40\n< ERROR: check_primary_key: even number of arguments should be specified\n45d41\n< ERROR: check_primary_key: even number of arguments should be specified\n47c43\n< ERROR: check_primary_key: even number of arguments should be specified\n---\n> ERROR: check_fkeys2_pkey_exist: tuple references non-existing key in pkeys\n49d44\n< ERROR: check_primary_key: even number of arguments should be specified\n51d45\n< ERROR: check_primary_key: even number of arguments should be specified\n53d46\n< ERROR: check_primary_key: even number of arguments should be specified\n55d47\n< ERROR: check_primary_key: even number of arguments should be specified\n57c49\n< ERROR: check_primary_key: even number of arguments should be specified\n---\n> ERROR: check_fkeys_pkey_exist: tuple references non-existing key in pkeys\n59c51\n< ERROR: check_primary_key: even number of arguments should be specified\n---\n> ERROR: check_fkeys_pkey2_exist: tuple references non-existing key in fkeys2\n61,62c53,54\n< NOTICE: check_pkeys_fkey_cascade: 0 tuple(s) of fkeys are deleted\n< NOTICE: check_pkeys_fkey_cascade: 0 tuple(s) of fkeys2 are deleted\n---\n> NOTICE: check_pkeys_fkey_cascade: 1 tuple(s) of fkeys are deleted\n> ERROR: check_fkeys2_fkey_restrict: tuple referenced in fkeys\n64,65c56,57\n< NOTICE: check_pkeys_fkey_cascade: 0 tuple(s) of fkeys are deleted\n< NOTICE: check_pkeys_fkey_cascade: 0 tuple(s) of fkeys2 are deleted\n---\n> NOTICE: check_pkeys_fkey_cascade: 1 tuple(s) of fkeys are deleted\n> NOTICE: check_pkeys_fkey_cascade: 1 tuple(s) of fkeys2 are deleted\n67,68c59,60\n< NOTICE: check_pkeys_fkey_cascade: 0 tuple(s) of fkeys are deleted\n< NOTICE: check_pkeys_fkey_cascade: 0 tuple(s) of fkeys2 are deleted\n---\n> NOTICE: check_pkeys_fkey_cascade: 1 tuple(s) of fkeys are deleted\n> ERROR: check_fkeys2_fkey_restrict: tuple referenced in fkeys\n70,72c62,63\n< NOTICE: check_pkeys_fkey_cascade: 0 tuple(s) of fkeys are deleted\n< NOTICE: check_pkeys_fkey_cascade: 0 tuple(s) of fkeys2 are deleted\n< ERROR: Cannot insert a duplicate key into a unique index\n---\n> NOTICE: check_pkeys_fkey_cascade: 1 tuple(s) of fkeys are deleted\n> NOTICE: check_pkeys_fkey_cascade: 1 tuple(s) of fkeys2 are deleted\n\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Fri, 7 May 1999 10:07:57 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "cvs 6.5 regression tests on Linux x86"
}
] |
[
{
"msg_contents": "\n> diff results/triggers.out expected/triggers.out\n> 39d38\n> < ERROR: check_primary_key: even number of arguments should be specified\n> 41d39\n> < ERROR: check_primary_key: even number of arguments should be specified\n> \nSomebody submitted a patch to refint.c, that was only intended as an idea.\nI would like to see this patch to check_primary_key(...) reverted, since it:\n1. does not give a correct ERROR message (coding error)\n2. supplies funktionality, that is rather obscure\n\t(it triggers an insert into a primary table on an insert to a\nforeign key table that \n\twould otherwise violate the foreign key constraint)\n3. busted the regression test\n\nIf you need more info on this issue feel free to ask.\nAndreas \n\n",
"msg_date": "Fri, 7 May 1999 10:43:26 +0200 ",
"msg_from": "ZEUGSWETTER Andreas IZ5 <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: [HACKERS] cvs 6.5 regression tests on Linux x86"
},
{
"msg_contents": "ZEUGSWETTER Andreas IZ5 <[email protected]> writes:\n>> ERROR: check_primary_key: even number of arguments should be specified\n>> \n> Somebody submitted a patch to refint.c, that was only intended as an idea.\n> I would like to see this patch to check_primary_key(...) reverted,\n\nI agree, since no one seems to be willing to take responsibility for\nfixing it. Unless someone steps up to bat in the next day or so,\nI will take responsibility for backing out the change ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 07 May 1999 09:41:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] cvs 6.5 regression tests on Linux x86 "
}
] |
[
{
"msg_contents": "Hello,\n\n(using snapshot of May, 5th)\n\nBecause we need a workaround for this hash problem, I looked into the hashing code (well, without having the background). \n\nFor reducing the probability of an overflow I increased\n#define FUDGE_FAC 3\nwhich was originally 1.5. I think it's a data-dependent constant (correct?) and for my data it works...\n\nIt does the job, but certainly this is not the solution.\n\nIncreasing -B 256 doesn't work:\nNOTICE: Buffer Leak: [248] (freeNext=0, freePrev=0, relname=, blockNum=0, flags=0x0, refcount=0 25453)\npq_flush: send() failed, errno 88\npq_recvbuf: recv() failed, errno=88\n\nKind regards,\n\nMichael Contzen\[email protected]\nDohle Systemberatung, Germany\n",
"msg_date": "Fri, 7 May 1999 11:11:30 +0100",
"msg_from": "Michael Contzen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Hashjoin status report"
},
{
"msg_contents": "Michael Contzen <[email protected]> writes:\n> (using snapshot of May, 5th)\n> because we have the need to have a workaround to this hash problem, I\n> looked into the hashing code (well, without having the background).\n> For reducing the probability of an overflow I increased\n> #define FUDGE_FAC 3\n> witch was originally 1.5.\n\nFor a given -B setting, that would mean that more of the hashtable space\nis reserved for overflow records and less for hashbuckets, which should\nreduce the probability of an overrun --- but it would also make the\nsystem more prone to decide that it needs to divide the hash merge into\n\"batches\", so performance will suffer. Still, it seems like a\nreasonable workaround until a proper fix can be made. In fact I think\nmaybe I should change FUDGE_FAC to 2.0 for the 6.5 release, as a stopgap\nmeasure...\n\nA more critical problem is that there were some severe bugs in the code\nfor handling batches. I fixed at least some of 'em, but I committed\nthose fixes on the evening of 5 May, so I suspect they are not in your\nsnapshot. (Check the date of src/backend/executor/nodeHash.c to see.)\n\n> Increasing -B 256 doesn't work:\n> NOTICE: Buffer Leak: [248] (freeNext=3D0, freePrev=3D0, relname=3D, =\n> blockNum=3D0, flags=3D0x0, refcount=3D0 25453)\n> pq_flush: send() failed, errno 88\n\nThis behavior could be an artifact of one of the bugs I fixed (which\nwas a large-scale memory clobber). Or it could be another bug entirely.\nThis one actually worries me a great deal more than the \"out of memory\"\nproblem, because that one I know how and where to fix. If this is a\nseparate bug then I don't know where it's coming from. Please upgrade\nto latest snapshot and check -B 256 again.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 07 May 1999 09:59:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Hashjoin status report "
},
{
"msg_contents": "> Michael Contzen <[email protected]> writes:\n> > (using snapshot of May, 5th)\n> > because we have the need to have a workaround to this hash problem, I\n> > looked into the hashing code (well, without having the background).\n> > For reducing the probability of an overflow I increased\n> > #define FUDGE_FAC 3\n> > witch was originally 1.5.\n> \n> For a given -B setting, that would mean that more of the hashtable space\n> is reserved for overflow records and less for hashbuckets, which should\n> reduce the probability of an overrun --- but it would also make the\n> system more prone to decide that it needs to divide the hash merge into\n> \"batches\", so performance will suffer. Still, it seems like a\n> reasonable workaround until a proper fix can be made. In fact I think\n> maybe I should change FUDGE_FAC to 2.0 for the 6.5 release, as a stopgap\n> measure...\n> \n> A more critical problem is that there were some severe bugs in the code\n> for handling batches. I fixed at least some of 'em, but I committed\n> those fixes on the evening of 5 May, so I suspect they are not in your\n> snapshot. (Check the date of src/backend/executor/nodeHash.c to see.)\n\nOne thing to consider. If you decide to wait on the patch until after\n6.5, but then we find the new optimizer is causing this bug too often,\nwe will have to fix it later in the beta cycle with less testing time\navailable.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 7 May 1999 18:53:39 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Hashjoin status report"
}
] |
[
{
"msg_contents": "It seems there are some pretty critical problems in the indexes and/or\nvacuuming code in PostgreSQL 6.4.2. I've mentioned this before, but never\nget any clues back.\n\nHere are some error codes that I get almost every nite when I vacuum....\n\nNOTICE: AbortTransaction and not in in-progress state\nERROR: heap_delete: (am)invalid tid\n\n...And...\n\nNOTICE: Rel pg_statistic: Uninitialized page 2 - fixing\nERROR: Tuple is too big: size 15248\n\nCan anyone please shed some light on this? I don't have any code that would\nbe creating tuples bigger than 8100 bytes, so the 15248 is not possible.\n\nIn addition, the \"tuple is too big\" error goes away in the second vacuum.\nIt's there one night, but not the next, and I never delete records out of\nthis database. The first errors up there require me to drop the indexes on\nthis 3GB table and rebuild. That's a pain.\n\nTim Perdue\nPHPBuilder.com / GotoCity.com / Geocrawler.com\n\n\n\n",
"msg_date": "Fri, 7 May 1999 06:16:25 -0500",
"msg_from": "\"Tim Perdue\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Weird Errors in 6.4.2 - Indexes/Vacuuming"
}
] |
[
{
"msg_contents": "Hi,\nI'm trying to make a function that receives two arguments:\n- An array of int4\n- An int4\nThe goal of this function is to evaluate whether or not the numeric\nargument is in the array !!!!!\n/* If anybody knows of some predefined function that has the same goal,\ntell me !!!*/\n\nThe code is the following (tester.c)->\n\n#include <pgsql/postgres.h>\n#include <stdio.h>\n\nbool tester(int4 o[],int4 a){\nint i;\nfor(i=0;o[i]!=0;i++)\n if (o[i]==a)\n return('t');\nreturn('f');\n}\n\nThen I compile the code\n\ngcc -c tester.c /*Thanks to */\ngcc -shared -o tester.so tester.o /*Michael J. Davis*/\n\nand get tester.so\n\nI create the function in PostgreSQL ->\n\ndbtest=>create function tester(_int4,int4) returns bool as\n'$path/tester.so' language 'c';\nCREATE\n\nBut when I try to use it ->\n\ndbtest=>select tester('{1,2,3,4}',3);\nERROR: stat failed on file tester.so\n/*Here I expect a 't' */\n\nI don't know what's happening !!!!!\n\nCarlos Peralta Ramírez !!!!!\n\n",
"msg_date": "Fri, 07 May 1999 09:56:44 -0500",
"msg_from": "Carlos Peralta Ramirez <[email protected]>",
"msg_from_op": true,
"msg_subject": "Functions for arrays !!!!"
}
] |
[
{
"msg_contents": "\"Vadim B. Mikheev - CVS\" <vadim> writes:\n> Update of /usr/local/cvsroot/pgsql/src/backend/parser\n> In directory hub.org:/tmp/cvs-serv45846/backend/parser\n> Modified Files:\n> \tgram.c \n> Log Message:\n> Fix LMGR for MVCC.\n> Get rid of Extend lock mode.\n\nVadim, why are you committing changes to gram.c? For that matter,\n*how* are you committing changes to gram.c? The CVS server ought to\nknow that gram.c is a dead file for the main development branch.\nIt shouldn't accept updates, I would think.\n\nI speculate something is messed up in your CVS control files for\nsrc/backend/parser/. Perhaps you should delete that whole directory\nand pull a fresh copy.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 07 May 1999 12:22:27 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [COMMITTERS] 'pgsql/src/backend/parser gram.c' "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> \"Vadim B. Mikheev - CVS\" <vadim> writes:\n> > Update of /usr/local/cvsroot/pgsql/src/backend/parser\n> > In directory hub.org:/tmp/cvs-serv45846/backend/parser\n> > Modified Files:\n> > gram.c\n> > Log Message:\n> > Fix LMGR for MVCC.\n> > Get rid of Extend lock mode.\n> \n> Vadim, why are you committing changes to gram.c? For that matter,\n> *how* are you committing changes to gram.c? The CVS server ought to\n> know that gram.c is a dead file for the main development branch.\n> It shouldn't accept updates, I would think.\n> \n> I speculate something is messed up in your CVS control files for\n> src/backend/parser/. Perhaps you should delete that whole directory\n> and pull a fresh copy.\n\nThanks! I removed gram.c from src/backend/parser/CVS/Entries.\nI don't know how this was messed up.\n\nVadim\n",
"msg_date": "Sun, 09 May 1999 21:56:40 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [COMMITTERS] 'pgsql/src/backend/parser gram.c'"
}
] |
[
{
"msg_contents": "With fairly current sources:\n\nregression=> create table dnum (f1 numeric(10,2));\nCREATE\nregression=> insert into dnum values ('12.34');\nERROR: overflow on numeric ABS(value) >= 10^1 for field with precision 31491 scale 52068\nregression=> insert into dnum1 values ('12.34'::numeric);\nERROR: overflow on numeric ABS(value) >= 10^1 for field with precision 31491 scale 52132\nregression=> insert into dnum1 values (12.34::numeric);\nERROR: parser_typecast: cannot cast this expression to type 'numeric'\nregression=> insert into dnum1 values (12.34);\nINSERT 950499 1\n\nI've not put in Thomas' proposed change for handling out-of-range\nconstants; I don't think it'd change any of these cases anyway.\n\nBTW, why is there no regression test for NUMERIC?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 07 May 1999 18:14:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "NUMERIC type conversions leave much to be desired"
},
{
"msg_contents": "> With fairly current sources:\n<snip>\n> ERROR: overflow on numeric ABS(value) >= 10^1\n> for field with precision 31491 scale 52068\n\npostgres=> create table dnum (f1 numeric(10,2));\npostgres=> insert into dnum values ('12.34');\npostgres=> insert into dnum values ('12.34'::numeric);\npostgres=> insert into dnum values (12.34);\npostgres=> insert into dnum values ('12.345');\npostgres=> select * from dnum;\n f1\n-----\n12.34\n12.34\n12.34\n12.35\n(4 rows)\n\nfwiw, I've seen the same internal problems, and managed to fix them\nwith a full clean and reinstall. I'm probably a week old on my tree\nvintage...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Sat, 08 May 1999 02:47:58 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] NUMERIC type conversions leave much to be desired"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> With fairly current sources:\n> <snip>\n>> ERROR: overflow on numeric ABS(value) >= 10^1\n>> for field with precision 31491 scale 52068\n\n> fwiw, I've seen the same internal problems, and managed to fix them\n> with a full clean and reinstall. I'm probably a week old on my tree\n> vintage...\n\nGood thought but no cigar ... I did a full fresh CVS checkout, configure,\nmake, install & initdb from scratch, and it still behaves the same.\nPossibly it's been broken sometime in the last week? utils/adt/numeric.c\nlooks to have been last committed on 4 May.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 08 May 1999 12:27:13 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] NUMERIC type conversions leave much to be desired "
},
{
"msg_contents": "> > fwiw, I've seen the same internal problems, and managed to fix them\n> > with a full clean and reinstall. I'm probably a week old on my tree\n> > vintage...\n> Good thought but no cigar ... I did a full fresh CVS checkout, configure,\n> make, install & initdb from scratch, and it still behaves the same.\n> Possibly it's been broken sometime in the last week? utils/adt/numeric.c\n> looks to have been last committed on 4 May.\n\nNo, those changes were actually by myself, and just modified one line\nin the float8->numeric conversion code to use a direct sprintf(\"%f\")\nto convert float8 to a string in preparation for conversion to\nnumeric. The old code used the float8out() routine, which for larger\nfloats generated exponential notation that the numeric_in() routine\nwasn't prepared to handle.\n\nI've seen the same problems you are having, and reported them just as\nyou have (\"something is fundamentally wrong with numeric...\"). And\nthen the problems went away, but I'm not actually certain why.\n\nLet me know if I can help track it down, since others might get bit\ntoo...\n\n - Tom\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Sat, 08 May 1999 17:01:19 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] NUMERIC type conversions leave much to be desired"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> I've seen the same problems you are having, and reported them just as\n> you have (\"something is fundamentally wrong with numeric...\"). And\n> then the problems went away, but I'm not actually certain why.\n\nAh-hah, I've sussed it. numeric_in() was declared in pg_proc as\ntaking one parameter, when in fact it takes three. Therefore, when\ncalling it via fmgr (as the parser does), the second and third\nparameters were passed as random garbage. Apparently, the code\ngenerated for your machine produced some fairly harmless garbage...\nbut not on mine.\n\nI've committed a pg_proc.h update to fix this; it will take a full\nrebuild and initdb to propagate the fix, of course.\n\nI'm still seeing\n\nregression=> insert into dnum values (12.34::numeric); \nERROR: parser_typecast: cannot cast this expression to type 'numeric'\n\nwhich is arising from parser_typecast's inability to cope with\na T_Float input node. I suspect it could be readily done along\nthe same lines as T_Integer is handled, but I'll leave that to you.\n\n\t\t\tregards, tom lane\n\nPS: can anyone think of a reasonable way of mechanically checking\npg_proc entries against the actual C definitions of the functions?\nI grovelled through all the typinput functions by hand to verify\nthat numeric_in was the only one with this bug ... but I ain't\nabout to do that for all 989 pg_proc entries for built-in functions.\nMuch less to do it again for future releases.\n",
"msg_date": "Sat, 08 May 1999 22:37:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] NUMERIC type conversions leave much to be desired "
},
{
"msg_contents": "> Ah-hah, I've sussed it. numeric_in() was declared in pg_proc as\n> taking one parameter, when in fact it takes three. Therefore, when\n> calling it via fmgr (as the parser does), the second and third\n> parameters were passed as random garbage. Apparently, the code\n> generated for your machine produced some fairly harmless garbage...\n> but not on mine.\n> I've committed a pg_proc.h update to fix this...\n\nGreat.\n\n> I'm still seeing\n> regression=> insert into dnum values (12.34::numeric);\n> ERROR: parser_typecast: cannot cast this expression to type 'numeric'\n> which is arising from parser_typecast's inability to cope with\n> a T_Float input node. I suspect it could be readily done along\n> the same lines as T_Integer is handled, but I'll leave that to you.\n\npostgres=> select 12.34::numeric;\n--------\n 12.34\n(1 row)\n\nOK, and while I was looking at it I noticed that the T_Integer code\ndidn't bother using the int4out() routine to generate a string. imho\nit should be using the official output routine unless there is some\ncompelling reason not to. It seems to still behave with this fix in\nthe T_Integer support code; should I commit both?\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California",
"msg_date": "Sun, 09 May 1999 03:58:32 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] NUMERIC type conversions leave much to be desired"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n> OK, and while I was looking at it I noticed that the T_Integer code\n> didn't bother using the int4out() routine to generate a string. imho\n> it should be using the official output routine unless there is some\n> compelling reason not to. It seems to still behave with this fix in\n> the T_Integer support code; should I commit both?\n\nOne potential problem is that if the value is large/small enough to make\nfloat8out use 'E' notation, conversion to numeric will still fail ---\nthis is the same problem you hacked around in float8_numeric() earlier.\n\nI still like the idea of hanging on to the original string form of the\nconstant long enough so that parser_typecast can feed that directly to\nthe target type's xxx_in() routine, and not have to worry about\nconversion errors.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 09 May 1999 12:04:04 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] NUMERIC type conversions leave much to be desired "
},
{
"msg_contents": "> One potential problem is that if the value is large/small enough to make\n> float8out use 'E' notation, conversion to numeric will still fail ---\n> this is the same problem you hacked around in float8_numeric() earlier.\n\nYup. But in the long run, that is a problem for numeric(), not float8.\nIf nothing else, numeric() should be willing to flag cases where it\nhas trouble, and it doesn't seem to do that. That should probably be\nconsidered a \"must fix\" for v6.5.\n\n> I still like the idea of hanging on to the original string form of the\n> constant long enough so that parser_typecast can feed that directly to\n> the target type's xxx_in() routine, and not have to worry about\n> conversion errors.\n\nI agree. I'm just worried about losing the typing hints provided by\nscan.l if we went to a \"string only\" solution. Also, there might be a\nperformance hit if we ended up having to do the string conversion too\nmany times. \n\nAt this late date, I'm (so far) happy doing the kinds of fixes we've\ndone, but we should revisit the issue for v6.5.x...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Sun, 09 May 1999 21:16:36 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] NUMERIC type conversions leave much to be desired"
},
{
"msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> I still like the idea of hanging on to the original string form of the\n>> constant long enough so that parser_typecast can feed that directly to\n>> the target type's xxx_in() routine, and not have to worry about\n>> conversion errors.\n\n> I agree. I'm just worried about losing the typing hints provided by\n> scan.l if we went to a \"string only\" solution.\n\nNo no, I didn't say that you can't keep T_Integer and T_Float nodes\nseparate. I was just suggesting that the *value* of one of these nodes\nmight be kept as a string (or, perhaps, both as a string and the numeric\nformat). That way, if you need to convert to some other type, you start\nfrom the original string and don't have to risk a \"lossy compression\"\ninto floating point.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 09 May 1999 17:41:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] NUMERIC type conversions leave much to be desired "
}
] |
[
{
"msg_contents": "See subject. This is a pretty serious shortcoming for anyone trying\nto use NUMERIC...\n\nIt'd also be nice if psql included the precision in \\d display,\nbut that's not critical.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 07 May 1999 18:30:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_dump doesn't dump NUMERIC precision info"
}
] |
[
{
"msg_contents": "Hello all,\n\nI got some errors when testing vacuum with other concurrent \nsessions. The following patch would fix some of the cases such that \n\n\t1.ERROR : moving chain: failed to add item with len = ......\n\t2.ERROR : Cannot insert a duplicate key into a unique index\n\n\nAnother bug seems to remain unsolved.\n\nVACUUM shows\n \n\tNOTICE :NUMBER OF INDEX' TUPLES (...) IS NOT THE SAME\n \tAS HEAP' (...)\n\nand after vacuum other sessions show \n \n\tERROR : Cannot insert a duplicate key into a unique index\n\nAFAIC when moving update chain of tuples,vpd_offsets is maintained \nonly for one page,even if tuples in chain exist in plural pages.\nSo there seems to be cases that some index tuples remain alive \nwhich point out invalid(or nonexistent by truncation) tids after vacuum. \n\nThanks.\n\nHiroshi Inoue\[email protected]\n\n*** backend/commands/vacuum.c.orig\tTue Apr 13 16:01:16 1999\n--- backend/commands/vacuum.c\tSat May 8 17:23:50 1999\n***************\n*** 1336,1342 ****\n \t\t\t\t\t */\n \t\t\t\t\tToPage = BufferGetPage(cur_buffer);\n \t\t\t\t\t/* if this page was not used before - clean it */\n! \t\t\t\t\tif (!PageIsEmpty(ToPage) && vtmove[i].cleanVpd)\n \t\t\t\t\t\tvc_vacpage(ToPage, vtmove[ti].vpd);\n \t\t\t\t\theap_copytuple_with_tuple(&tuple, &newtup);\n \t\t\t\t\tRelationInvalidateHeapTuple(onerel, &tuple);\n--- 1336,1342 ----\n \t\t\t\t\t */\n \t\t\t\t\tToPage = BufferGetPage(cur_buffer);\n \t\t\t\t\t/* if this page was not used before - clean it */\n! \t\t\t\t\tif (!PageIsEmpty(ToPage) && vtmove[ti].cleanVpd)\n \t\t\t\t\t\tvc_vacpage(ToPage, vtmove[ti].vpd);\n \t\t\t\t\theap_copytuple_with_tuple(&tuple, &newtup);\n \t\t\t\t\tRelationInvalidateHeapTuple(onerel, &tuple);\n***************\n*** 1355,1361 ****\n \t\t\t\t\tnewitemid = PageGetItemId(ToPage, newoff);\n \t\t\t\t\tpfree(newtup.t_data);\n \t\t\t\t\tnewtup.t_data = (HeapTupleHeader) PageGetItem(ToPage, newitemid);\n! 
\t\t\t\t\tItemPointerSet(&(newtup.t_self), vtmove[i].vpd->vpd_blkno, newoff);\n \t\t\t\t\t/*\n \t\t\t\t\t * Set t_ctid pointing to itself for last tuple in\n \t\t\t\t\t * chain and to next tuple in chain otherwise.\n--- 1355,1361 ----\n \t\t\t\t\tnewitemid = PageGetItemId(ToPage, newoff);\n \t\t\t\t\tpfree(newtup.t_data);\n \t\t\t\t\tnewtup.t_data = (HeapTupleHeader) PageGetItem(ToPage, newitemid);\n! \t\t\t\t\tItemPointerSet(&(newtup.t_self), vtmove[ti].vpd->vpd_blkno, newoff);\n \t\t\t\t\t/*\n \t\t\t\t\t * Set t_ctid pointing to itself for last tuple in\n \t\t\t\t\t * chain and to next tuple in chain otherwise.\n\n",
"msg_date": "Sat, 8 May 1999 17:57:23 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "MVCC vacuum error"
},
{
"msg_contents": "Hiroshi Inoue wrote:\n> \n> Hello all,\n> \n> I got some errors when testing vacuum with other concurrent\n> sessions. The following patch would fix some of the cases such that\n...\n\nThanks Hiroshi!\nI still hadn't time to test vacuum being busy with locking -:(\n\nApplyed & committed.\n\nVadim\n",
"msg_date": "Sun, 09 May 1999 22:00:39 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] MVCC vacuum error"
}
] |
[
{
"msg_contents": "Hi, I'm a programmer in Brazil (i.e., kinda far away from the\ntechnological mainstream) and I've been trying to learn more about the\nPostgreSQL internals, maybe even help with its development. So I\ndownloaded the source tarball for the last release. Thing is, last time\nI checked, there was minimal developer-oriented documentation, and all\nthose hundreds of barely-commented source files can be very intimidating\n- specially when you're not looking for anything specific, but only\ntrying to figure out how the software /works/.\n\nI understand that this is what I should expect since you are already\nfamiliar with the source code, but you could use some fresh blood,\nright? So I'd really appreciate it if anyone would point me in the\ndirection of some in-depth docos, detailed texts about the Postgres\narchitecture or anything else that would help me get started hacking.\n\n\nThanks in advance,\n\n-- \nRafael Kaufmann\n<[email protected]>\n",
"msg_date": "Sat, 08 May 1999 10:13:20 -0300",
"msg_from": "Rafael Kaufmann <[email protected]>",
"msg_from_op": true,
"msg_subject": "Help a newbie?"
},
{
"msg_contents": "Rafael Kaufmann <[email protected]> writes:\n> technological mainstream) and I've been trying to learn more about the\n> PostgreSQL internals, maybe even help with its development. So I\n> downloaded the source tarball for the last release.\n\nActually, I'd recommend working from a recent snapshot --- there's been\nmany changes and improvements since 6.4.2.\n\n> So I'd really appreciate it if anyone would point me in the\n> direction of some in-depth docos, detailed texts about the Postgres\n> architecture or anything else that would help me get started hacking.\n\nThere is a fair amount of stuff in the Administrator's Guide,\nProgrammer's Guide, and Developer's Guide parts of the manual.\nIt won't all make sense on first reading, but I'd certainly recommend\nreading those parts of the manual thoroughly.\n\nThere are also useful bits of documentation buried in less-obvious\nplaces, such as the backend flowchart in src/tools/backend/flow.jpg\nand the README files found in many of the sourcecode directories.\n\nOne thing to watch out for is that not all of the doco is up to date\n:-(. Checking the source code is always the most reliable guide to\nHow It Really Works.\n\nA really useful trick is to set up a full-text index of the source tree\n(I use 'glimpse' from http://glimpse.cs.arizona.edu/) so that you can\neasily find all the uses of a particular routine, look for the place\nwhere a particular error message is generated, etc.\n\nIf you want to delve into the actual database algorithms then you should\nhave one of the standard database textbooks at hand (anyone care to\nrecommend some titles?).\n\nIf you've got specific questions about where to look for particular\nfunctions, feel free to ask ... \n\nWelcome aboard!\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 08 May 1999 11:12:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Help a newbie? "
},
{
"msg_contents": "> Rafael Kaufmann <[email protected]> writes:\n> > technological mainstream) and I've been trying to learn more about the\n> > PostgreSQL internals, maybe even help with its development. So I\n> > downloaded the source tarball for the last release.\n> \n> Actually, I'd recommend working from a recent snapshot --- there's been\n> many changes and improvements since 6.4.2.\n> \n> > So I'd really appreciate it if anyone would point me in the\n> > direction of some in-depth docos, detailed texts about the Postgres\n> > architecture or anything else that would help me get started hacking.\n> \n> There is a fair amount of stuff in the Administrator's Guide,\n> Programmer's Guide, and Developer's Guide parts of the manual.\n> It won't all make sense on first reading, but I'd certainly recommend\n> reading those parts of the manual thoroughly.\n> \n> There are also useful bits of documentation buried in less-obvious\n> places, such as the backend flowchart in src/tools/backend/flow.jpg\n> and the README files found in many of the sourcecode directories.\nDon't forget the developers FAQ.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 8 May 1999 12:51:59 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Help a newbie?"
},
{
"msg_contents": "Tom Lane wrote:\n\n> \n> There is a fair amount of stuff in the Administrator's Guide,\n> Programmer's Guide, and Developer's Guide parts of the manual.\n> It won't all make sense on first reading, but I'd certainly recommend\n> reading those parts of the manual thoroughly.\n\nI already did, but not very carefully. So I'll be sure to read those\nagain. Nonetheless, the absence of a text specifically detailing the\ninner workings of the system \n\n<SNIP>\n\n> \n> A really useful trick is to set up a full-text index of the source tree\n> (I use 'glimpse' from http://glimpse.cs.arizona.edu/) so that you can\n> easily find all the uses of a particular routine, look for the place\n> where a particular error message is generated, etc.\n\nI'll be sure to get Glimpse running as soon as I have a stable release\nof LinuxPPC R5 (for which I've been waiting since December) on my box.\n(Yes, that does mean I'm still stuck with the mockery of a system that\nis the MacOS... don't flame me)\n\n> \n> If you want to delve into the actual database algorithms then you should\n> have one of the standard database textbooks at hand (anyone care to\n> recommend some titles?).\n\nSo I guess that means I'll be paying computerliteracy.com a little\nvisit, yes? (Or is there stuff like that available online?)\n\n<SNIP>\n\n> \n> Welcome aboard!\n\nHey, thanks! :)\n\n\n-- \nRafael Kaufmann\n<[email protected]>\n",
"msg_date": "Sat, 08 May 1999 14:06:55 -0300",
"msg_from": "Rafael Kaufmann <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Help a newbie?"
},
{
"msg_contents": "> Don't forget the developers FAQ.\n\n... and there is some new stuff donated by Stefan Simkovics from his\nMaster's Thesis which walks you through the query processing. I'm in\nthe process of integrating it into the docs; you might find the html\ndocs on the web site or look for the html docs built in\n\n ftp://ftp.postgresql.org/pub/doc/*.tar.gz\n\nThe first chapters in the Developer's or Programmer's Guide contain\nthis.\n\n - Tom\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Sat, 08 May 1999 17:18:36 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Help a newbie?"
}
] |
[
{
"msg_contents": "This is a post from mod_perl mailing list. I think it would be interesting\nto have a *real* benhcmarks, say more or less standard Web+db application\nwith using modern technique like mod_perl and persistent connection to db.\nI'm using postgres since 1995 and quite satisfied with its features and\nfast development, and support from mailing list. But in real life\nevery project needs good presentation and it's very difficult to \nexplain your boss or customer that Postgres is a good software without\nreal benchmarks and happy stories. Also, good web site is very important,\nespecially for attracting of new users. There was a thread in mailing list\nabout new feel'n look of www.postgresql.org and I saw some very promising\nvariants, what's going with this ?\n\n\tRegards,\n\n\t\tOleg\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n---------- Forwarded message ----------\nDate: Sat, 08 May 1999 14:03:39 +0000\nFrom: Matt Sergeant <[email protected]>\nTo: Steve Maring <[email protected]>\nCc: [email protected], [email protected]\nSubject: Re: help impressing the crowd\n\nSteve Maring wrote:\n> \n> I'm working on a customer support site for GTE Enterprise Solutions. We\n> provide real estate MLS services to REALTOR associations around the\n> country. The total customer base is about 120,000 and quickly growing.\n> The site will be a main source of information for customers. It uses\n> mod_perl, OpenSSL, HTML::Embperl, Apache::Session, and PHP3 for some\n> legacy stuff. It has an architecture that feeds everything dynamically\n> from a database using session management and per user routing and access\n> restrictions. 
I am in the process of studying design patterns right now\n> to see what the best fit is for this application. I will then be\n> optimizing for database connections, database performance (PostgreSQL),\n ^^^^^^^^^^^^\n\nWe've found PostgreSQL to be a severe bottleneck on our system. We're\nnot sure how much of a speedup we can get by migrating to a better DBMS,\nbut we're fairly sure it would be good. Unfortunately postgreSQL just\nisn't fast enough for a high transaction web site IMHO. We just haven't\nhad time yet to transfer it to MySQL or SQL Server (hack, puke,\nchoke..). But we have done some profiling that indicated that postgreSQL\nwas the bottleneck. All this and we're parsing XML on the server too...\n(for those that don't know - XML parsing is quite a bad bottleneck in\nitself).\n\nCan't really help Stas out with the link though - it's intranet stuff\nonly. Although it is a company wide timesheet system for 600 (so far)\nusers. Going very well. I may be giving a presentation at TPC, so I\ndon't really want any spoilers.\n\n-- \n<Matt/>\n\n| FastNet Software Ltd | XML | Perl | Databases |\n| http://come.to/fastnet | Bringing your data onto the Web |\n| See web site for details, articles, FAQ's and more |\n| ICQ# 14968768 | Email for contract availabilty |\n\n\n",
"msg_date": "Sat, 8 May 1999 17:34:51 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: help impressing the crowd (fwd)"
}
] |
[
{
"msg_contents": "Thus spake Thomas Lockhart\n> > I think the docs for SELECT INTO should be changed, as (if memory\n> > serves) it always creates, while INSERT INTO should always require the\n> > table to already exist.\n> \n> I went back and tested v6.3.2 and found the same behavior. D'Arcy,\n> would you have time to touch the docs? It would be in\n> \n> doc/src/sgml/ref/{insert,select}.sgml\n> \n> Note that there is a separate section for SELECT INTO near the bottom\n> of select.sgml.\n\nDoes this cover it?\n\nRCS file: RCS/select.sgml,v\nretrieving revision 1.1\ndiff -c -r1.1 select.sgml\n*** select.sgml 1999/05/08 13:52:15 1.1\n--- select.sgml 1999/05/08 13:56:18\n***************\n*** 85,92 ****\n If the INTO TABLE clause is specified, the result of the\n query will be stored in another table with the indicated\n name.\n! If <replaceable class=\"PARAMETER\">new_table</replaceable> does\n! not exist, it will be created automatically.\n Refer to <command>SELECT INTO</command> for more information.\n <note>\n <para>\n--- 85,92 ----\n If the INTO TABLE clause is specified, the result of the\n query will be stored in another table with the indicated\n name.\n! The <replaceable class=\"PARAMETER\">new_table</replaceable> will\n! be created automatically and should not exist before this command.\n Refer to <command>SELECT INTO</command> for more information.\n <note>\n <para>\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Sat, 8 May 1999 09:58:16 -0400 (EDT)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] INSERT INTO"
},
{
"msg_contents": "> Does this cover it?\n\nGreat. Thanks!\n\n - Tom\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Sat, 08 May 1999 16:45:45 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] INSERT INTO"
},
{
"msg_contents": "APplied.\n\n\n> Thus spake Thomas Lockhart\n> > > I think the docs for SELECT INTO should be changed, as (if memory\n> > > serves) it always creates, while INSERT INTO should always require the\n> > > table to already exist.\n> > \n> > I went back and tested v6.3.2 and found the same behavior. D'Arcy,\n> > would you have time to touch the docs? It would be in\n> > \n> > doc/src/sgml/ref/{insert,select}.sgml\n> > \n> > Note that there is a separate section for SELECT INTO near the bottom\n> > of select.sgml.\n> \n> Does this cover it?\n> \n> RCS file: RCS/select.sgml,v\n> retrieving revision 1.1\n> diff -c -r1.1 select.sgml\n> *** select.sgml 1999/05/08 13:52:15 1.1\n> --- select.sgml 1999/05/08 13:56:18\n> ***************\n> *** 85,92 ****\n> If the INTO TABLE clause is specified, the result of the\n> query will be stored in another table with the indicated\n> name.\n> ! If <replaceable class=\"PARAMETER\">new_table</replaceable> does\n> ! not exist, it will be created automatically.\n> Refer to <command>SELECT INTO</command> for more information.\n> <note>\n> <para>\n> --- 85,92 ----\n> If the INTO TABLE clause is specified, the result of the\n> query will be stored in another table with the indicated\n> name.\n> ! The <replaceable class=\"PARAMETER\">new_table</replaceable> will\n> ! be created automatically and should not exist before this command.\n> Refer to <command>SELECT INTO</command> for more information.\n> <note>\n> <para>\n> \n> -- \n> D'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\n> http://www.druid.net/darcy/ | and a sheep voting on\n> +1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 10 May 1999 14:19:04 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] INSERT INTO"
}
] |
[
{
"msg_contents": "> Update regress test for CASE to enable tests involving joins.\n\n*Thanks* for finding and fixing the problems with CASE. I was hoping\nsomeone would track them down since I couldn't seem to get a handle on\nit...\n\nbtw, I'm working on additions to the User's Guide to start documenting\nCOALESCE(), IFNULL(), and CASE(), now that they seem to work.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Sat, 08 May 1999 17:04:17 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [COMMITTERS] 'pgsql/src/test/regress/expected case.out'"
}
] |
[
{
"msg_contents": "Hey,\nFound a little \"bug\" in pg_dump today.\n[snip]\n-- finding the attribute names and types for each table \n-- finding the attrs and types for table: 'members' \n-- finding the attrs and types for table: 'currentuser' \n-- finding DEFAULT expression for attr: 'id' \n-- finding the attrs and types for table: 'memberaccess' \n-- finding DEFAULT expression for attr: 'id' \n-- flagging inherited attributes in subtables \n-- dumping out user-defined types \n-- dumping out tables \n-- dumping out user-defined procedural languages \n-- dumping out user-defined functions \n-- dumping out user-defined aggregates \n-- dumping out user-defined operators \n-- dumping out the contents of all of 5 tables \n-- dumping out the contents of Table 'members' \n-- dumping out the contents of Table 'currentuser' \n-- dumping out the contents of Table 'memberaccess' \n\n | postgres | currentuser | table |\n | postgres | currentuser_id_seq | sequence |\n | postgres | memberaccess | table |\n | postgres | memberaccess_id_seq | sequence |\n | postgres | members | table |\n\nAs you can see, it says it's dumping out 5 tables, while there is only 3\nreal tables. I guess it's also counting the 2 sequences as tables(or\ntuples in this case). This might be right(sequences being tuples), but in\nthis case they should in my opinion not be counted..\n\nAlso, in getTables() in pg_dump.c there are at least a couple of these:\n if (!res ||\n PQresultStatus(res) != PGRES_COMMAND_OK)\n {\n fprintf(stderr, \"BEGIN command failed\\n\");\n exit_nicely(g_conn);\n }\n\nShouldn't this be more like\n if (!res ||\n PQresultStatus(res) != PGRES_COMMAND_OK)\n {\n fprintf(stderr, \"BEGIN command failed(%s)\\n\", PGresultErrorMessage(res));\n exit_nicely(g_conn);\n }\nor\n if (!res)\n {\n fprintf(stderr, \"BEGIN command failed\\n\");\n exit_nicely(g_conn);\n } else if(PGresultStatus(res) != PGRES_COMMAND_OK) {\n fprintf(stderr, \"BEGIN command failed. 
ERROR: %s\\n\", PGresultErrorMessage(res));\n exit_nicely(g_conn);\n }\n\nThanks,\nOle Gjerde\n\n",
"msg_date": "Sat, 8 May 1999 12:18:41 -0500 (CDT)",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Minor pg_dump buglet"
},
{
"msg_contents": "Sure, can you send us a patch:\n\n\n> Also, in getTables() in pg_dump.c there are at least a couple of these:\n> if (!res ||\n> PQresultStatus(res) != PGRES_COMMAND_OK)\n> {\n> fprintf(stderr, \"BEGIN command failed\\n\");\n> exit_nicely(g_conn);\n> }\n> \n> Shouldn't this be more like\n> if (!res ||\n> PQresultStatus(res) != PGRES_COMMAND_OK)\n> {\n> fprintf(stderr, \"BEGIN command failed(%s)\\n\", PGresultErrorMessage(res));\n> exit_nicely(g_conn);\n> }\n> or\n> if (!res)\n> {\n> fprintf(stderr, \"BEGIN command failed\\n\");\n> exit_nicely(g_conn);\n> } else if(PGresultStatus(res) != PGRES_COMMAND_OK) {\n> fprintf(stderr, \"BEGIN command failed. ERROR: %s\\n\", PGresultErrorMessage(res));\n> exit_nicely(g_conn);\n> }\n> \n> Thanks,\n> Ole Gjerde\n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 10 May 1999 14:20:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Minor pg_dump buglet"
}
] |
[
{
"msg_contents": "src/test/regress/sql/\n create_function_1.sql\n create_function_2.sql\n constraints.sql\n copy.sql\n misc.sql\n install_plpgsql.sql\n\nseem to be missing from the cvs tree? (Got case.sql today :)\n\nCheers,\n\nPatrick\n",
"msg_date": "Sat, 8 May 1999 18:41:11 +0100 (BST)",
"msg_from": "\"Patrick Welche\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "sql regress files missing"
},
{
"msg_contents": "\"Patrick Welche\" <[email protected]> writes:\n> src/test/regress/sql/\n> create_function_1.sql\n> create_function_2.sql\n> constraints.sql\n> copy.sql\n> misc.sql\n> install_plpgsql.sql\n\n> seem to be missing from the cvs tree? (Got case.sql today :)\n\nThey're not part of the cvs tree --- those files are all generated\nfrom the prototypes in src/test/regress/input/ during 'make all'.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 08 May 1999 17:13:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] sql regress files missing "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> \"Patrick Welche\" <[email protected]> writes:\n> > src/test/regress/sql/\n...\n> > seem to be missing from the cvs tree? (Got case.sql today :)\n> \n> They're not part of the cvs tree --- those files are all generated\n> from the prototypes in src/test/regress/input/ during 'make all'.\n\nAh - thank you - I just made runtest.\n\nCheers,\n\nPatrick\n",
"msg_date": "Sun, 9 May 1999 11:32:25 +0100 (BST)",
"msg_from": "\"Patrick Welche\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] sql regress files missing"
}
] |
[
{
"msg_contents": "\nHey Leute!\nKann jemand von euch mir eine gute URL sagen, wo viel Hack - und Crack\nProgramme man herunterladen kann?!\nUnd wo kriegt man heute kostenlos Xenix oder Linux (mit GUI nat�rlich)?\n\n\n\nDanke im Voraus\n\n\n\n",
"msg_date": "Sat, 8 May 1999 22:58:01 +0200",
"msg_from": "\"GreatFreak\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Frage!"
}
] |
[
{
"msg_contents": "I've built the new cvsup-16.0 for linux, and posted packages at\n\n ftp://ftp.postgresql.org/pub/CVSup/\n\nThere are rpms for both static and dynamic executables for both\nclients and servers (as well as a python-based utility program). The\ndynamic executables require that a few Modula-3 rpms be installed\nalso, which I've posted in the same location.\n\nI've also packaged the static binaries in gzipped tar files for glibc2\nsystems which do not have rpm available.\n\nAll of the new binaries were built for glibc2 on a RH5.2 system. btw,\nthey all built very easily. Thanks John!\n\nThe rpms include some docs, startup files, and some example files.\nPlease let me know if you find any problems with the packaging...\n\n - Tom\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Sun, 09 May 1999 06:45:17 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "New version of CVSup built for linux"
},
{
"msg_contents": "> I've built the new cvsup-16.0 for linux, and posted packages at\n> ftp://ftp.postgresql.org/pub/CVSup/\n> The rpms include some docs, startup files, and some example files.\n> Please let me know if you find any problems with the packaging...\n\nOK, I've decided that my original packaging sucks ;)\n\nI've posted new tar files which contain statically-built versions of\ncvsup and cvsupd (these are probably equivalent to the originals\nposted last night). I'm building only dynamically-linked versions for\nthe rpms, and have included all required files including the few\nModula-3 libraries which are required (as a separate package so the\nfull Modula-3 installation can be used if you want).\n\nI've removed my original cvsup-16.0-1 rpms, and put some 16.0-2 rpms\nin a \"beta\" subdirectory:\n ftp://ftp.postgresql.org/pub/CVSup/beta/\n\nWould someone be willing to test these? All of my systems already have\ncvsup built and Modula-3 installed, so it would be better to have this\ntested on a relatively clean system.\n\n - Tom\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Sun, 09 May 1999 23:34:17 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] New version of CVSup built for linux"
}
] |
[
{
"msg_contents": "Latest 6.5 cvs:\n\nmake[3]: Entering directory /u/postgres/cvs/pgsql/src/interfaces/ecpg/preproc'\n/usr/bin/bison -y -d preproc.y\n\"preproc.y\", line 816: type redeclaration for table_list\nmake[3]: *** [preproc.c] Error 1\n\n\tRegards,\n\n\t\tOleg\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Sun, 9 May 1999 10:56:19 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "problem compiling 6.5 cvs"
}
] |
[
{
"msg_contents": "> Here is a patch for version 6.4.2 that corrects several serious bugs\n> in Postgres' shared-memory-hashtable code. These problems are fairly\n> harmless until you get to more than 256 shared objects of a given type\n> (locks, buffers, etc) ... but then things get nasty. For more info see\n> the discussion on the pgsql-hackers list in late Feb. 99 (thread title\n> \"Anyone understand shared-memory space usage?\"). The equivalent\n> changes are already in the 6.5 source code, but not in 6.4.*.\n\nQuick question: when you do a query with a join why does the hash table\ncode need to use shared memory? Can't it do the join within its own memory\nspace?\n\n[I remember seeing a post recently with someone talking about how\ncurrently the code uses static sized buffers and so thats why we get the\nhash table out of memory errors but I didn't quite follow what was going\non]\n\nthanks,\nWayne\n\n------------------------------------------------------------------------------\nWayne Piekarski Tel: (08) 8221 5221\nResearch & Development Manager Fax: (08) 8221 5220\nSE Network Access Pty Ltd Mob: 0407 395 889\n222 Grote Street Email: [email protected]\nAdelaide SA 5000 WWW: http://www.senet.com.au\n",
"msg_date": "Sun, 9 May 1999 17:43:03 +0930 (CST)",
"msg_from": "Wayne Piekarski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 6.4.2 patch for shared-memory hashtable bugs"
}
] |
[
{
"msg_contents": "> Und wo kriegt man heute kostenlos Xenix oder Linux (mit GUI nat�rlich)?\n\nLinux kannst du dir auf fast allen Herstellerseiten runterladen, braucht\nhalt nur so seine Zeit.\n\n--\nGru�\n\nJohannes Weitzel\n\n\n",
"msg_date": "Sun, 09 May 1999 12:22:17 +0200",
"msg_from": "[email protected] (Johannes Weitzel)",
"msg_from_op": true,
"msg_subject": "Re: Frage!"
}
] |
[
{
"msg_contents": "Has this been fixed in 6.5 beta?\n\n> On Tue, Feb 02, 1999 at 07:33:48PM +0800, Vikrant Rathore wrote:\n> \n> > If your query is bigger than 8192 bytes then libpq simply truncates it\n> > without giving any warning\n> \n> I've read that in the docs, too (while back, didn't try it out). I have\n> to say, though, that I think this behaviouur is somewhat less than optimal.\n> Personally, I strongly prefer it if error conditions are raised by software\n> when it isn't able to carry out the requested function.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 9 May 1999 07:53:57 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [ADMIN] maximum attribute record."
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Has this been fixed in 6.5 beta?\n>> On Tue, Feb 02, 1999 at 07:33:48PM +0800, Vikrant Rathore wrote:\n>> \n>>>> If your query is bigger than 8192 bytes then libpq simply truncates it\n>>>> without giving any warning\n>> \n>> I've read that in the docs, too (while back, didn't try it out).\n\nThat hasn't been true since at least 6.3.2:\n\n/* check to see if the query string is too long */\nif (strlen(query) > MAX_MESSAGE_LEN - 2)\n{\n\tsprintf(conn->errorMessage, \"PQsendQuery() -- query is too long. \"\n\t\t\t\"Maximum length is %d\\n\", MAX_MESSAGE_LEN - 2);\n\treturn 0;\n}\n\nPossibly the documentation needs updated, but all I can find is:\n\n: Caveats\n: \n: The query buffer is 8192 bytes long, and queries over that length will\n: be rejected. \n\nIs there another place that says the wrong thing?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 09 May 1999 13:04:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [ADMIN] maximum attribute record. "
}
] |
[
{
"msg_contents": "It has been put on hold, I'll begin/continue working on it when we are done with \nthe beta :)\n\n-Ryan \n\n> Can I ask where we are with this.\n> \n> > Hello hackers...\n> > \n> > I've spent the last couple of evening tracing through the drop \ntable/sequence \n> > code trying to figure out the best to drop the sequence when the table is \n> > dropped.\n> > \n> > Here is what I am proposing to do. I just wanted to throw out my idea and \nget \n> > some feedback since I am just beginning to understand how the backend works.\n> > \n> > Take the following example:\n> > CREATE TABLE foo (i SERIAL, t text);\n> > \n> > This creates table foo, index foo_i_key, and the sequence foo_i_seq.\n> > \n> > The sequence ocuppies three of the system tables: pg_class, pg_attribute, \nand \n> > pg_attrdef. When the table gets dropped, the table foo and foo_i_key are \n> > removed. The default portion of the sequence is also removed from the \n> > pg_attrdef system table, because the attrelid matches the table's oid. \n> > \n> > I believe this is incorrect ... 
I think the attrelid should match the \nseqences \n> > oid instead of the table's oid to prevent the following error:\n> > \n> > ryan=> CREATE TABLE foo (i SERIAL, t text);\n> > NOTICE: CREATE TABLE will create implicit sequence foo_i_seq for SERIAL \ncolumn \n> > foo.i\n> > NOTICE: CREATE TABLE/UNIQUE will create implicit index foo_i_key for table \nfoo\n> > CREATE\n> > \n> > ryan=> \\d\n> > \n> > Database = ryan\n> > +------------------+----------------------------------+----------+\n> > | Owner | Relation | Type |\n> > +------------------+----------------------------------+----------+\n> > | rbrad | foo | table |\n> > | rbrad | foo_i_key | index |\n> > | rbrad | foo_i_seq | sequence |\n> > +------------------+----------------------------------+----------+\n> > \n> > ryan=> \\d foo;\n> > \n> > Table = foo\n> > \n+----------------------------------+----------------------------------+-------+\n> > | Field | Type | \nLength|\n> > \n+----------------------------------+----------------------------------+-------+\n> > | i | int4 not null default nextval('f | \n4 |\n> > | t | text | \nvar |\n> > \n+----------------------------------+----------------------------------+-------+\n> > Index: foo_i_key\n> > \n> > ryan=> drop sequence foo_i_seq;\n> > DROP\n> > \n> > ryan=> \\d\n> > \n> > Database = ryan\n> > +------------------+----------------------------------+----------+\n> > | Owner | Relation | Type |\n> > +------------------+----------------------------------+----------+\n> > | rbrad | foo | table |\n> > | rbrad | foo_i_key | index |\n> > +------------------+----------------------------------+----------+\n> > ryan=> \\d foo;\n> > \n> > Table = foo\n> > \n+----------------------------------+----------------------------------+-------+\n> > | Field | Type | \nLength|\n> > \n+----------------------------------+----------------------------------+-------+\n> > | i | int4 not null default nextval('f | \n4 |\n> > | t | text | \nvar |\n> > 
\n+----------------------------------+----------------------------------+-------+\n> > Index: foo_i_key\n> > \n> > ryan=> insert into foo (t) values ('blah');\n> > ERROR: foo_i_seq.nextval: sequence does not exist\n> > \n> > ryan=>\n> > \n> > This looks pretty easy to fix.\n> > \n> > Back to my origional point .. I think we need another system table to map \nthe \n> > sequence oid to the table's oid. I've noticed this done with the \ninheritance, \n> > indexes, etc ... but I don't see a pg_sequence table.\n> > \n> > I would be glad to try and finish this in the next couple of evenings if \nthis \n> > looks like the correct approach to the problem, otherwise could someone \npoint me \n> > in the right direction :)\n> > \n> > Thanks,\n> > -Ryan\n> > \n> > \n> \n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n",
"msg_date": "Sun, 9 May 1999 18:42:35 -0600 (MDT)",
"msg_from": "Ryan Bradetich <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Sequences...."
}
] |
[
{
"msg_contents": "\n\nIs this done?\n\n\n> The pages at http://postgresql.nextpath.com/docs/user/sql-select.htm and\n> http://postgresql.nextpath.com/docs/user/sql-insert.htm imply that SELECT\n> INTO and INSERT INTO <table> SELECT <query> will work with existing\n> tables but in fact it always creates the table and issues an error if\n> they already exist. What needs to be fixed here? The code or the docs?\n> \n> -- \n> D'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\n> http://www.druid.net/darcy/ | and a sheep voting on\n> +1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 10 May 1999 00:53:13 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] INSERT INTO"
},
{
"msg_contents": "Thus spake Bruce Momjian\n> Is this done?\n> > The pages at http://postgresql.nextpath.com/docs/user/sql-select.htm and\n> > http://postgresql.nextpath.com/docs/user/sql-insert.htm imply that SELECT\n> > INTO and INSERT INTO <table> SELECT <query> will work with existing\n> > tables but in fact it always creates the table and issues an error if\n> > they already exist. What needs to be fixed here? The code or the docs?\n\nI posted the correction. I think Tom Lane is putting it into the docs.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Mon, 10 May 1999 06:46:40 -0400 (EDT)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] INSERT INTO"
},
{
"msg_contents": "> I posted the correction. I think Tom Lane is putting it into the docs.\n\nActually, the other tgl has some patches :)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Mon, 10 May 1999 15:26:52 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] INSERT INTO"
},
{
"msg_contents": "\"D'Arcy\" \"J.M.\" Cain <[email protected]> writes:\n> I posted the correction. I think Tom Lane is putting it into the docs.\n\nNot me --- Lockhart.\n\nI know it's confusing having two guys with the initials tgl on the\nsame project ;-).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 10 May 1999 12:49:22 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] INSERT INTO "
},
{
"msg_contents": "Thus spake Tom Lane\n> \"D'Arcy\" \"J.M.\" Cain <[email protected]> writes:\n> > I posted the correction. I think Tom Lane is putting it into the docs.\n> \n> Not me --- Lockhart.\n> \n> I know it's confusing having two guys with the initials tgl on the\n> same project ;-).\n\nSheesh! We'll just have to get together somewhere all together for a beer\nand straighten all this out. :-)\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Mon, 10 May 1999 13:54:52 -0400 (EDT)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] INSERT INTO"
}
] |
[
{
"msg_contents": "\n\nCan someone comment on this? It looks like part of this is applied, but\nnot all of it.\n\n\n> Included are patches for 4/1 snapshot, submitted by Masaaki Sakaida.\n> \n> o Allow ecpg handle a floating point constant having more than 10 places\n> o Fix ecpg runtime library memory leak\n> --\n> Tatsuo Ishii\n> ------------------------------------------------------------\n> *** src/interfaces/ecpg/preproc/pgc.l.orig\tThu Apr 1 17:41:04 1999\n> --- src/interfaces/ecpg/preproc/pgc.l\tThu Apr 1 17:41:10 1999\n> ***************\n> *** 461,466 ****\n> --- 461,475 ----\n> \t\t\t\t\t}\n> \t\t\t\t\treturn ICONST;\n> \t\t\t\t}\n> + <C>{real}\t\t\t{\n> + \t\t\t\t\tchar* endptr;\n> + \n> + \t\t\t\t\terrno = 0;\n> + \t\t\t\t\tyylval.dval = strtod((char *)yytext,&endptr);\n> + \t\t\t\t\tif (*endptr != '\\0' || errno == ERANGE)\n> + \t\t\t\t\t\tyyerror(\"ERROR: Bad float input\");\n> + \t\t\t\t\treturn FCONST;\n> + \t\t\t\t}\n> <SQL>:{identifier}((\"->\"|\\.){identifier})*\t{\n> \t\t\t\t\tyylval.str = mm_strdup((char*)yytext+1);\n> \t\t\t\t\treturn(CVARIABLE);\n> \n> ------------------------------------------------------------\n> \n> \n> ------------------------------------------------------------\n> *** src/interfaces/ecpg/lib/ecpglib.c.orig\tThu Apr 1 17:10:52 1999\n> --- src/interfaces/ecpg/lib/ecpglib.c\tThu Apr 1 17:22:12 1999\n> ***************\n> *** 370,375 ****\n> --- 370,403 ----\n> \treturn (true);\n> }\n> \n> + static void\n> + free_variable(struct variable *var)\n> + {\n> + \tstruct variable\t*var_next;\n> + \n> + \tif( var == (struct variable *)NULL ) \n> + \t\treturn; \n> + \tvar_next = var->next;\n> + \tfree(var);\n> + \n> + \twhile(var_next)\n> + \t{\n> + \t\tvar = var_next;\n> + \t\tvar_next = var->next;\n> + \t\tfree(var);\n> + \t}\n> + }\n> + \n> + static void\n> + free_statement(struct statement *stmt)\n> + {\n> + \tif( stmt == (struct statement *)NULL ) \n> + \t\treturn;\n> + \tfree_variable(stmt->inlist);\n> + 
\tfree_variable(stmt->outlist);\n> + \tfree(stmt);\n> + }\n> + \n> static char *\n> next_insert(char *text)\n> {\n> ***************\n> *** 981,987 ****\n> \t\t\t\t\tstatus = false;\n> \t\t\t\t}\n> \n> - \t\t\t\tPQclear(results);\n> \t\t\t\tbreak;\n> \t\t\tcase PGRES_EMPTY_QUERY:\n> \t\t\t\t/* do nothing */\n> --- 1009,1014 ----\n> ***************\n> *** 1017,1022 ****\n> --- 1044,1050 ----\n> \t\t\t\tstatus = false;\n> \t\t\t\tbreak;\n> \t\t}\n> + \t\tPQclear(results);\n> \t}\n> \n> \t/* check for asynchronous returns */\n> ***************\n> *** 1037,1042 ****\n> --- 1065,1071 ----\n> \tva_list\t\targs;\n> \tstruct statement *stmt;\n> \tstruct connection *con = get_connection(connection_name);\n> + \tbool\t\tstatus;\n> \n> \tif (con == NULL)\n> \t{\n> ***************\n> *** 1057,1063 ****\n> \t\treturn false;\n> \t}\n> \n> ! \treturn (ECPGexecute(stmt));\n> }\n> \n> bool\n> --- 1086,1094 ----\n> \t\treturn false;\n> \t}\n> \n> ! \tstatus = ECPGexecute(stmt);\n> ! \tfree_statement(stmt);\n> ! \treturn (status);\n> }\n> \n> bool\n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 10 May 1999 01:00:45 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] ecpg fixes"
},
{
"msg_contents": "On Mon, May 10, 1999 at 01:00:45AM -0400, Bruce Momjian wrote:\n> \n> \n> Can someone comment on this? It looks like part of this is applied, but\n> not all of it.\n> ...\n \nIt should be all committed. That is I changed pgc.l a little bit so maybe\nthat's the reason. If something's really missing, please tell me.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n",
"msg_date": "Tue, 11 May 1999 10:23:29 +0200",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ecpg fixes"
},
{
"msg_contents": "> On Mon, May 10, 1999 at 01:00:45AM -0400, Bruce Momjian wrote:\n> > \n> > \n> > Can someone comment on this? It looks like part of this is applied, but\n> > not all of it.\n> > ...\n> \n> It should be all committed. That is I changed pgc.l a little bit so maybe\n> that's the reason. If something's really missing, please tell me.\n\nMichael, there was a function that looks like it fixed some memory leak.\nDo you have the patch there so you can package up one that matches the\ncurrent sources? I am stumped on what it was trying to do.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 12 May 1999 01:14:11 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] ecpg fixes"
}
] |
[
{
"msg_contents": "\nWhere did we leave this?\n\n\n> Given\n> \tcreate table t1 (name text, value float8);\n> \n> this works:\n> \tSELECT name, value FROM t1 GROUP BY name, value\n> \tHAVING value/AVG(value) > 0.75;\n> \n> but this doesn't:\n> \tSELECT name AS tempname, value FROM t1 GROUP BY name, value\n> \tHAVING value/AVG(value) > 0.75;\n> \tERROR: Illegal use of aggregates or non-group column in target list\n> \n> Curiously, it's fine if the HAVING clause is omitted ... since name is\n> not mentioned in the HAVING clause, I don't see the connection ...\n> \n> 6.4.2 and current sources show the same behavior.\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 10 May 1999 01:03:22 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Parser bug: alias is a \"non-group column\"?"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Where did we leave this?\n\nIt's still broken. I have the following notes in my todo list:\n\nInconsistent handling of attribute renaming:\n\tcreate table t1 (name text, value float8);\n\tselect name as n1 from t1 where n1 = 'one' ;\n\tERROR: attribute 'n1' not found\nbut\n\tSELECT name AS tempname, value FROM t1 GROUP BY name, value ;\n\tSELECT name AS tempname, value FROM t1 GROUP BY tempname, value ;\nboth work. Even stranger,\n\tSELECT name AS tempname, value FROM t1 GROUP BY name, value\n\tHAVING value/AVG(value) > 0.75;\n\tERROR: Illegal use of aggregates or non-group column in target list\n(it thinks tempname is not in the GROUP BY list) but\n\tSELECT name AS tempname, value FROM t1 GROUP BY tempname, value\n\tHAVING value/AVG(value) > 0.75;\nworks! (6.4.2 has same behavior for all cases...)\n\n\nLooks like the parser has some problems in the presence of column\nrenaming. Since 6.4.2 has the same bug I doubt this qualifies as a\nshowstopper for 6.5; I have other todo items that I consider higher\npriority. If someone else wants to dig into this, be my guest...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 10 May 1999 12:38:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Parser bug: alias is a \"non-group column\"? "
}
] |
[
{
"msg_contents": "> Given\n> create table t1 (name text, value float8);\n> \n> this fails:\n> SELECT name, value FROM t1 as touter WHERE\n> (value/(SELECT AVG(value) FROM t1 WHERE name = touter.name)) > 0.75;\n> ERROR: parser: '/' must return 'bool' to be used with subquery\n\n\nAren't you really saying WHERE col / (subselect). That doesn't return\nbool, so the message seems correct.\n\nWe don't allow subselects in target lists, or inside expressions.\n\n> \n> The code in parse_expr.c that produces this message claims to be\n> enforcing that \"sub-selects can only be used in WHERE clauses\".\n> Either the comment is inaccurate or the test is too restrictive.\n> If the test is correct then I think the error message is unhelpful.\n> Anybody understand this code well enough to know what's really going on?\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 10 May 1999 01:05:48 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] \"op must return bool to be used with subquery\"?"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Given\n>> create table t1 (name text, value float8);\n>> \n>> this fails:\n>> SELECT name, value FROM t1 as touter WHERE\n>> (value/(SELECT AVG(value) FROM t1 WHERE name = touter.name)) > 0.75;\n>> ERROR: parser: '/' must return 'bool' to be used with subquery\n\n> Aren't you really saying WHERE col / (subselect). That doesn't return\n> bool, so the message seems correct.\n\nNo, look again: the result of the subselect is being used as an operand\nwithin the WHERE clause:\n\tWHERE (value/(SUBSELECT)) > 0.75;\n\nIf the / were the toplevel operator in the WHERE then the message would\nmake sense, because the WHERE clause as a whole must yield boolean.\nBut that doesn't mean that the operator immediately above the subselect\nmust yield boolean.\n\nBesides, I already fixed this ;-)\n\n> We don't allow subselects in target lists, or inside expressions.\n\nWe don't allow 'em in target lists, I think (anyone understand why not?)\nbut they work fine inside expressions in WHERE or HAVING.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 10 May 1999 12:44:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] \"op must return bool to be used with subquery\"? "
}
] |
[
{
"msg_contents": "[snip]\n\n[ BTW, ImageViewer seems to make calls to following set of LOs *twice*\nto display an image. Why?\n\nlo_open\nlo_tell\nlo_lseek\nlo_lseek\nlo_read\nlo_close\n]\n\npeter: Hmmm, I'll have to check this, but the first one may be the\napplication loading the image, and the second when AWT forces a reload.\nI'm puzzled why its tell, lseek, lseek when I thought it should have\nbeen lseek, tell, lseek as that's ImageViewer finding out the size of\nthe blob.\n\nPossible solutions might be:\n\n(1) do a context switching in lo_read/lo_write\n\n(2) ask apps not to make LO calls between transactions\n\n(3) close LOs fd at commit\n\n(2) is the current situation but not very confortable for us. Also for\na certain app this is not a solution as I mentioned above. (3) seems\nreasonable but people might be surprised to find their existing apps\nwon't run any more. Moreover, changings might not be trivial and it\nmake me nervous since we don't have enough time before 6.5 is\nout. With (1) modifications would be minimum, and we can keep the\nbackward compatibility for apps. So my conclusion is that (1) is the\nbest. If there's no objection, I will commit the change for (1).\n---\nTatsuo Ishii\n\nPeter: I've modified my local copy of ImageViewer to use transactions,\nand should be committing it later today.\n\n(3) Wouldn't break JDBC, but in my mind (1) looks the safest.\n\nPeter\n\n--\nPeter T Mount, IT Section\[email protected]\nAnything I write here are my own views, and cannot be taken as the\nofficial words of Maidstone Borough Council\n",
"msg_date": "Mon, 10 May 1999 08:15:45 +0100",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] Re: SIGBUS in AllocSetAlloc & jdbc "
}
] |
[
{
"msg_contents": "GreatFreak wrote:\n\n> Hey Leute!\n> Kann jemand von euch mir eine gute URL sagen, wo viel Hack - und Crack\n> Programme man herunterladen kann?!\n> Und wo kriegt man heute kostenlos Xenix oder Linux (mit GUI nat�rlich)?\n>\n> Danke im Voraus\n\nOk, luckilly I can speak some german, so I can answer\nthis. Let's keep this all in English ok ?\n\nYou asked for a URL for hacking and cracking stuff.\nI would suggest : http://neworder.box.sk\n\nSecond questions was where you could get Linux / Xenix\nfor free. Well .. since Linux is open source it's _always_\nfree. I suggest ya download it somewhere (if you got cable\nmodem or such). Otherwise just buy a CD (only $5 or so).\n\n--\nB10m\n\n\"Hearken sons of the glorious empire\n Here we stand upon de Field of Blood\n Though on this day we may die,\n our legend shall live forever ...\"\n\n\n",
"msg_date": "Mon, 10 May 1999 09:46:40 +0200",
"msg_from": "\"M.J. Blom\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Frage!"
}
] |
[
{
"msg_contents": "Sorry for the delay.\n\n> Tatsuo Ishii <[email protected]> writes:\n> >>>> o It is repoted that NetBSD/m68k has bee broken since you put the\n> >>>> alignment stuff into configure, I'm not sure though.\n> \n> > More info. Install and initdb are ok. But destroydb and createdb never\n> > works. NetBSD/macppc 1.4_ALPHA has no problem.\n> \n> Hmm. What numbers is configure producing for the various ALIGNOF\n> values? (Look in config.h.)\n\n#define ALIGNOF_SHORT 2\n#define ALIGNOF_INT 2\n#define ALIGNOF_LONG 2\n#define ALIGNOF_LONG_LONG_INT 2\n#define ALIGNOF_DOUBLE 2\n#define MAXIMUM_ALIGNOF 2 \n\n> What exactly happens when you try a\n> createdb? Can you connect to the template1 database and do SQL stuff\n> after initdb, without having done a createdb?\n\n\"psql template1\" gets coredumped. But 6.4.2 clients can connect to the\nbackend. Seems something going wrong with libpq stuffs.\n---\nTatsuo Ishii\n",
"msg_date": "Mon, 10 May 1999 17:26:54 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Beta2? "
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n>>>>>>> o It is repoted that NetBSD/m68k has bee broken since you put the\n>>>>>>> alignment stuff into configure, I'm not sure though.\n>>\n>> Hmm. What numbers is configure producing for the various ALIGNOF\n>> values? (Look in config.h.)\n\n> #define ALIGNOF_SHORT 2\n> #define ALIGNOF_INT 2\n> #define ALIGNOF_LONG 2\n> #define ALIGNOF_LONG_LONG_INT 2\n> #define ALIGNOF_DOUBLE 2\n> #define MAXIMUM_ALIGNOF 2 \n\nOK, I guess that's reasonable for m68k hardware. I wonder whether\nanything is assuming that MAXALIGN is at least 4...\n\n>> What exactly happens when you try a\n>> createdb? Can you connect to the template1 database and do SQL stuff\n>> after initdb, without having done a createdb?\n\n> \"psql template1\" gets coredumped. But 6.4.2 clients can connect to the\n> backend. Seems something going wrong with libpq stuffs.\n\nDo you mean that psql itself (not the backend) is coredumping? Can you\nprovide a backtrace from the corefile?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 10 May 1999 12:46:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta2? "
},
{
"msg_contents": "> > #define ALIGNOF_SHORT 2\n> > #define ALIGNOF_INT 2\n> > #define ALIGNOF_LONG 2\n> > #define ALIGNOF_LONG_LONG_INT 2\n> > #define ALIGNOF_DOUBLE 2\n> > #define MAXIMUM_ALIGNOF 2 \n> \n> OK, I guess that's reasonable for m68k hardware. I wonder whether\n> anything is assuming that MAXALIGN is at least 4...\n> \n> >> What exactly happens when you try a\n> >> createdb? Can you connect to the template1 database and do SQL stuff\n> >> after initdb, without having done a createdb?\n> \n> > \"psql template1\" gets coredumped. But 6.4.2 clients can connect to the\n> > backend. Seems something going wrong with libpq stuffs.\n> \n> Do you mean that psql itself (not the backend) is coredumping? Can you\n> provide a backtrace from the corefile?\n\nYes, but I do not have the backtrace handy since that was reported\nfrom a guy, not by me.\n\nBTW, from interfaces/libpq/fe-exec.c:\n\n * Requirements for correct function are:\n * PGRESULT_ALIGN_BOUNDARY >= sizeof(pointer)\n *\t\tto ensure the initial pointer in a block is not overwritten.\n[snip]\n#define PGRESULT_ALIGN_BOUNDARY\t\tMAXIMUM_ALIGNOF /* from configure */\n\nI wonder there seems to be a problem if MAXIMUM_ALIGNOF == 2?\n---\nTatsuo Ishii\n",
"msg_date": "Tue, 11 May 1999 09:54:13 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Beta2? "
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n>> OK, I guess that's reasonable for m68k hardware. I wonder whether\n>> anything is assuming that MAXALIGN is at least 4...\n\n> BTW, from interfaces/libpq/fe-exec.c:\n> * Requirements for correct function are:\n> * PGRESULT_ALIGN_BOUNDARY >= sizeof(pointer)\n> *\t\tto ensure the initial pointer in a block is not overwritten.\n> [snip]\n> #define PGRESULT_ALIGN_BOUNDARY\t\tMAXIMUM_ALIGNOF /* from configure */\n\nI think you've spotted the problem, all right. I'll see what I can do\nabout it...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 11 May 1999 10:20:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta2? "
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> I wonder there seems to be a problem if MAXIMUM_ALIGNOF == 2?\n\nFixed, I think. Please ask your friend to try again with latest\nsources.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 12 May 1999 00:42:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Beta2? "
},
{
"msg_contents": ">Tatsuo Ishii <[email protected]> writes:\n>> I wonder there seems to be a problem if MAXIMUM_ALIGNOF == 2?\n>\n>Fixed, I think. Please ask your friend to try again with latest\n>sources.\n\nThanks! I'll let my friend know.\n--\nTatsuo Ishii\n",
"msg_date": "Wed, 12 May 1999 15:07:23 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Beta2? "
}
] |
[
{
"msg_contents": "\nI am start programming on Postgres and looking for how to return a set of\ntuples for a function;\n\nAs see on man pages of \"create_function\", there is an example of function\nhobbies which return \"set of HOBBIES\". \n\nexample:\n create function hobbies (EMP) returns set of HOBBIES\n as 'select (HOBBIES.all) from HOBBIES\n where $1.name = HOBBIES.person'\n language 'sql'\nresult:\n ERROR: parser: parse error at or near \"set\".\n\nAn on last line of the man pages said \"C functions cannot return a set of\nvalues.\" Is there this fix on lastest version 6.5beta1?\n\nWChart.\n\n\n",
"msg_date": "Mon, 10 May 1999 15:50:25 +0700 (GMT)",
"msg_from": "Werachart Jantarataeme <[email protected]>",
"msg_from_op": true,
"msg_subject": "Can a function return tuples "
}
] |
[
{
"msg_contents": "\nI have 2 table say c1,c2 (same attributes) and then I would like to \n\nCREATE VIEW c AS \n SELECT * FROM c1 \n UNION \n SELECT * FROM c2;\n\nBut the system not accept. (not implement)\nThen I change to use \nCRATE TABLE c (<** attributes are same as c1,c2 **>)\nCREATE RULE r1 AS ON SELECT to c \n DO INSTEAD \n SELECT * FROM c2 \n UNION \n SELECT * FROM c1;\n\nTbe system accept this rule but when I process 'SELECT * FROM c1;', the\nresult is not sastified; only tuples on c2 are display.\n\nIs there other way to get result like above. One I've thought in mind is:-\nCREATE RULE r1 AS ON SELECT to c\n DO INSTEAD\n SELECT funct_UNION(c1,c2);\nBut I have no idea how to do kinds of the function that return set of\ntuple\n\nWChart.\n\n\n",
"msg_date": "Mon, 10 May 1999 15:51:16 +0700 (GMT)",
"msg_from": "Werachart Jantarataeme <[email protected]>",
"msg_from_op": true,
"msg_subject": "How can I use UNION in VIEW/RULE? "
}
] |
[
{
"msg_contents": "\n> Offhand the obvious try doesn't crash it:\n> \n> regression=> select foo(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18);\n> ERROR: No such function 'foo' with the specified attributes\n> \nSQL functions do allow more than 8 parameters.\n\nAndreas \n",
"msg_date": "Mon, 10 May 1999 11:46:48 +0200",
"msg_from": "ZEUGSWETTER Andreas IZ5 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: Number of parameters in a sql function "
}
] |
[
{
"msg_contents": "Nueva Lista de SQL y Motores de Base de Datos en\nhttp://www.inmotion.cl/majordomo/sql/\n\n\n",
"msg_date": "10 May 1999 04:05:02 -0800",
"msg_from": "\"J M Doren\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "SQl Mail LIst (Spanish)"
}
] |
[
{
"msg_contents": "\n\nTom, is this resolved?\n\n\n> I have just finished characterizing a nasty little bug in table\n> destruction. Do this:\n> \tcreate table x (i int4);\n> \tcreate index xindex on x (i);\n> \tdrop table x;\n> The backend will now have one more open file than it had before\n> (use lsof or similar tool to check). Repeat enough times, and\n> the backend crashes for lack of file descriptors.\n> \n> It appears that the file that is left open is the index relation.\n> The index is *not* open at the end of the CREATE INDEX command;\n> apparently, DROP TABLE opens the index for some reason, and then\n> forgets to close the file descriptor when it destroys the index.\n> \n> Create more than one index, and they're *all* held open after DROP.\n> \n> I see the same behavior in both 6.4.2 and 6.5-current.\n> \n> Does anyone have a good idea where to look for the resource leak?\n> I've never looked at table creation/destruction...\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 10 May 1999 11:01:02 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] DROP TABLE leaks file descriptors"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Tom, is this resolved?\n\nIt is not. The most visible symptom is masked as of last weekend:\nsince psort.c now uses virtual file descriptors rather than direct stdio\ncalls, the backend will not coredump for lack of file descriptors in\nCREATE INDEX. But it is still true that DROP TABLE leaves a virtual\nfile descriptor open for each index on the dropped table, and that's\na bug in my book.\n\nI don't know much about table creation/destruction, so was hoping to\nget someone else to look at it.\n\n\t\t\tregards, tom lane\n\n>> I have just finished characterizing a nasty little bug in table\n>> destruction. Do this:\n>> create table x (i int4);\n>> create index xindex on x (i);\n>> drop table x;\n>> The backend will now have one more open file than it had before\n>> (use lsof or similar tool to check). Repeat enough times, and\n>> the backend crashes for lack of file descriptors.\n>> \n>> It appears that the file that is left open is the index relation.\n>> The index is *not* open at the end of the CREATE INDEX command;\n>> apparently, DROP TABLE opens the index for some reason, and then\n>> forgets to close the file descriptor when it destroys the index.\n>> \n>> Create more than one index, and they're *all* held open after DROP.\n>> \n>> I see the same behavior in both 6.4.2 and 6.5-current.\n>> \n>> Does anyone have a good idea where to look for the resource leak?\n>> I've never looked at table creation/destruction...\n",
"msg_date": "Mon, 10 May 1999 13:18:03 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] DROP TABLE leaks file descriptors "
}
] |
[
{
"msg_contents": "\nApplied. Here is a new version of refint for us to use.\n\n\n> Hi Bruce .\n> \n> I send you a attach of my modified refint.c that\n> works with a new policy in cascade mode .\n> \n> Please Read README.MAX .\n> I do not know if you are the author of refint.c ,\n> but if not please tell me who is .\n> \n> \n> Thank you ( excuse me for my bad english) .\n> Massimo Lambertini [email protected]\n\n[application/x-gzip is not supported, skipping...]\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 10 May 1999 11:08:31 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: NEW REFINT.C"
}
] |
[
{
"msg_contents": "\nI have applied a new refint from Massimo. Can people check that and\nmake the needed fixes to it and trigger.sql?\n\n\n> \n> > It seems that virtually all platforms fail with misc and trigger\n> > regression tests.\n> > \n> The function check_primary_key in contrib/spi/refint.c was changed to take\n> an \n> additional obligatory argument automatic or dependent. This change was not\n> made in test/regress/sql/trigger.sql.\n> The error message in check_primary_key is in the wrong place thus \n> the misleading error msg: even number of arguments expected.\n> \n> I supply a patch so the error shows, but I don't supply a patch for\n> trigger.sql,\n> since I think possibly the code should be reviewed and changed to use a \n> default argument for action: dependent.\n> \n> \t <<refint.patch>> \n> > Are these bad news for 6.5beta? Or should we just update expected\n> > files?\n> > \n> No :-) No !\n> \n> Andreas\n> \n> \n\n[Attachment, skipping...]\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 10 May 1999 11:21:14 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: AW: [HACKERS] misc and triggers regression tests failed on 6.5bet\n\ta1"
}
] |
[
{
"msg_contents": "> Hey,\n> Got a couple of problems that could hopefully be fixed before 6.5 gets\n> released. This is straight from CVS april 5th.\n> \n> Prob #1: \n> DROP TABLE <table> doesn't removed \"extended\" files.\n> i.e., if you have 2GB db on Linux(i386), it will only delete the main\n> file and not the .1, .2, etc table files.\n\nAdded to TODO list.\n\n\n> \n> Prob #2:\n> While running postmaster at the command line like this:\n> /home/postgres/bin/postmaster -B 1024 -D/home/postgres/data -d 9 -o \"-S\n> 4096 -s -d 9 -A\"\n> \n> the current backend for the following CREATE TABLE would\n> die(consistently):\n> CREATE TABLE oletest (\n> id serial,\n> number int,\n> string varchar(255)\n> );\n> \n\nNot sure how to address this.\n\n\n\n> This was the only query it died on however. I have made huge indexes and\n> regular selects while running it the same way. There was only one\n> postgres backend running at the time.\n> \n> Thanks,\n> Ole Gjerde\n> \n> \n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 10 May 1999 11:37:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] v6.5 release ToDo"
},
{
"msg_contents": "> > Prob #1: \n> > DROP TABLE <table> doesn't removed \"extended\" files.\n> > i.e., if you have 2GB db on Linux(i386), it will only delete the main\n> > file and not the .1, .2, etc table files.\n\nI have looked at this.\nI made it so it rolled over files at 1MB. My table ended up with 120\nsegments, and my indexes had 3(Yes, it DOES work!).\nDROP TABLE removed ALL segments from the table, but only the main index\nsegment.\n\nSo it looks like removing the table itself is using mdunlink in md.c,\nwhile removing indexes uses FileNameUnlink() which only unlinks 1 file.\nAs far as I can tell, calling FileNameUnlink() and mdunlink() is basically\nthe same, except mdunlink() deletes any extra segments.\n\nI've done some testing and it seems to work. It also passes regression\ntests(except float8, geometry and rules, but that's normal).\n\nIf this patch is right, this fixes all known multi-segment problems on\nLinux.\n\nOle Gjerde\n\nPatch for index drop:\n--- src/backend/catalog/index.c\t1999/05/10 00:44:55\t1.71\n+++ src/backend/catalog/index.c\t1999/05/15 06:42:27\n@@ -1187,7 +1187,7 @@\n \t */\n \tReleaseRelationBuffers(userindexRelation);\n \n-\tif (FileNameUnlink(relpath(userindexRelation->rd_rel->relname.data)) < 0)\n+\tif (mdunlink(userindexRelation) != SM_SUCCESS)\n \t\telog(ERROR, \"amdestroyr: unlink: %m\");\n \n \tindex_close(userindexRelation);\n\n",
"msg_date": "Sat, 15 May 1999 01:47:21 -0500 (CDT)",
"msg_from": "Ole Gjerde <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] v6.5 release ToDo"
},
{
"msg_contents": "Applied. You have the correct patch. All other references to\nFileNameUnlink look correct, but the index.c one is clearly wrong. \nThanks.\n\nI believe we still have vacuuming of multi-segment tables as a problem.\n\n\n> > > Prob #1: \n> > > DROP TABLE <table> doesn't removed \"extended\" files.\n> > > i.e., if you have 2GB db on Linux(i386), it will only delete the main\n> > > file and not the .1, .2, etc table files.\n> \n> I have looked at this.\n> I made it so it rolled over files at 1MB. My table ended up with 120\n> segments, and my indexes had 3(Yes, it DOES work!).\n> DROP TABLE removed ALL segments from the table, but only the main index\n> segment.\n> \n> So it looks like removing the table itself is using mdunlink in md.c,\n> while removing indexes uses FileNameUnlink() which only unlinks 1 file.\n> As far as I can tell, calling FileNameUnlink() and mdunlink() is basically\n> the same, except mdunlink() deletes any extra segments.\n> \n> I've done some testing and it seems to work. It also passes regression\n> tests(except float8, geometry and rules, but that's normal).\n> \n> If this patch is right, this fixes all known multi-segment problems on\n> Linux.\n> \n> Ole Gjerde\n> \n> Patch for index drop:\n> --- src/backend/catalog/index.c\t1999/05/10 00:44:55\t1.71\n> +++ src/backend/catalog/index.c\t1999/05/15 06:42:27\n> @@ -1187,7 +1187,7 @@\n> \t */\n> \tReleaseRelationBuffers(userindexRelation);\n> \n> -\tif (FileNameUnlink(relpath(userindexRelation->rd_rel->relname.data)) < 0)\n> +\tif (mdunlink(userindexRelation) != SM_SUCCESS)\n> \t\telog(ERROR, \"amdestroyr: unlink: %m\");\n> \n> \tindex_close(userindexRelation);\n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 15 May 1999 18:31:58 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] v6.5 release ToDo"
},
{
"msg_contents": "I believe also we have:\n\n\tDROP TABLE/RENAME TABLE doesn't remove extended files, *.1, *.2\n\nas an open item. Do you see these problems there?\n\n> > > Prob #1: \n> > > DROP TABLE <table> doesn't removed \"extended\" files.\n> > > i.e., if you have 2GB db on Linux(i386), it will only delete the main\n> > > file and not the .1, .2, etc table files.\n> \n> I have looked at this.\n> I made it so it rolled over files at 1MB. My table ended up with 120\n> segments, and my indexes had 3(Yes, it DOES work!).\n> DROP TABLE removed ALL segments from the table, but only the main index\n> segment.\n> \n> So it looks like removing the table itself is using mdunlink in md.c,\n> while removing indexes uses FileNameUnlink() which only unlinks 1 file.\n> As far as I can tell, calling FileNameUnlink() and mdunlink() is basically\n> the same, except mdunlink() deletes any extra segments.\n> \n> I've done some testing and it seems to work. It also passes regression\n> tests(except float8, geometry and rules, but that's normal).\n> \n> If this patch is right, this fixes all known multi-segment problems on\n> Linux.\n> \n> Ole Gjerde\n> \n> Patch for index drop:\n> --- src/backend/catalog/index.c\t1999/05/10 00:44:55\t1.71\n> +++ src/backend/catalog/index.c\t1999/05/15 06:42:27\n> @@ -1187,7 +1187,7 @@\n> \t */\n> \tReleaseRelationBuffers(userindexRelation);\n> \n> -\tif (FileNameUnlink(relpath(userindexRelation->rd_rel->relname.data)) < 0)\n> +\tif (mdunlink(userindexRelation) != SM_SUCCESS)\n> \t\telog(ERROR, \"amdestroyr: unlink: %m\");\n> \n> \tindex_close(userindexRelation);\n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 15 May 1999 18:33:20 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] v6.5 release ToDo"
},
{
"msg_contents": "On Sat, 15 May 1999, Bruce Momjian wrote:\n> I believe also we have:\n> \tDROP TABLE/RENAME TABLE doesn't remove extended files, *.1, *.2\n> as an open item. Do you see these problems there?\n\nDROP TABLE worked, ALTER TABLE didn't.\n\nCREATE TABLE\nDROP TABLE\nCREATE INDEX\nDROP INDEX\nALTER TABLE old RENAME TO new\n\nAll works on linux now by my tests and regression(with patch below).\n\nPerhaps a mdrename() should be created? Or is something like this good\nenough?\n\nAnother thing. Should error messages from file related(or all system\ncalls) use strerror() to print out errno?\n\nOle Gjerde\n\n--- src/backend/commands/rename.c\t1999/05/10 00:44:59\t1.23\n+++ src/backend/commands/rename.c\t1999/05/15 23:42:49\n@@ -201,10 +201,13 @@\n void\n renamerel(char *oldrelname, char *newrelname)\n {\n+\tint\t\ti;\n \tRelation\trelrelation;\t/* for RELATION relation */\n \tHeapTuple\toldreltup;\n \tchar\t\toldpath[MAXPGPATH],\n-\t\t\t\tnewpath[MAXPGPATH];\n+\t\t\t\tnewpath[MAXPGPATH],\n+\t\t\t\ttoldpath[MAXPGPATH + 10],\n+\t\t\t\ttnewpath[MAXPGPATH + 10];\n \tRelation\tirelations[Num_pg_class_indices];\n \n \tif (!allowSystemTableMods && IsSystemRelationName(oldrelname))\n@@ -229,6 +232,14 @@\n \tstrcpy(newpath, relpath(newrelname));\n \tif (rename(oldpath, newpath) < 0)\n \t\telog(ERROR, \"renamerel: unable to rename file: %s\", oldpath);\n+\n+\tfor (i = 1;; i++)\n+\t{\n+\t\tsprintf(toldpath, \"%s.%d\", oldpath, i);\n+\t\tsprintf(tnewpath, \"%s.%d\", newpath, i);\n+\t\tif(rename(toldpath, tnewpath) < 0)\n+\t\t\tbreak;\n+\t}\n \n \tStrNCpy((((Form_pg_class) GETSTRUCT(oldreltup))->relname.data),\n \t\t\tnewrelname, NAMEDATALEN);\n\n",
"msg_date": "Sat, 15 May 1999 18:38:17 -0500 (CDT)",
"msg_from": "Ole Gjerde <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] v6.5 release ToDo"
},
{
"msg_contents": "[email protected] wrote (a couple weeks ago):\n> While running postmaster at the command line like this:\n> /home/postgres/bin/postmaster -B 1024 -D/home/postgres/data -d 9 -o \"-S\n> 4096 -s -d 9 -A\"\n\n> the current backend for the following CREATE TABLE would\n> die(consistently):\n> CREATE TABLE oletest (\n> id serial,\n> number int,\n> string varchar(255)\n> );\n\nAre you still seeing this with current sources? I'm not able to\nreplicate it here...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 16 May 1999 21:15:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] v6.5 release ToDo "
},
{
"msg_contents": "On Sun, 16 May 1999, Tom Lane wrote:\n> [email protected] wrote (a couple weeks ago):\n> > While running postmaster at the command line like this:\n> > /home/postgres/bin/postmaster -B 1024 -D/home/postgres/data -d 9 -o \"-S\n> > 4096 -s -d 9 -A\"\n> Are you still seeing this with current sources? I'm not able to\n> replicate it here...\n\nYep..\nIt appears to die while making the index for serial.\nPostmaster keeps running, but the current backend dies in pqReadData().\n\ngdb and postmaster output below.\n\nOle Gjerde\n\ngdb of core:\n#0 0x4013a30a in _IO_default_xsputn (f=0xbf8006f4, data=0x81377e0, n=20)\n at genops.c:382\n#1 0x40129980 in _IO_vfprintf (s=0xbf8006f4, \n format=0x81377e0 \" COLUMNDEF :colname %s :typename \", ap=0xbf800c08)\n at vfprintf.c:1048\n#2 0x40137d16 in _IO_vsnprintf (string=0xbf8007f8 \"\", maxlen=1024, \n format=0x81377e0 \" COLUMNDEF :colname %s :typename \", args=0xbf800c08)\n at vsnprintf.c:129\n#3 0x809ccfb in appendStringInfo ()\n#4 0x80b637b in _outColumnDef ()\n#5 0x80b8107 in _outNode ()\n#6 0x80b7d79 in _outNode ()\n#7 0x80b7cab in _outConstraint ()\n#8 0x80b84b7 in _outNode ()\n#9 0x80b7d79 in _outNode ()\n#10 0x80b63bb in _outColumnDef ()\n#11 0x80b8107 in _outNode ()\n#12 0x80b7d79 in _outNode ()\nAnd this keeps going and going and going..\n\n---------------------------------------------\npostmaster log:\nStartTransactionCommand\nquery: create table oletest (\nid serial,\nnumber int,\nstring varchar(255)\n);\nNOTICE: CREATE TABLE will create implicit sequence 'oletest_id_seq' for SERIAL column 'oletest.id'\nNOTICE: CREATE TABLE/UNIQUE will create implicit index 'oletest_id_key' for table 'oletest'\nparser outputs:\n{ QUERY \n :command 5 \n :utility ? 
\n :resultRelation 0 \n :into <> \n :isPortal false \n :isBinary false \n :isTemp false \n :unionall false \n :unique <> \n :sortClause <> \n :rtable <> \n :targetlist <> \n :qual <> \n :groupClause <> \n :havingQual <> \n :hasAggs false \n :hasSubLinks false \n :unionClause <> \n :intersectClause <> \n :limitOffset <> \n :limitCount <> \n :rowMark <>\n }\nparser outputs:\nbin/postmaster: reaping dead processes...\nbin/postmaster: CleanupProc: pid 593 exited with status 139\nbin/postmaster: CleanupProc: reinitializing shared memory and semaphores\nshmem_exit(0) [#0]\nbinding ShmemCreate(key=52e325, size=9343632)\n\n",
"msg_date": "Sun, 16 May 1999 21:35:45 -0500 (CDT)",
"msg_from": "Ole Gjerde <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] v6.5 release ToDo "
},
{
"msg_contents": "Ole Gjerde <[email protected]> writes:\n> gdb of core:\n> #0 0x4013a30a in _IO_default_xsputn (f=0xbf8006f4, data=0x81377e0, n=20)\n> at genops.c:382\n> #1 0x40129980 in _IO_vfprintf (s=0xbf8006f4, \n> format=0x81377e0 \" COLUMNDEF :colname %s :typename \", ap=0xbf800c08)\n> at vfprintf.c:1048\n> #2 0x40137d16 in _IO_vsnprintf (string=0xbf8007f8 \"\", maxlen=1024, \n> format=0x81377e0 \" COLUMNDEF :colname %s :typename \", args=0xbf800c08)\n> at vsnprintf.c:129\n> #3 0x809ccfb in appendStringInfo ()\n> #4 0x80b637b in _outColumnDef ()\n> #5 0x80b8107 in _outNode ()\n> #6 0x80b7d79 in _outNode ()\n> #7 0x80b7cab in _outConstraint ()\n> #8 0x80b84b7 in _outNode ()\n> #9 0x80b7d79 in _outNode ()\n> #10 0x80b63bb in _outColumnDef ()\n> #11 0x80b8107 in _outNode ()\n> #12 0x80b7d79 in _outNode ()\n> And this keeps going and going and going..\n\nHmm, that looks like a column constraint has somehow gotten recursively\nlinked back to its parent column definition node.\n\nI poked around in the code for serial-column constraints, and found that\nLockhart's last patch had a subtle bug --- he wrote more characters in\nthe constraint text without increasing the space palloc'd for the\nstring. That could maybe cause such a problem, depending on what\nhappened to be living next door to the string... But I don't really\nthink this explains your complaint, because according to the cvs log\nthat change was made on 5/13, and you reported a problem quite some time\nbefore that. Still, please fetch the current cvs sources or apply this\npatch:\n\n*** src/backend/parser/analyze.c.orig\tSun May 16 10:29:33 1999\n--- src/backend/parser/analyze.c\tMon May 17 00:50:07 1999\n***************\n*** 545,551 ****\n \t\t\t\t\tconstraint = makeNode(Constraint);\n \t\t\t\t\tconstraint->contype = CONSTR_DEFAULT;\n \t\t\t\t\tconstraint->name = sname;\n! 
\t\t\t\t\tcstring = palloc(9 + strlen(constraint->name) + 2 + 1);\n \t\t\t\t\tstrcpy(cstring, \"nextval('\\\"\");\n \t\t\t\t\tstrcat(cstring, constraint->name);\n \t\t\t\t\tstrcat(cstring, \"\\\"')\");\n--- 545,551 ----\n \t\t\t\t\tconstraint = makeNode(Constraint);\n \t\t\t\t\tconstraint->contype = CONSTR_DEFAULT;\n \t\t\t\t\tconstraint->name = sname;\n! \t\t\t\t\tcstring = palloc(10 + strlen(constraint->name) + 3 + 1);\n \t\t\t\t\tstrcpy(cstring, \"nextval('\\\"\");\n \t\t\t\t\tstrcat(cstring, constraint->name);\n \t\t\t\t\tstrcat(cstring, \"\\\"')\");\n\nand let us know if anything changes...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 May 1999 01:03:50 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] v6.5 release ToDo "
},
{
"msg_contents": "On Mon, 17 May 1999, Tom Lane wrote:\n> I poked around in the code for serial-column constraints, and found that\n> Lockhart's last patch had a subtle bug --- he wrote more characters in\n> the constraint text without increasing the space palloc'd for the\n> string. That could maybe cause such a problem, depending on what\n> happened to be living next door to the string... But I don't really\n> think this explains your complaint, because according to the cvs log\n> that change was made on 5/13, and you reported a problem quite some time\n> before that. Still, please fetch the current cvs sources or apply this\n> patch:\n[snip]\n> and let us know if anything changes...\n\nYep, that takes care of it! Thanks\n\nOle Gjerde\n\n",
"msg_date": "Mon, 17 May 1999 00:11:09 -0500 (CDT)",
"msg_from": "Ole Gjerde <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] v6.5 release ToDo "
},
{
"msg_contents": "> On Sat, 15 May 1999, Bruce Momjian wrote:\n> > I believe also we have:\n> > \tDROP TABLE/RENAME TABLE doesn't remove extended files, *.1, *.2\n> > as an open item. Do you see these problems there?\n> \n> DROP TABLE worked, ALTER TABLE didn't.\n> \n> CREATE TABLE\n> DROP TABLE\n> CREATE INDEX\n> DROP INDEX\n> ALTER TABLE old RENAME TO new\n> \n> All works on linux now by my tests and regression(with patch below).\n\nApplied.\n\n\n> \n> Perhaps a mdrename() should be created? Or is something like this good\n> enough?\n\nI think this is good enough for now. Do people want an mdrename?\n\n> \n> Another thing. Should error messages from file related(or all system\n> calls) use strerror() to print out errno?\n> \n\n\nSeems like in the code you have, you just keep renaming until you can't\nfind any more files, so printing out any errno would be a problem,\nright?\n\nI assume you are taling about the initial rename. Not sure if strerror\nwould help. We really try and insulate the user from knowing how we are\ndoing the SQL we do, so it is possible it may be confusing. However, it\nmay be very helpful too. Not sure. 
Comments?\n\n\n> Ole Gjerde\n> \n> --- src/backend/commands/rename.c\t1999/05/10 00:44:59\t1.23\n> +++ src/backend/commands/rename.c\t1999/05/15 23:42:49\n> @@ -201,10 +201,13 @@\n> void\n> renamerel(char *oldrelname, char *newrelname)\n> {\n> +\tint\t\ti;\n> \tRelation\trelrelation;\t/* for RELATION relation */\n> \tHeapTuple\toldreltup;\n> \tchar\t\toldpath[MAXPGPATH],\n> -\t\t\t\tnewpath[MAXPGPATH];\n> +\t\t\t\tnewpath[MAXPGPATH],\n> +\t\t\t\ttoldpath[MAXPGPATH + 10],\n> +\t\t\t\ttnewpath[MAXPGPATH + 10];\n> \tRelation\tirelations[Num_pg_class_indices];\n> \n> \tif (!allowSystemTableMods && IsSystemRelationName(oldrelname))\n> @@ -229,6 +232,14 @@\n> \tstrcpy(newpath, relpath(newrelname));\n> \tif (rename(oldpath, newpath) < 0)\n> \t\telog(ERROR, \"renamerel: unable to rename file: %s\", oldpath);\n> +\n> +\tfor (i = 1;; i++)\n> +\t{\n> +\t\tsprintf(toldpath, \"%s.%d\", oldpath, i);\n> +\t\tsprintf(tnewpath, \"%s.%d\", newpath, i);\n> +\t\tif(rename(toldpath, tnewpath) < 0)\n> +\t\t\tbreak;\n> +\t}\n> \n> \tStrNCpy((((Form_pg_class) GETSTRUCT(oldreltup))->relname.data),\n> \t\t\tnewrelname, NAMEDATALEN);\n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 17 May 1999 14:28:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] v6.5 release ToDo"
},
{
"msg_contents": "> Lockhart's last patch had a subtle bug\n\n:( Thanks for fixing it...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 18 May 1999 03:43:07 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] v6.5 release ToDo"
}
] |
[
{
"msg_contents": "\nAny comments on this?\n\n\n> \n> Following is I believe evidence of a pretty bad bug in postgres. This is\n> the 990329 snapshot.\n> \n> When doing an insert into a table where one of the fields comes from a\n> SELECT from another table, it seems that if the table has the \"*\" after\n> it, to indicate sub-classes it doesn't work.\n> \n> The database is fresh and there are no indexes.\n> \n> The table looks like this....\n> \n> CREATE TABLE category (\n> name text,\n> image text,\n> url text,\n> parent oid\n> );\n> \n> \n> httpd=> select * from category;\n> name |image|url|parent\n> --------+-----+---+------\n> foo |foo | | 0\n> bar |bar | |158321\n> Products|.gif | | \n> (3 rows)\n> \n> httpd=> select oid, * FROM category* where name = 'foo';\n> oid|name|image|url|parent\n> ------+----+-----+---+------\n> 158321|foo |foo | | 0\n> (1 row)\n> \n> httpd=> insert into category(name, image, parent) SELECT 'boo', 'boo',\n> oid FROM category* where name = 'foo';\n> INSERT 158370 1\n> httpd=> select * from category;\n> name |image|url|parent\n> --------+-----+---+------\n> foo |foo | | 0\n> bar |bar | |158321\n> Products|.gif | | \n> (3 rows)\n> \n> Ok, what's going on here. The 'boo' record did not appear!\n> \n> httpd=> insert into category(name, image, parent) SELECT 'boo', 'boo',\n> oid FROM category where name = 'foo';\n> INSERT 158374 1\n> httpd=> select * from category;\n> name |image|url|parent\n> --------+-----+---+------\n> foo |foo | | 0\n> bar |bar | |158321\n> Products|.gif | | \n> boo |boo | |158321\n> (4 rows)\n> \n> We dropped the \"*\" from the FROM clause and now the record does appear.\n> \n> A bug?\n> \n> \n> -- \n> Chris Bitmead\n> http://www.bigfoot.com/~chris.bitmead\n> mailto:[email protected]\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 10 May 1999 11:48:21 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Pretty bad bug in Postgres."
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Any comments on this?\n> \n> >\n> > Following is I believe evidence of a pretty bad bug in postgres. This is\n> > the 990329 snapshot.\n ^^^^^^\nSnapshot's too old?\n\n> > httpd=> insert into category(name, image, parent) SELECT 'boo', 'boo',\n> > oid FROM category* where name = 'foo';\n> > INSERT 158370 1\n> > httpd=> select * from category;\n> > name |image|url|parent\n> > --------+-----+---+------\n> > foo |foo | | 0\n> > bar |bar | |158321\n> > Products|.gif | |\n> > (3 rows)\n> >\n> > Ok, what's going on here. The 'boo' record did not appear!\n\nI can't reproduce this in current...\n\nVadim\n",
"msg_date": "Tue, 11 May 1999 00:23:04 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Pretty bad bug in Postgres."
}
] |
[
{
"msg_contents": "\nDone.\n\n\n> The version of the Python interface, PyGreSQL, is still 2.2 in the \n> src/interfaces/python directory in the current CVS. The current version\n> of PyGreSQL is 2.3, available from http://www.druid.net/pygresql/:\n> \n> Important changes for 2.3\n> \n> connect.host returns \"localhost\" when connected to Unix socket \n> ([email protected])\n> Use PyArg_ParseTupleAndKeywords in connect() ([email protected]) \n> fixes and cleanups ([email protected]) \n> Fixed memory leak in dictresult() ([email protected]) \n> Deprecated pgext.py - functionality now in pg.py \n> More cleanups to the tutorial \n> Added fileno() method - [email protected] (Mikhail Terekhov) \n> added money type to quoting function \n> Compiles cleanly with more warnings turned on \n> Returns PostgreSQL error message on error \n> Init accepts keywords (Jarkko Torppa) \n> Convenience functions can be overridden (Jarkko Torppa) \n> added close() method \n> \n> Can this be updated before 6.5 is released?\n> \n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 10 May 1999 12:02:23 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Python interface is out of date"
},
{
"msg_contents": "Thus spake Bruce Momjian\n> > The version of the Python interface, PyGreSQL, is still 2.2 in the \n> > src/interfaces/python directory in the current CVS. The current version\n> > of PyGreSQL is 2.3, available from http://www.druid.net/pygresql/:\n\nHeh. Now I'm about to release version 3.0. I suspect it won't be\nready in time but I suppose that's the sort of thing that can go in after\nrelease, right? There are a few bugs in 2.3, mostly relating to money.\nThere are also a few new features.\n\n - Insert returns None if the user doesn't have select permissions\n on the table. It can (and does) happen that one has insert but\n not select permissions on a table.\n - Added ntuples() method to query object ([email protected])\n - Corrected a bug related to getresult() and the money type\n - Corrected a but related to negative money amounts\n - Allow update based on primary key if munged oid not available\n - Add many __doc__ strings. ([email protected])\n\nHopefully we will have the new DB-SIG interface ready soon and 3.0 will\nbe released. In the meantime, the beta versions are generally pretty\nstable if anyone wants to use them. The latest beta is always here.\n\n ftp://ftp.druid.net/pub/distrib/PyGreSQL-beta.tgz\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Mon, 10 May 1999 13:46:07 -0400 (EDT)",
"msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Python interface is out of date"
}
] |
[
{
"msg_contents": "\nAnyone on this one?\n\n\n> I have been chasing some of the various bug reports involving HAVING\n> clauses in sub-SELECTs. A couple of examples are:\n> \n> select name from t1 where name in\n> (select name from t1 group by name having count(*) = 2);\n> \n> ERROR: rewrite: aggregate column of view must be at rigth side in qual\n> \n> select name from t1 where name in\n> (select name from t1 group by name having 2 = count(*));\n> \n> ERROR: This could have been done in a where clause!!\n> \n> \n> I think that both of these errors are at least partially the fault of\n> rewriteHandler.c. The first message is coming from\n> modifyAggrefMakeSublink(). It looks like the code simply doesn't bother\n> to handle the case where the aggregate is on the left-hand side ---\n> is there a reason for that?\n> \n> The second one is more subtle. What is happening is that in the rewrite\n> step, modifyAggrefQual() scans the outer WHERE clause all the way down\n> into the sub-SELECT, where it finds an occurrence of count(*) and\n> replaces it by a parameter. The reported error comes when later\n> processing of the sub-SELECT finds that its having clause contains no\n> aggregate functions anymore.\n> \n> modifyAggrefQual()'s behavior would be correct if we wanted to assume\n> that the count() aggregate is associated with the *outer* SELECT and\n> is being propagated into the inner select as a constant. But that's\n> not the most reasonable reading of this query, IMHO (unless it is\n> mandated by some requirement of SQL92?). Even more to the point, the\n> rest of the parser thinks that aggregates are not allowed in WHERE\n> clauses:\n> \n> select name from t1 where 2 = count(*);\n> ERROR: Aggregates not allowed in WHERE clause\n> \n> which agrees with my understanding of the semantics. 
So why is\n> modifyAggrefQual() searching the outer select's WHERE clause in the\n> first place?\n> \n> This leads to a definitional question: should it be possible to refer\n> to an aggregate on the outer SELECT inside a sub-SELECT, and if so how?\n> I tried\n> \n> select name from t1 as outer1 group by name having name in\n> (select name from t1 as inner1 having\n> count(inner1.name) = count(outer1.name) );\n> ERROR: Illegal use of aggregates or non-group column in target list\n> \n> but as you can see, the system did not take the hint.\n> \n> So, several probable bugs in rewrite:\n> * omitted support for aggregate on lefthand side\n> * shouldn't be looking for aggregates in WHERE clause\n> * should be distinguishing which level of query an aggregate is\n> associated with\n> \n> But I'm not familiar enough with rewrite to want to start hacking on it.\n> Anyone?\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 10 May 1999 12:11:09 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Some info about subselect/having problems"
},
{
"msg_contents": "Bruce Momjian wrote:\n>\n>\n> Anyone on this one?\n>\n>\n> > I have been chasing some of the various bug reports involving HAVING\n> > clauses in sub-SELECTs. A couple of examples are:\n> >\n> > select name from t1 where name in\n> > (select name from t1 group by name having count(*) = 2);\n> >\n> > ERROR: rewrite: aggregate column of view must be at rigth side in qual\n> >\n> > select name from t1 where name in\n> > (select name from t1 group by name having 2 = count(*));\n> >\n> > ERROR: This could have been done in a where clause!!\n> >\n> >\n> > I think that both of these errors are at least partially the fault of\n> > rewriteHandler.c. The first message is coming from\n> > modifyAggrefMakeSublink(). It looks like the code simply doesn't bother\n> > to handle the case where the aggregate is on the left-hand side ---\n> > is there a reason for that?\n\n Yes. The SubLink node needs an Expr on the left-hand side. At\n the time I implemented the modifyAggrefMakeSublink() (which\n is still something I don't like because it's bogus when it\n comes to user defined GROUP BY clauses), the pg_operator\n class was in a very bad state WRT the negator/commutator\n operators. Now that pg_operator is fixed, we could swap the\n sides and use the negator instead. But...\n\n> >\n> > The second one is more subtle. What is happening is that in the rewrite\n> > step, modifyAggrefQual() scans the outer WHERE clause all the way down\n> > into the sub-SELECT, where it finds an occurrence of count(*) and\n> > replaces it by a parameter. The reported error comes when later\n> > processing of the sub-SELECT finds that its having clause contains no\n> > aggregate functions anymore.\n> >\n> > modifyAggrefQual()'s behavior would be correct if we wanted to assume\n> > that the count() aggregate is associated with the *outer* SELECT and\n> > is being propagated into the inner select as a constant. 
But that's\n> > not the most reasonable reading of this query, IMHO (unless it is\n> > mandated by some requirement of SQL92?). Even more to the point, the\n> > rest of the parser thinks that aggregates are not allowed in WHERE\n> > clauses:\n> >\n> > select name from t1 where 2 = count(*);\n> > ERROR: Aggregates not allowed in WHERE clause\n> >\n> > which agrees with my understanding of the semantics. So why is\n> > modifyAggrefQual() searching the outer select's WHERE clause in the\n> > first place?\n\n Right so far. The searching is done because the aggregate\n could be the result of a previous view rewrite.\n\n CREATE VIEW v1 AS SELECT a, count(b) AS n FROM t1\n GROUP BY a;\n\n SELECT * FROM v1 WHERE 2 = n;\n\n Again this one is bogus (doing it in a join with some totally\n different grouping). It was just a first step to make\n something working. Again the final solution would only be a\n subselecting RTE.\n\n Aggregates in views are still a good way to show the limits\n of the rewrite system.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 10 May 1999 19:07:20 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Some info about subselect/having problems"
}
] |
[
{
"msg_contents": "\nWhat did we decide on this?\n\n> Hello,\n> \n> For What Its Worth:\n> \n> I am just \"Joe User\" so please forgive my ignorance.\n> I have a patch for 6.5 which implements the Oracle\n> TRUNCATE statement.\n> \n> >From the Oracle Server 7 manual...\n> \n> You can use the TRUNCATE command to quickly remove\n> all rows from a table. Removing rows with the \n> TRUNCATE command is faster than removing rows with \n> the DELETE command for these reasons:\n> \n> 1] The TRUNCATE command is a Data Definition Language \n> command and generates no rollback information.\n> \n> 2] Truncating a table does not fire the table's\n> DELETE triggers.\n> \n> Deleting rows with the TRUNCATE command is also more \n> convienient for these reasons:\n> \n> 1] Dropping and recreating invalidates the table's\n> dependent objects, while truncating does not.\n> \n> 2] Dropping and recreating requires you to regrant\n> object privileges while truncating does not.\n> \n> 3] Dropping and recreating requires you to recreate\n> the table's indexes and integrity constraints \n> while truncating does not.\n> \n> You cannot rollback a TRUNCATE statement.\n> \n> ....\n> \n> In addition, using the TRUNCATE statement on large\n> tables before a vacuum dramatically reduces \n> vacuuming times, since vacuum no longer needs to \n> perform large index deletes (row by row) for a newly\n> emptied table.\n> \n> For example, on my Linux RedHat 90Mhz Pentium, 48M\n> RAM, a DELETE on a 30K row table tabkes approx.\n> 5 seconds. Vacuuming the table takes minutes and\n> consumes all RAM on the machine. The TRUNCATE\n> command, however, is instantaneous.\n> \n> Anyways, what should I do with this patch? Is this \n> something people would want? We do large imports of\n> mainframe datasets into tables on a nightly basis.\n> We intend to grant select privileges on these tables \n> to a large base of users (a network of hospitals),\n> which will be using the system 24 hours a day, \n> 7 days a week. 
The TRUNCATE command is used to make\n> administration of privileges more sane, allow for\n> referential integrity triggers (check_primary_key)\n> to be used on a table which needs to be \"refreshed\"\n> on a nightly basis, and allows for faster processing.\n> \n> It patches cleanly against 6.5beta, and I have a \n> patch for 6.4 as well...\n> \n> What should I do?\n> \n> Marcus Mascari ([email protected])\n> \n> \n> \n> \n> \n> \n> \n> _________________________________________________________\n> Do You Yahoo!?\n> Get your free @yahoo.com address at http://mail.yahoo.com\n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 10 May 1999 12:13:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Oracle TRUNCATE statement"
}
] |
[
{
"msg_contents": "> > SQL92 uses TEMPORARY instead, I think it would be \n> > interesting to keep compatibility with SQL Standard.\n> > CREATE [{ GLOBAL | LOCAL } TEMPORARY ] TABLE class_name\n> \n> Bruce, is this OK? If so, I'll put it in (if someone else hasn't\n> already done so).\n\nWe only support LOCAL, I think, because the visibility of the table is\nonly local.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 10 May 1999 12:14:08 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] CREATE TEMP TABLE"
},
{
"msg_contents": "> > > SQL92 uses TEMPORARY instead, I think it would be\n> > > interesting to keep compatibility with SQL Standard.\n> > > CREATE [{ GLOBAL | LOCAL } TEMPORARY ] TABLE class_name\n> > Bruce, is this OK? If so, I'll put it in (if someone else hasn't\n> > already done so).\n> We only support LOCAL, I think, because the visibility of the table is\n> only local.\n\nOK. I've got patches to include the LOCAL/GLOBAL syntax, and an elog\nerror if GLOBAL is specified. Will apply soon...\n\n - Tom\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Mon, 10 May 1999 17:29:44 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] CREATE TEMP TABLE"
}
] |
[
{
"msg_contents": "\n\nThis is so weird, I can't even explain it.\n\n\n> \n> Does the following indicate a bug? It sure is wierd. Maybe some of these\n> statements aren't supported by postgresql (??), but the outcome doesn't\n> make sense to me.\n> \n> httpd=> CREATE TABLE x (y text);\n> CREATE\n> httpd=> CREATE VIEW z AS select * from x;\n> CREATE\n> httpd=> CREATE TABLE a (b text) INHERITS(z);\n> CREATE\n> httpd=> INSERT INTO x VALUES ('foo');\n> INSERT 168602 1\n> httpd=> select * from z*;\n> y \n> ---\n> foo\n> foo\n> (2 rows)\n> \n> How did we suddenly get two rows??\n> \n> -- \n> Chris Bitmead\n> http://www.bigfoot.com/~chris.bitmead\n> mailto:[email protected]\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 10 May 1999 12:15:42 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Heads up: does RULES regress test still work for you?"
},
{
"msg_contents": ">\n>\n>\n> This is so weird, I can't even explain it.\n\n I can reproduce it - yes totally weird :-)\n\n Can only guess where the problem might be, because I'm not\n familiar with inheritance and the underlying code. I think\n it's the fact that after rewriting the wrong RTE (that one of\n the view z which isn't referenced any more) is marked 'inh\n true'.\n\n Seems that the inheritance is resolved AFTER the rewriting\n somewhere in the planner. If that's true, inheriting of views\n might become a very tricky (maybe impossible) thing at all.\n How could someone inherit a join? Which RTE's of a view\n should be marked 'inh true' then?\n\n>\n>\n> >\n> > Does the following indicate a bug? It sure is wierd. Maybe some of these\n> > statements aren't supported by postgresql (??), but the outcome doesn't\n> > make sense to me.\n> >\n> > httpd=> CREATE TABLE x (y text);\n> > CREATE\n> > httpd=> CREATE VIEW z AS select * from x;\n> > CREATE\n> > httpd=> CREATE TABLE a (b text) INHERITS(z);\n> > CREATE\n> > httpd=> INSERT INTO x VALUES ('foo');\n> > INSERT 168602 1\n> > httpd=> select * from z*;\n> > y\n> > ---\n> > foo\n> > foo\n> > (2 rows)\n> >\n> > How did we suddenly get two rows??\n> >\n> > --\n> > Chris Bitmead\n> > http://www.bigfoot.com/~chris.bitmead\n> > mailto:[email protected]\n> >\n> >\n>\n>\n> --\n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n>\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 25 May 1999 15:47:49 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Heads up: does RULES regress test still work for you?"
},
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n>> This is so weird, I can't even explain it.\n\n> I can reproduce it - yes totally weird :-)\n\n>>>> Does the following indicate a bug? It sure is wierd. Maybe some of these\n>>>> statements aren't supported by postgresql (??), but the outcome doesn't\n>>>> make sense to me.\n>>>> \n>>>> httpd=> CREATE TABLE x (y text);\n>>>> CREATE\n>>>> httpd=> CREATE VIEW z AS select * from x;\n>>>> CREATE\n>>>> httpd=> CREATE TABLE a (b text) INHERITS(z);\n>>>> CREATE\n>>>> httpd=> INSERT INTO x VALUES ('foo');\n>>>> INSERT 168602 1\n>>>> httpd=> select * from z*;\n>>>> y\n>>>> ---\n>>>> foo\n>>>> foo\n>>>> (2 rows)\n>>>> \n>>>> How did we suddenly get two rows??\n\nIs it a bug? It looks to me like a inherited z's view of x (and thus\nwhen you selected from both a and z by using \"z*\", you got x's contents\ntwice). Is that wrong? If so, why?\n\nIf inheriting from a view is allowed at all (maybe it shouldn't be),\nthen I'd *expect* the view-ness to be inherited.\n\nOffhand, given that tables and views are different kinds of things,\nallowing a table to inherit from a view seems like a bad idea.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 25 May 1999 10:37:57 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Heads up: does RULES regress test still work for you? "
},
{
"msg_contents": "\nAdded to TODO list.\n\n\n> \n> Does the following indicate a bug? It sure is wierd. Maybe some of these\n> statements aren't supported by postgresql (??), but the outcome doesn't\n> make sense to me.\n> \n> httpd=> CREATE TABLE x (y text);\n> CREATE\n> httpd=> CREATE VIEW z AS select * from x;\n> CREATE\n> httpd=> CREATE TABLE a (b text) INHERITS(z);\n> CREATE\n> httpd=> INSERT INTO x VALUES ('foo');\n> INSERT 168602 1\n> httpd=> select * from z*;\n> y \n> ---\n> foo\n> foo\n> (2 rows)\n> \n> How did we suddenly get two rows??\n> \n> -- \n> Chris Bitmead\n> http://www.bigfoot.com/~chris.bitmead\n> mailto:[email protected]\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 28 Sep 1999 15:53:40 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Heads up: does RULES regress test still work for you?"
}
] |
[
{
"msg_contents": "\nDo we have a problem here? Can someone explain it? Is it the\nconversion of the types?\n\n\n> \n> Hi,\n> \n> I have a fairly big table (a tacacs log) of about 250,000 tuples.\n> I created a new log table with more rows and with different types (for example\n> some fields have changed from int4 to int8 or from varchar to inet).\n> \n> I tryied to copy all the data from one table to the other using\n> \n> INSERT INTO log SELECT list_of_fields FROM log2;\n> \n> list_of_fields is an ordered list of the fields to import from log2 and default\n> values to insert into log (mostly nulls).\n> \n> If I try to insert all the 250,000 tuples, postgres eats all my memory and\n> fails.\n> If I try to insert a subset (20,000 tuples), I saw the memory usage grow up to\n> 18 MB and it succeded.\n> \n> It looks like postgres tryies to put the result of the SELECT in memory before\n> starting to INSERT.\n> \n> This makes INSERT almost unusable for bulk copying.\n> \n> I found another problem... there's apparently no conversion function from\n> varchar to inet... how can I do the conversion ?\n> \n> Here's the SQL statement:\n> \n> insert into log select username, server, pop, remaddr, port, service, NULL,\n> privilege, authenmethod, authentype, authenservice, logtime, starttime,\n> elapsedtime, bytesin, bytesout, paksin, paksout, callerid, callednumber, NULL,\n> NULL, NULL, NULL, NULL, NULL from log2;\n> \n> Tryied on 6.4.2 and 6.5beta1 on Linux 2.2.6\n> \n> Bye!\n> \n> -- \n> Daniele\n> \n> -------------------------------------------------------------------------------\n> Daniele Orlandi - Utility Line Italia - http://www.orlandi.com\n> Via Mezzera 29/A - 20030 - Seveso (MI) - Italy\n> -------------------------------------------------------------------------------\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 10 May 1999 12:18:55 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] INSERT INTO ... SELECT eats all my memory"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Do we have a problem here? Can someone explain it? Is it the\n> conversion of the types?\n> \n> >\n> > insert into log select username, server, pop, remaddr, port, service, NULL,\n> > privilege, authenmethod, authentype, authenservice, logtime, starttime,\n> > elapsedtime, bytesin, bytesout, paksin, paksout, callerid, callednumber, NULL,\n> > NULL, NULL, NULL, NULL, NULL from log2;\n\nEXPLAIN VERBOSE _query_above_\n\n?\n\nVadim\n",
"msg_date": "Tue, 11 May 1999 00:39:22 +0800",
"msg_from": "Vadim Mikheev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] INSERT INTO ... SELECT eats all my memory"
}
] |
[
{
"msg_contents": "Due to a rare OS crash while my mailbox was being re-synced, I have lost\nall messages sent to me in the past 12 hours. If anyone sent anything\nto me personally and not to the list, could you please resend it.\n\nSorry.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 10 May 1999 13:46:33 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Lost my mailbox"
},
{
"msg_contents": "> Due to a rare OS crash while my mailbox was being re-synced, I have lost\n> all messages sent to me in the past 12 hours. If anyone sent anything\n> to me personally and not to the list, could you please resend it.\n> \n> Sorry.\n\nJust a quick followup. Don't panic. I have not lost any of the items\nfrom yesterday or before, nor have I lost any of the 6.5 TODO items I am\nabout to submit to the hacker list. (I do a backup at 3am every night,\nso I just restored it. It is a actually a file on another machine, so\nI restored it in <10 minutes).\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 10 May 1999 14:36:44 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Lost my mailbox"
}
] |
[
{
"msg_contents": " Who needs the pretty printed output of nodeDisplay() ?\n\n I find the output of nodeToString() much more useful when\n debugging the rewrite system (especially when looking at\n querytree's resulting from joins with many RTE's, Resdom's,\n TLE's plus qual, group by etc. clauses. It's hard to keep\n track where I am if the entire tree spans hundreds of lines.\n\n Thus I would like to change the output of -d4 back to that of\n nodeToString(). Anyone against?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Mon, 10 May 1999 20:38:34 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Querytree printing"
},
{
"msg_contents": "> Who needs the pretty printed output of nodeDisplay() ?\n> \n> I find the output of nodeToString() much more useful when\n> debugging the rewrite system (especially when looking at\n> querytree's resulting from joins with many RTE's, Resdom's,\n> TLE's plus qual, group by etc. clauses. It's hard to keep\n> track where I am if the entire tree spans hundreds of lines.\n> \n> Thus I would like to change the output of -d4 back to that of\n> nodeToString(). Anyone against?\n\nI like the long pretty printing, with the indentation. You are\nsuggesting going to the more compressed format. Can we have -d4 do\ncompressed printing, and -d5 to pretty printing? That would be good.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 10 May 1999 15:28:35 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Querytree printing"
},
{
"msg_contents": "Bruce Momjian wrote:\n>\n> > Who needs the pretty printed output of nodeDisplay() ?\n> >\n> > I find the output of nodeToString() much more useful when\n> > debugging the rewrite system (especially when looking at\n> > querytree's resulting from joins with many RTE's, Resdom's,\n> > TLE's plus qual, group by etc. clauses. It's hard to keep\n> > track where I am if the entire tree spans hundreds of lines.\n> >\n> > Thus I would like to change the output of -d4 back to that of\n> > nodeToString(). Anyone against?\n>\n> I like the long pretty printing, with the indentation. You are\n> suggesting going to the more compressed format. Can we have -d4 do\n> compressed printing, and -d5 to pretty printing? That would be good.\n\n Yepp - that's the right solution. Done soon!\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 11 May 1999 11:00:45 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Querytree printing"
}
] |
[
{
"msg_contents": "> > > I would like for you to also consider adding the following to gram.y for\n> > > version 6.5:\n> > > | NULL_P '=' a_expr\n> > > { $$ = makeA_Expr(ISNULL, NULL, $3, NULL); }\n> > > I know there was some discussion about this earlier including comments\n> > > against this. Access 97 is now generating the following statement and\n> > > error...\n> \n> I'm not certain that this patch should survive. There are at least two\n> other places in the parser which should be modified for symmetry (the\n> \"b_expr\" and the default expressions) and I recall that these lead to\n> more shift/reduce conflicts. Remember that shift/reduce conflicts\n> indicate that some portion of the parser logic can *never* be reached,\n> which means that some feature (perhaps the new one, or perhaps an\n> existing one) is disabled.\n\n\nYes, that is true. There are several cases where we check just for =\nNULL and not NULL = in the internals, not the grammar.\n\n> \n> There is currently a single shift/reduce conflict in gram.y, and I'm\n> suprised to find that it is *not* due to the \"NULL_P '=' a_expr\" line.\n\nYep. I got that working with precidence for NULL, I think.\n\n> I'm planning on touching gram.y to hunt down the shift/reduce conflict\n> (from previous work I think it in Stefan's \"parens around selects\"\n> mods), and I'll look at the NULL_P issue again also.\n> \n> I'll reiterate something which everyone probably knows: \"where NULL =\n> expr\" is *not* standard SQL92, and any company selling products which\n> implement this rather than the standard \"where expr is NULL\" should\n> make your \"don't buy\" list, rather than your \"only buy\" list, which is\n> what they are trying to force you to do :(\n\nYes, but some of the tools output that, and I think that was the\ncomplaint. 
I can go either way on this.\n\n> \n> - Tom\n\nAny chance of making your signature Thomas, to not confuse it with Tom\nLane?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 10 May 1999 15:01:30 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "NULL = col"
},
{
"msg_contents": "> Yes, that is true. There are several cases where we check just for =\n> NULL and not NULL = in the internals, not the grammar.\n\nThat part is probably OK, since both statements are internalized to be\nthe same.\n\n> > There is currently a single shift/reduce conflict in gram.y, and I'm\n> > suprised to find that it is *not* due to the \"NULL_P '=' a_expr\" line.\n> Yep. I got that working with precidence for NULL, I think.\n\nHmm.\n\n> Any chance of making your signature Thomas, to not confuse it with Tom\n> Lane?\n\nI'm trying to, but it's *so* many letters to type...\n\n - Tom^H^H^Hhomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 11 May 1999 03:23:34 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: NULL = col"
}
] |
[
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> Given\n> >> create table t1 (name text, value float8);\n> >> \n> >> this fails:\n> >> SELECT name, value FROM t1 as touter WHERE\n> >> (value/(SELECT AVG(value) FROM t1 WHERE name = touter.name)) > 0.75;\n> >> ERROR: parser: '/' must return 'bool' to be used with subquery\n> \n> > Aren't you really saying WHERE col / (subselect). That doesn't return\n> > bool, so the message seems correct.\n> \n> No, look again: the result of the subselect is being used as an operand\n> within the WHERE clause:\n> \tWHERE (value/(SUBSELECT)) > 0.75;\n> \n> If the / were the toplevel operator in the WHERE then the message would\n> make sense, because the WHERE clause as a whole must yield boolean.\n> But that doesn't mean that the operator immediately above the subselect\n> must yield boolean.\n> \n> Besides, I already fixed this ;-)\n> \n> > We don't allow subselects in target lists, or inside expressions.\n> \n> We don't allow 'em in target lists, I think (anyone understand why not?)\n> but they work fine inside expressions in WHERE or HAVING.\n\nGee, I didn't know they worked inside an expression, but now looking\nat the grammar, I remember seeing that they should.\n\nAdded to TODO list.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 10 May 1999 15:07:51 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "subselects"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> If the / were the toplevel operator in the WHERE then the message would\n>> make sense, because the WHERE clause as a whole must yield boolean.\n>> But that doesn't mean that the operator immediately above the subselect\n>> must yield boolean.\n>> \n>> Besides, I already fixed this ;-)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n> Added to TODO list.\n\nAdded what to the TODO list? The particular behavior complained of\nis history...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 10 May 1999 15:52:04 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: subselects "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> If the / were the toplevel operator in the WHERE then the message would\n> >> make sense, because the WHERE clause as a whole must yield boolean.\n> >> But that doesn't mean that the operator immediately above the subselect\n> >> must yield boolean.\n> >> \n> >> Besides, I already fixed this ;-)\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> \n> > Added to TODO list.\n> \n> Added what to the TODO list? The particular behavior complained of\n> is history...\n\nOh, I thought you had already fixed this long ago, but it was now being\nreported as broken. Removed from list.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 10 May 1999 16:30:18 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: subselects"
}
] |
[
{
"msg_contents": "I realize that supporting Access97 is going to be a pain and somewhat\nundesirable, but I think that there are a lot of people like me who choose\nAccess97 because it seemed to be the best tool for the job. I am not overly\nsatisfied with Access97 but it gets the job done. Now, I would like to use\na real database and I don't want to invest any more $ in Microsoft, so I\nchoose Postgres. I don't what to have to rewrite my entire front end, only\nthe back end. I have not yet found a tool to replace Access97 as a front\nend. I am considering TK but creating and printing reports is very limited\nand would take too long for me to migrate all of my reports to tk. This\nsaid, I would really appreciate it if PostgreSQL would support Access97 as a\nfront end. I am sure many will follow the same path if the we make Postgres\nan easy and effective alternative to SQL Server.\n\nI would have a preference to have NULL = a_expr supported. Access97 has\nproblems without it. Of course, I am will to insert this change in my local\nsource tree. \n\nThanks, Michael\n\n> -----Original Message-----\n> From:\tBruce Momjian [SMTP:[email protected]]\n> Sent:\tMonday, May 10, 1999 1:02 PM\n> To:\tThomas G. Lockhart\n> Cc:\tPostgreSQL-development\n> Subject:\t[HACKERS] NULL = col\n> \n> > > > I would like for you to also consider adding the following to gram.y\n> for\n> > > > version 6.5:\n> > > > | NULL_P '=' a_expr\n> > > > { $$ = makeA_Expr(ISNULL, NULL, $3, NULL); }\n> > > > I know there was some discussion about this earlier including\n> comments\n> > > > against this. Access 97 is now generating the following statement\n> and\n> > > > error...\n> > \n> > I'm not certain that this patch should survive. There are at least two\n> > other places in the parser which should be modified for symmetry (the\n> > \"b_expr\" and the default expressions) and I recall that these lead to\n> > more shift/reduce conflicts. 
Remember that shift/reduce conflicts\n> > indicate that some portion of the parser logic can *never* be reached,\n> > which means that some feature (perhaps the new one, or perhaps an\n> > existing one) is disabled.\n> \n> \n> Yes, that is true. There are several cases where we check just for =\n> NULL and not NULL = in the internals, not the grammar.\n> \n> > \n> > There is currently a single shift/reduce conflict in gram.y, and I'm\n> > suprised to find that it is *not* due to the \"NULL_P '=' a_expr\" line.\n> \n> Yep. I got that working with precidence for NULL, I think.\n> \n> > I'm planning on touching gram.y to hunt down the shift/reduce conflict\n> > (from previous work I think it in Stefan's \"parens around selects\"\n> > mods), and I'll look at the NULL_P issue again also.\n> > \n> > I'll reiterate something which everyone probably knows: \"where NULL =\n> > expr\" is *not* standard SQL92, and any company selling products which\n> > implement this rather than the standard \"where expr is NULL\" should\n> > make your \"don't buy\" list, rather than your \"only buy\" list, which is\n> > what they are trying to force you to do :(\n> \n> Yes, but some of the tools output that, and I think that was the\n> complaint. I can go either way on this.\n> \n> > \n> > - Tom\n> \n> Any chance of making your signature Thomas, to not confuse it with Tom\n> Lane?\n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 10 May 1999 14:19:20 -0500",
"msg_from": "Michael J Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] NULL = col"
},
{
"msg_contents": "> I realize that supporting Access97 is going to be a pain and somewhat\n> undesirable, but I think that there are a lot of people like me who choose\n> Access97 because it seemed to be the best tool for the job. I am not overly\n> satisfied with Access97 but it gets the job done. Now, I would like to use\n> a real database and I don't want to invest any more $ in Microsoft, so I\n> choose Postgres. I don't what to have to rewrite my entire front end, only\n> the back end. I have not yet found a tool to replace Access97 as a front\n> end. I am considering TK but creating and printing reports is very limited\n> and would take too long for me to migrate all of my reports to tk. This\n> said, I would really appreciate it if PostgreSQL would support Access97 as a\n> front end. I am sure many will follow the same path if the we make Postgres\n> an easy and effective alternative to SQL Server.\n> \n> I would have a preference to have NULL = a_expr supported. Access97 has\n> problems without it. Of course, I am will to insert this change in my local\n> source tree. \n\nMaybe we only need to have it working where we have already added it,\nand not all over the place. We can release what we have, which hits %95\nof the NULL = col uses, and wait and see if people have problems trying\nto do this non-standard thing someplace else.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 10 May 1999 15:30:13 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] NULL = col"
}
] |
[
{
"msg_contents": "On Sat, 8 May 1999 22:58:01 +0200, \"GreatFreak\" <[email protected]>\nwrote:\n\n>\n>Hey Leute!\n>Kann jemand von euch mir eine gute URL sagen, wo viel Hack - und Crack\n>Programme man herunterladen kann?!\n>Und wo kriegt man heute kostenlos Xenix oder Linux (mit GUI nat�rlich)?\n>\n>\n>\nWo liegt eigentlich .ru? \n\nMein Gott, in Deutschland wimmelt es von Zeitungen mit irgendeinem\nAusprobier-Linux drauf. Lass Dir eine von Deinen kumpels schicken! Und\nschreibe zukuenftig Englisch, die Leute sind hier strikt\ninternational. \n\nZum Crack: Keine Ahnung, ich bin komplett legal. \n\n\n\n-translated- \n\n\nWhere in the world is .ru? \n\nMy god, in Germany the newsstores are crammed with magazines with eval\nversions of Linux, have your companion dudes send you one of these!\nAnd from now on write in English, the people here are strictly\ninternational. \n\nRe Crack: No idea, I'm completely legal. \n\n\n\n\n--\n\"Mom, there is a spider in the bathroom!\" \n\"Are you sure?\" - \"Yes!\" \n\"How many legs has it got?\" \n\"I can't tell - but they are all dangling from a thread!\" (c): RL\n",
"msg_date": "Mon, 10 May 1999 19:26:23 GMT",
"msg_from": "[email protected] (Gabriele Neukam)",
"msg_from_op": true,
"msg_subject": "Re: Frage!"
}
] |
[
{
"msg_contents": "OK, here is the list. Please send me changes. I will post this\nperiodically. I realize many are not show-stoppers, but it doesn't hurt\nto have everyone see the items. I will move them to the main TODO list\nas we get closer to final.\n\n---------------------------------------------------------------------------\n\n\nCan not insert/update oid\nDefault of '' causes crash in some cases\nEXPLAIN SELECT 1 UNION SELECT 2 crashes, rewrite system?\nshift/reduce conflict in grammar, SELECT ... FOR [UPDATE|CURSOR]\nmove UNION stuff into rewrite files\nCLUSTER failure if vacuum has not been performed in a while\nDo we want pg_dump -z to be the default?\nImprove Subplan list handling\nAllow Subplans to use efficient joins(hash, merge) with upper variable\nImprove NULL parameter passing into functions\nTable with an element of type inet, will show \"0.0.0.0/0\" as \"00/0\"\nWhen creating a table with either type inet or type cidr as a primary,unique\n key, the \"198.68.123.0/24\" and \"198.68.123.0/27\" are considered equal\nstore binary-compatible type information in the system somewhere \ncreate table \"AA\" ( x int4 , y serial ); insert into \"AA\" (x) values (1); fails\nadd ability to add comments to system tables using table/colname combination\nimprove handling of oids >2 gigs by making unsigned int usage consistent\n\toidin, oidout, pg_atoi\nAllow ESCAPE '\\' at the end of LIKE for ANSI compliance, or rewrite the\n\tLIKE handling by rewriting the user string with the supplied ESCAPE\nVacuum of tables >2 gigs - NOTICE: Can't truncate multi-segments relation\nMake Serial its own type?\nAdd support for & operator\nFix leak for expressions?, aggregates?\nRemove ERROR: check_primary_key: even number of arguments should be specified\nmake oid use oidin/oidout not int4in/int4out in pg_type.h\nmissing optimizer selectivities for date, etc.\nprocess const=const parts of OR clause in separate pass\nimprove LIMIT processing by using index to limit rows 
processed\nint8 indexing needs work?\nAllow \"col AS name\" to use name in WHERE clause? Is this ANSI? \n\tWorks in GROUP BY\nUpdate reltuples from COPY command\nhash on int2/char only looks a first bytes, and big-endian machines hash poorly\nDocs for INSERT INTO/SELECT INTO fix\nSome CASE() statements involving two tables crash\nTrigger regression test fails\nCREATE FUNCTION fails\nnodeResults.c and parse_clause.c give compiler warnings\nMarkup sql.sgml, Stefan's intro to SQL\nMarkup cvs.sgml, cvs and cvsup howto\nAdd figures to sql.sgml and arch-dev.sgml, both from Stefan\nInclude Jose's date/time history in User's Guide (neat!)\nGenerate Admin, User, Programmer hardcopy postscript\nmove LIKE index optimization handling to the optimizer?\nDROP TABLE/RENAME TABLE doesn't remove extended files, *.1, *.2\nCREATE VIEW ignores DISTINCT?\nORDER BY mixed with DISTINCT causes duplicates\nselect * from test where test in (select * from test) fails with strange error\nMVCC locking, deadlock, priorities?\nMake sure pg_internal.init generation can't cause unreliability\nSELECT ... 
WHERE col ~ '(foo|bar)' works, but CHECK on table always fails\nGROUP BY expression?\nCREATE TABLE t1 (a int4, b int4); CREATE VIEW v1 AS SELECT b, count(b)\n\tFROM t1 GROUP BY b; SELECT count FROM v1; fails\nCREATE INDEX zman_index ON test (date_trunc( 'day', zman ) datetime_ops) fails\n\tindex can't store constant parameters, allow SQL function indexes?\npg_dump of groups problem\nselect 1; select 2 fails when sent not via psql, semicolon problem\ncreate operator *= (leftarg=_varchar, rightarg=varchar, \n\tprocedure=array_varchareq); fails, varchar is reserved word, quotes work\nhave hashjoins use portals, not fixed-size memory\npg_dump -o -D does not work, and can not work currently, generate error?\nproblem with ALTER TABLE on inherited\npg_dump does not preserver NUMERIC precision, psql \\d should show precision\nDROP TABLE leaves INDEX file descriptor open\npg_interal.init, check for reliability on rebuild\nALTER TABLE ADD COLUMN to inherited table put column in wrong place\ndumping out sequences should not be counted in pg_dump display\nGROUP BY can reference columns not in target list\nSELECT name, value FROM t1 as touter WHERE (value/(SELECT AVG(value) \n\tFROM t1 WHERE name = touter.name)) > 0.75; fails\nresno's, sublevelsup corrupt when reaching rewrite system\ncrypt_loadpwdfile() is mixing and (mis)matching memory allocation\n protocols, trying to use pfree() to release pwd_cache vector from realloc()\n3 = sum(x) in rewrite system is a problem\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 10 May 1999 15:32:11 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "6.5 TODO list"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> OK, here is the list. Please send me changes.\n\n> Can not insert/update oid\n\nI think this is very unlikely to get fixed for 6.5, might as well just\nput it on the to-do-later list.\n\n> EXPLAIN SELECT 1 UNION SELECT 2 crashes, rewrite system?\n\nAlready fixed, I believe, unless someone can produce a case that\nfails...\n\n> move UNION stuff into rewrite files\n\nIs this the same as EXPLAIN issue above, or a different concern?\n\n> Vacuum of tables >2 gigs - NOTICE: Can't truncate multi-segments relation\n\nThis is definitely a \"must fix\" item for 6.5, IMHO...\n\n> missing optimizer selectivities for date, etc.\n\nThe selectivity-estimation code needs major work. Again, I'd say just\nshove it to the 6.6 list...\n\n> int8 indexing needs work?\n\nIs this done, or are there still problems?\n\n> hash on int2/char only looks a first bytes, and big-endian machines hash poorly\n\nFixed.\n\n> Some CASE() statements involving two tables crash\n\nFixed.\n\n> CREATE FUNCTION fails\n\nIsn't this fixed? I wasn't aware of any remaining problems...\n\n> Make sure pg_internal.init generation can't cause unreliability\n> ...\n> pg_interal.init, check for reliability on rebuild\n\nDuplicate items, I think.\n\n> GROUP BY expression?\n\nI think I've fixed the problems here.\n\n> problem with ALTER TABLE on inherited\n> ...\n> ALTER TABLE ADD COLUMN to inherited table put column in wrong place\n\nDuplicates?\n\n> GROUP BY can reference columns not in target list\n\nWhat's wrong with that?\n\n> SELECT name, value FROM t1 as touter WHERE (value/(SELECT AVG(value) \n> \tFROM t1 WHERE name = touter.name)) > 0.75; fails\n\nFixed.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 10 May 1999 17:49:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.5 TODO list "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > OK, here is the list. Please send me changes.\n> \n> > Can not insert/update oid\n> \n> I think this is very unlikely to get fixed for 6.5, might as well just\n> put it on the to-do-later list.\n\nMoved.\n\n\n> \n> > EXPLAIN SELECT 1 UNION SELECT 2 crashes, rewrite system?\n> \n> Already fixed, I believe, unless someone can produce a case that\n> fails...\n\nGood. Thanks.\n\n> \n> > move UNION stuff into rewrite files\n> \n> Is this the same as EXPLAIN issue above, or a different concern?\n\nThat is for me. I currently do UNION in postgres.c, shoud perhaps move\nto rewrite system. Not sure yet.\n\n> \n> > Vacuum of tables >2 gigs - NOTICE: Can't truncate multi-segments relation\n> \n> This is definitely a \"must fix\" item for 6.5, IMHO...\n\nYes. It shows we are getting bigger. Most/all? of my commerical\ndatabase clients don't have 2-gig tables.\n\n\n> \n> > missing optimizer selectivities for date, etc.\n> \n> The selectivity-estimation code needs major work. Again, I'd say just\n> shove it to the 6.6 list...\n\nMoved.\n\n\n> \n> > int8 indexing needs work?\n> \n> Is this done, or are there still problems?\n\nNot sure. Just asking. I saw Thomas complaining about some testing\nproblems. Let's what he says.\n\n> \n> > hash on int2/char only looks a first bytes, and big-endian machines hash poorly\n> \n> Fixed.\n\nRemoved.\n\n> > Some CASE() statements involving two tables crash\n> \n> Fixed.\n\nGood.\n\n\n> > CREATE FUNCTION fails\n> \n> Isn't this fixed? 
I wasn't aware of any remaining problems...\n\nOK.\n\n> \n> > Make sure pg_internal.init generation can't cause unreliability\n> > ...\n> > pg_interal.init, check for reliability on rebuild\n> \n> Duplicate items, I think.\n\nYes.\n\n> \n> > GROUP BY expression?\n> \n> I think I've fixed the problems here.\n\nRemoved.\n\n> \n> > problem with ALTER TABLE on inherited\n> > ...\n> > ALTER TABLE ADD COLUMN to inherited table put column in wrong place\n> \n> Duplicates?\n\nYes.\n\n> > GROUP BY can reference columns not in target list\n> \n> What's wrong with that?\n\nIs that not a problem. What possible use would GROUP BY on columns not\nin target list be of use. Maybe it is. I remember someone asking for\nsomething like this. I will remove the item unless someone complains.\nI thought you were the one complaining about it.\n\n> \n> > SELECT name, value FROM t1 as touter WHERE (value/(SELECT AVG(value) \n> > \tFROM t1 WHERE name = touter.name)) > 0.75; fails\n> \n> Fixed.\n\nAlready removed.\n\nThanks for the updates.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 10 May 1999 18:01:18 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] 6.5 TODO list"
},
{
"msg_contents": "> int8 indexing needs work?\n\nafaik it is OK now. It worked with my limited testing anyway, and the\nproblems I thought I saw earlier were much more fundamental.\n\n> Docs for INSERT INTO/SELECT INTO fix\n\nDone. You did it, I did it (but hadn't committed yet), gotta be done\nby now...\n\n> Some CASE() statements involving two tables crash\n\nDone. Tom Lane reenabled the regression test queries which illustrated\nthis.\n\nNew items:\n\nWrite up CASE(), COALESCE(), IFNULL()\nAdd Vadim's isolation level syntax to gram.y, preproc.y. (Currently\ndone with string parsing outside of yacc.)\n\nI have some other stuff on my list for the docs, but will send along a\nlist of what's left in a few days.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 11 May 1999 03:47:33 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.5 TODO list"
},
{
"msg_contents": "\nDone.\n\n\n> > int8 indexing needs work?\n> \n> afaik it is OK now. It worked with my limited testing anyway, and the\n> problems I thought I saw earlier were much more fundamental.\n> \n> > Docs for INSERT INTO/SELECT INTO fix\n> \n> Done. You did it, I did it (but hadn't committed yet), gotta be done\n> by now...\n> \n> > Some CASE() statements involving two tables crash\n> \n> Done. Tom Lane reenabled the regression test queries which illustrated\n> this.\n> \n> New items:\n> \n> Write up CASE(), COALESCE(), IFNULL()\n> Add Vadim's isolation level syntax to gram.y, preproc.y. (Currently\n> done with string parsing outside of yacc.)\n> \n> I have some other stuff on my list for the docs, but will send along a\n> list of what's left in a few days.\n> \n> - Thomas\n> \n> -- \n> Thomas Lockhart\t\t\t\[email protected]\n> South Pasadena, California\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 11 May 1999 00:30:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] 6.5 TODO list"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n> > > GROUP BY can reference columns not in target list\n> >\n> > What's wrong with that?\n>\n> Is that not a problem. What possible use would GROUP BY on columns not\n> in target list be of use. Maybe it is. I remember someone asking for\n> something like this. I will remove the item unless someone complains.\n> I thought you were the one complaining about it.\n\n This can happen if the GROUP BY clause is coming from a view,\n but the grouping columns of the view aren't in the\n targetlist.\n\n Usually the view's grouping is required because of use of\n aggregates in the view, so omitting them isn't a good idea.\n\n I'm actually testing what happens if I use junk TLE's for\n rule generated GROUP BY entries...\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 11 May 1999 15:20:32 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.5 TODO list"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>>>> move UNION stuff into rewrite files\n>> \n>> Is this the same as EXPLAIN issue above, or a different concern?\n\n> That is for me. I currently do UNION in postgres.c, shoud perhaps move\n> to rewrite system. Not sure yet.\n\nI already did it to fix EXPLAIN of unions.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 11 May 1999 09:54:04 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.5 TODO list "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >>>> move UNION stuff into rewrite files\n> >> \n> >> Is this the same as EXPLAIN issue above, or a different concern?\n> \n> > That is for me. I currently do UNION in postgres.c, shoud perhaps move\n> > to rewrite system. Not sure yet.\n> \n> I already did it to fix EXPLAIN of unions.\n\nI see now. Thanks. \n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 11 May 1999 11:09:26 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] 6.5 TODO list"
},
{
"msg_contents": ">\n> Bruce Momjian wrote:\n>\n> > > > GROUP BY can reference columns not in target list\n> > >\n> > > What's wrong with that?\n> >\n> > Is that not a problem. What possible use would GROUP BY on columns not\n> > in target list be of use. Maybe it is. I remember someone asking for\n> > something like this. I will remove the item unless someone complains.\n> > I thought you were the one complaining about it.\n>\n> This can happen if the GROUP BY clause is coming from a view,\n> but the grouping columns of the view aren't in the\n> targetlist.\n>\n> Usually the view's grouping is required because of use of\n> aggregates in the view, so omitting them isn't a good idea.\n>\n> I'm actually testing what happens if I use junk TLE's for\n> rule generated GROUP BY entries...\n\n Oh jesus - what a mess!\n\n I've tested it and it solved the problem with\n\n CREATE TABLE t1 (a int4, b int4);\n CREATE VIEW v1 AS SELECT b, count(b) AS n\n FROM t1 GROUP BY b;\n SELECT n FROM v1;\n\n This one produces now the correct output. But it does not\n handle\n\n SELECT n FROM v1 WHERE 2 < n;\n\n because the group clause isn't added to the aggregate\n subplan, the rule system generated for the qual - that's\n maybe fixable. Worse is, that one of the queries in the rules\n regression test fails, because a GROUP BY attribute wasn't\n found in the targetlist.\n\n The problem is that the planner modifies the targetlist if\n the operation is an INSERT/DELETE by first creating a clean\n one representing the result relation and then moving the old\n expressions into. Then it adds some junk stuff and specially\n marked TLE's from the original targetlist.\n\n BUT - during this (preprocess_targetlist()) all the resno's\n can get reassigned and later the planner tries to match the\n GROUP BY entries only by resno. 
But the resno's in the group\n clauses haven't been adjusted!\n\n Another interesting detail I found is this:\n\n CREATE TABLE t1 (a int4, b int4);\n -- insert some stuff into t1\n CREATE TABLE t2 (b int4, n int4);\n\n -- This one is working correct:\n SELECT b, count(b) FROM t1 GROUP BY b;\n\n -- This one doesn't\n INSERT INTO t2 SELECT b, count(b) FROM t1 GROUP BY b;\n ERROR: Illegal use of aggregates or non-group column in target list\n\n Ooops - I think it should work - especially because the plain\n SELECT returned the correct result. But it fail during the\n parse already and I don't get a parser debug output at all\n from tcop. As soon as this is fixed, I assume a problem with\n a query like this:\n\n INSERT INTO t2 (n) SELECT count(b) FROM t1 GROUP BY b;\n\n (currently it tells \"Aggregates not allowed in GROUP BY\n clause\" - what's totally braindead) The problem I expect is\n that the parser creates resno 1 for \"count(b)\" and a junk TLE\n with resno 2 for \"b\" which is referenced in the group clause.\n preprocess_targetlist() will now create the new targetlist\n with resno 1 = \"b\" = NULL, resno 2 = \"n\" = \"count(b)\" and\n maybe the junk resno 3 for the grouping. Voila, the group\n clause will reference the wrong TLE (still resno 2)!\n\n Currently I think the correct solution would be to expand the\n targetlist already in the rewrite system and leave it\n untouched in the planner. Comments?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 11 May 1999 18:01:54 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.5 TODO list"
},
{
"msg_contents": "On Mon, 10 May 1999, Bruce Momjian wrote:\n\n> > > Vacuum of tables >2 gigs - NOTICE: Can't truncate multi-segments relation\n> > \n> > This is definitely a \"must fix\" item for 6.5, IMHO...\n> \n> Yes. It shows we are getting bigger. Most/all? of my commerical\n> database clients don't have 2-gig tables.\n\nI'm not sure what is involved in fixing this, but...>2gig tables, in the\npast, were iffy to start with. if we can fix before v6.5, fine...if not,\nmy opinion is that isn't a release stopper. Or, rather...is it something\nthat, other then in testing, has affected anyone?\n\n> > > GROUP BY can reference columns not in target list\n> > \n> > What's wrong with that?\n> \n> Is that not a problem. What possible use would GROUP BY on columns not\n> in target list be of use. Maybe it is. I remember someone asking for\n> something like this. I will remove the item unless someone complains.\n> I thought you were the one complaining about it.\n\nEven if we don't remove it, sounds like a feature vs bug fix...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 11 May 1999 14:28:44 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.5 TODO list"
},
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n> The problem is that the planner modifies the targetlist if\n> the operation is an INSERT/DELETE by first creating a clean\n> one representing the result relation and then moving the old\n> expressions into. Then it adds some junk stuff and specially\n> marked TLE's from the original targetlist.\n\nRight --- I imagine that doing that in the planner is a hangover from\nancient history before the rewriter existed at all. It does need to\nbe done, but if you think it'd be better done in the rewriter that's\nfine with me.\n\n> BUT - during this (preprocess_targetlist()) all the resno's\n> can get reassigned and later the planner tries to match the\n> GROUP BY entries only by resno. But the resno's in the group\n> clauses haven't been adjusted!\n\nUgh. I thought that was a pretty unrobust way of doing things :-(\nIf you change the lines in planner.c:\n\n /* Is it safe to use just resno to match tlist and glist items?? */\n if (grpcl->entry->resdom->resno == resdom->resno)\n\nto use equal() on the expr's of the two TLEs, does it work any better?\n\n> Currently I think the correct solution would be to expand the\n> targetlist already in the rewrite system and leave it\n> untouched in the planner. Comments?\n\nOK with me, but possibly rather a major change to be making this late\nin beta...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 11 May 1999 14:20:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.5 TODO list "
},
{
"msg_contents": "> On Mon, 10 May 1999, Bruce Momjian wrote:\n> \n> > > > Vacuum of tables >2 gigs - NOTICE: Can't truncate multi-segments relation\n> > > \n> > > This is definitely a \"must fix\" item for 6.5, IMHO...\n> > \n> > Yes. It shows we are getting bigger. Most/all? of my commerical\n> > database clients don't have 2-gig tables.\n> \n> I'm not sure what is involved in fixing this, but...>2gig tables, in the\n> past, were iffy to start with. if we can fix before v6.5, fine...if not,\n> my opinion is that isn't a release stopper. Or, rather...is it something\n> that, other then in testing, has affected anyone?\n> \n> > > > GROUP BY can reference columns not in target list\n> > > \n> > > What's wrong with that?\n> > \n> > Is that not a problem. What possible use would GROUP BY on columns not\n> > in target list be of use. Maybe it is. I remember someone asking for\n> > something like this. I will remove the item unless someone complains.\n> > I thought you were the one complaining about it.\n> \n> Even if we don't remove it, sounds like a feature vs bug fix...\n\nWe are not saying these are show-stoppers, but things that should be\nresolved before final, no?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 11 May 1999 14:23:26 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] 6.5 TODO list"
},
{
"msg_contents": "> Ugh. I thought that was a pretty unrobust way of doing things :-(\n> If you change the lines in planner.c:\n> \n> /* Is it safe to use just resno to match tlist and glist items?? */\n> if (grpcl->entry->resdom->resno == resdom->resno)\n> \n> to use equal() on the expr's of the two TLEs, does it work any better?\n> \n> > Currently I think the correct solution would be to expand the\n> > targetlist already in the rewrite system and leave it\n> > untouched in the planner. Comments?\n> \n> OK with me, but possibly rather a major change to be making this late\n> in beta...\n\nBut it is already broken. Can't get worse, can it?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 11 May 1999 14:37:41 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] 6.5 TODO list"
},
{
"msg_contents": "On Tue, May 11, 1999 at 02:28:44PM -0300, The Hermit Hacker wrote:\n> On Mon, 10 May 1999, Bruce Momjian wrote:\n> \n> > > > Vacuum of tables >2 gigs - NOTICE: Can't truncate multi-segments relation\n> > > \n> > > This is definitely a \"must fix\" item for 6.5, IMHO...\n> > \n> > Yes. It shows we are getting bigger. Most/all? of my commerical\n> > database clients don't have 2-gig tables.\n> \n> I'm not sure what is involved in fixing this, but...>2gig tables, in the\n> past, were iffy to start with. if we can fix before v6.5, fine...if not,\n> my opinion is that isn't a release stopper. Or, rather...is it something\n> that, other then in testing, has affected anyone?\n\nHrm.. I've been hanging around on the postgres lists for six months or\nso, but I admit, other than a couple of toy projects, I haven't really\ndone much.\n\nHowever.. recently I have gotten involved in a project that\nis.. well.. huge. And it involves databases in a big way. I'd love\nto be able to use Postgres. I'd love to be able to say at the end\nthat _______ (very very large and well-known organization) uses, and\nin fact depends on, FreeBSD, Apache, Perl, and Postgres. We're a bit\nconcerned, though, since the databases will be a _lot_ bigger than\n2GB. Part of the reason for using free software is that if it does\nblow up, we can help fix it.\n\nAnyway, I guess my point is that there is some incentive here for\nhaving a postgres that is completely non-iffy when it comes to >2GB\ndatabases. Shortly we will be filling the system with test data and I\nwill be glad to help out as much as possible (which may not be much in\nthe way of code, as I've got my hands rather full right now).\n-- \nChristopher Masto Senior Network Monkey NetMonger Communications\[email protected] [email protected] http://www.netmonger.net\n\nFree yourself, free your machine, free the daemon -- http://www.freebsd.org/\n",
"msg_date": "Tue, 11 May 1999 14:44:53 -0400",
"msg_from": "Christopher Masto <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.5 TODO list"
},
{
"msg_contents": "Christopher Masto <[email protected]> writes:\n> Anyway, I guess my point is that there is some incentive here for\n> having a postgres that is completely non-iffy when it comes to >2GB\n> databases. Shortly we will be filling the system with test data and I\n> will be glad to help out as much as possible (which may not be much in\n> the way of code, as I've got my hands rather full right now).\n\nGreat, we need some people keeping us honest. I don't think any of the\ncore developers have >2Gb databases (I sure don't).\n\nI think there are two known issues right now: the VACUUM one and\nsomething about DROP TABLE neglecting to delete the additional files\nfor a multi-segment table. To my mind the VACUUM problem is a \"must\nfix\" because you can't really live without VACUUM, especially not on\na huge database. The DROP problem is less severe since you could\nclean up by hand if necessary (not that it shouldn't get fixed of\ncourse, but we have more critical issues to deal with for 6.5).\n\nIn theory, you can test the behavior for >2Gb tables without actually\nneeding to expend much time in creating huge tables: just reduce the\nvalue of RELSEG_SIZE in include/config.h to some more convenient value,\nlike a couple meg, so that you can get a segmented table without so\nmuch effort. This doesn't speak to performance issues of course,\nbut at least you can check for showstopper bugs.\n\n(BTW, has anyone been thinking about increasing OID to more than 4\nbytes? Folks are going to start hitting the 4G-tuples-per-database\nmark pretty soon.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 11 May 1999 15:06:11 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.5 TODO list "
},
{
"msg_contents": "On Tue, 11 May 1999, Bruce Momjian wrote:\n\n> > On Mon, 10 May 1999, Bruce Momjian wrote:\n> > \n> > > > > Vacuum of tables >2 gigs - NOTICE: Can't truncate multi-segments relation\n> > > > \n> > > > This is definitely a \"must fix\" item for 6.5, IMHO...\n> > > \n> > > Yes. It shows we are getting bigger. Most/all? of my commerical\n> > > database clients don't have 2-gig tables.\n> > \n> > I'm not sure what is involved in fixing this, but...>2gig tables, in the\n> > past, were iffy to start with. if we can fix before v6.5, fine...if not,\n> > my opinion is that isn't a release stopper. Or, rather...is it something\n> > that, other then in testing, has affected anyone?\n> > \n> > > > > GROUP BY can reference columns not in target list\n> > > > \n> > > > What's wrong with that?\n> > > \n> > > Is that not a problem. What possible use would GROUP BY on columns not\n> > > in target list be of use. Maybe it is. I remember someone asking for\n> > > something like this. I will remove the item unless someone complains.\n> > > I thought you were the one complaining about it.\n> > \n> > Even if we don't remove it, sounds like a feature vs bug fix...\n> \n> We are not saying these are show-stoppers, but things that should be\n> resolved before final, no?\n\nBut, if htey should be resolved before final, doesn't taht make it a\nshow-stopper? *raised eyebrow*\n\nChristopher brings up a point about the >2gig vacuum problem...I'm kinda\ncurious as to what sort of time frame we'd be looking at on fixing that\n(1)...what sort of demographic this affects (2) and whether or not this\ncould be done as a 'side patch' so that those it does affects would have\naccess to it (3) ...\n\nAssuming that 1 is large and 2 small, 3 could/should be released sometime\nafter v6.5 is released and provided for those that it affects.\n\nIf 1 is small (<1week?), then the point is moot...\n\n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 11 May 1999 16:14:49 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.5 TODO list"
},
{
"msg_contents": "> (BTW, has anyone been thinking about increasing OID to more than 4\n> bytes? Folks are going to start hitting the 4G-tuples-per-database\n> mark pretty soon.)\n\nI have started to think about making our oid type more reliable as an\nunsigned int. Internally, it is unsigned int, but there are some issues\nwhere this is not portably hadled. It would double the current oid\nmaximum.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 11 May 1999 15:42:26 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] 6.5 TODO list"
},
{
"msg_contents": "On Tue, 11 May 1999, Tom Lane wrote:\n> I think there are two known issues right now: the VACUUM one and\n> something about DROP TABLE neglecting to delete the additional files\n> for a multi-segment table. To my mind the VACUUM problem is a \"must\n> fix\" because you can't really live without VACUUM, especially not on\n> a huge database. The DROP problem is less severe since you could\n> clean up by hand if necessary (not that it shouldn't get fixed of\n> course, but we have more critical issues to deal with for 6.5).\n\nI have been looking at the code for dropping the table. The code in\nmdunlink() seems to be correct, and *should* work. Of course it don't, so\nI'll do some more testing tonight and hopefully I can figure out why it\ndoesn't work.\n\nAs far as the VACUUM problem, I still haven't seen that. I have a couple\n~3GB tables, with one growing to 5-6GB in the next month or so. VACUUM\nruns just fine on both.\n\nI just got to thinking, what about indexes > 2GB? With my 3GB table one\nof the index is 540MB. Both with growth I might get there. Does that\nwork and does it use RELSEG_SIZE?\nI would guess that the same function(mdunlink, mdcreate, etc) is called\nfor all DROPs and CREATEs(through DestroyStmt, CreateStmt)? I don't\nunderstand postgres good enough to answer that for sure(but it does make\nsense).\n\nOle Gjerde\n\n",
"msg_date": "Tue, 11 May 1999 15:11:40 -0500 (CDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.5 TODO list "
},
{
"msg_contents": "Hi!\nI found a bug, with $SUBJ - i mean ... so look at this:\nt=> begin;\nBEGIN\nt=> select nextval('some_seq');\nnextval\n-------\n 4\n(1 row)\nt=> rollback;\nROLLBACK\nt=> select nextval('some_seq');\nnextval\n-------\n 5 <<<< five! and NOT 4 again, but 4 expected i think\n(1 row)\n\nOr seqences are special case in transations?\n\n--\n Neko the Servant of Crash\n neko@(kornel.szif.hu|kva.hu) http://lsc.kva.hu\n\n",
"msg_date": "Tue, 11 May 1999 21:00:20 +0000 (/etc/localtime)",
"msg_from": "Vazsonyi Peter <[email protected]>",
"msg_from_op": false,
"msg_subject": "sequences vs. transactions"
},
{
"msg_contents": "[email protected] writes:\n> I have been looking at the code for dropping the table. The code in\n> mdunlink() seems to be correct, and *should* work. Of course it don't, so\n> I'll do some more testing tonight and hopefully I can figure out why it\n> doesn't work.\n\nActually it looks a little peculiar to me. The FileUnlink() routine in\nfd.c is defined to delete (unlink) the physical file after closing the\nvirtual file descriptor for it. If a VFD is being held for each segment\nof the table then the calls to FileNameUnlink ought to be unnecessary,\nnot to mention possible causes of trouble on NFS disks since there might\nstill be open file descriptors for the files.\n\n> I just got to thinking, what about indexes > 2GB? With my 3GB table one\n> of the index is 540MB. Both with growth I might get there. Does that\n> work and does it use RELSEG_SIZE?\n\nindex_destroy in backend/catalog/index.c looks to be coping with only\none segment file ... not sure why it doesn't go through md.c for this.\n\nGood luck tracking it all down...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 11 May 1999 19:44:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.5 TODO list "
},
{
"msg_contents": "> Christopher Masto <[email protected]> writes:\n> > Anyway, I guess my point is that there is some incentive here for\n> > having a postgres that is completely non-iffy when it comes to >2GB\n> > databases. Shortly we will be filling the system with test data and I\n> > will be glad to help out as much as possible (which may not be much in\n> > the way of code, as I've got my hands rather full right now).\n> \n> Great, we need some people keeping us honest. I don't think any of the\n> core developers have >2Gb databases (I sure don't).\n\nI have one, but it's not for production, just a test data.\n\n> I think there are two known issues right now: the VACUUM one and\n> something about DROP TABLE neglecting to delete the additional files\n> for a multi-segment table. To my mind the VACUUM problem is a \"must\n> fix\" because you can't really live without VACUUM, especially not on\n> a huge database. The DROP problem is less severe since you could\n> clean up by hand if necessary (not that it shouldn't get fixed of\n> course, but we have more critical issues to deal with for 6.5).\n\nI have to admit that >2GB support is one of the most important issues\nbut not so sure if it's a show stopper for 6.5.\n\no even if it's solved, still other many issues for huge databases\nstill remain:\n\t- 4G-tuples-per-database problem (as you said)\n\t- index files cannot extend >2GB\n\t- vacuuming a huge table will take unacceptable long time (I\n\thave heard vacuum speeds up for 6.5, but I have not tried it\n\tyet for a big table. so this is just a my guess)\n\no it will take long time to solve all of them.\n---\nTatsuo Ishii\n",
"msg_date": "Wed, 12 May 1999 09:57:54 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.5 TODO list "
},
{
"msg_contents": "> Hi!\n> I found a bug, with $SUBJ - i mean ... so look at this:\n> t=> begin;\n> BEGIN\n> t=> select nextval('some_seq');\n> nextval\n> -------\n> 4\n> (1 row)\n> t=> rollback;\n> ROLLBACK\n> t=> select nextval('some_seq');\n> nextval\n> -------\n> 5 <<<< five! and NOT 4 again, but 4 expected i think\n> (1 row)\n> \n> Or seqences are special case in transations?\n\nSequence numbers are not re-used like normal transactions. The reason\nfor this is that we would have to lock the sequence table for the entire\nduration of the transaction until it was committed if we wanted to do\nthat.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 11 May 1999 22:05:58 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: sequences vs. transactions"
},
{
"msg_contents": "> > I just got to thinking, what about indexes > 2GB? With my 3GB table one\n> > of the index is 540MB. Both with growth I might get there. Does that\n> > work and does it use RELSEG_SIZE?\n> \n> index_destroy in backend/catalog/index.c looks to be coping with only\n> one segment file ... not sure why it doesn't go through md.c for this.\n\nAdded to Open Items:\n\n\tMulti-segment indexes?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 11 May 1999 22:11:22 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] 6.5 TODO list"
},
{
"msg_contents": "> o even if it's solved, still other many issues for huge databases\n> still remain:\n> \t- 4G-tuples-per-database problem (as you said)\n\nThis will be hard.\n\n> \t- index files cannot extend >2GB\n> \t- vacuuming a huge table will take unacceptable long time (I\n> \thave heard vacuum speeds up for 6.5, but I have not tried it\n> \tyet for a big table. so this is just a my guess)\n> \n> o it will take long time to solve all of them.\n\nVacuum is already faster. Vacuum across segments, multi-segment\nindexes may be possible for 6.5.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 11 May 1999 22:15:13 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] 6.5 TODO list"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n>\n> > Ugh. I thought that was a pretty unrobust way of doing things :-(\n> > If you change the lines in planner.c:\n> >\n> > /* Is it safe to use just resno to match tlist and glist items?? */\n> > if (grpcl->entry->resdom->resno == resdom->resno)\n> >\n> > to use equal() on the expr's of the two TLEs, does it work any better?\n> >\n> > > Currently I think the correct solution would be to expand the\n> > > targetlist already in the rewrite system and leave it\n> > > untouched in the planner. Comments?\n> >\n> > OK with me, but possibly rather a major change to be making this late\n> > in beta...\n>\n> But it is already broken. Can't get worse, can it?\n\n And just using equal() wouldn't be enough. I tested what\n happens if the rewriter add's junk TLE's for group clauses\n coming from a view. If the query is a SELECT, anything is\n fine, but if it is an INSERT ... SELECT then the added TLE's\n needed by the group clauses get lost during\n preprocess_targetlist().\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Wed, 12 May 1999 12:26:50 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] 6.5 TODO list"
},
{
"msg_contents": "\nOn Tue, 11 May 1999, Bruce Momjian wrote:\n> Sequence numbers are not re-used like normal transactions. The reason\n> for this is that we would have to lock the sequence table for the entire\n> duration of the transaction until it was committed if we wanted to do\n> that.\n> \n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \nCorrect. ;-/. Thanx.\n\n",
"msg_date": "Sun, 16 May 1999 15:11:55 +0000 (/etc/localtime)",
"msg_from": "Vazsonyi Peter <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: sequences vs. transactions"
}
] |
[
{
"msg_contents": "Works for me.\n\n> -----Original Message-----\n> From:\tBruce Momjian [SMTP:[email protected]]\n> Sent:\tMonday, May 10, 1999 1:30 PM\n> To:\tMichael J Davis\n> Cc:\tThomas G. Lockhart; PostgreSQL-development\n> Subject:\tRe: [HACKERS] NULL = col\n> \n> > I realize that supporting Access97 is going to be a pain and somewhat\n> > undesirable, but I think that there are a lot of people like me who\n> choose\n> > Access97 because it seemed to be the best tool for the job. I am not\n> overly\n> > satisfied with Access97 but it gets the job done. Now, I would like to\n> use\n> > a real database and I don't want to invest any more $ in Microsoft, so I\n> > choose Postgres. I don't what to have to rewrite my entire front end,\n> only\n> > the back end. I have not yet found a tool to replace Access97 as a\n> front\n> > end. I am considering TK but creating and printing reports is very\n> limited\n> > and would take too long for me to migrate all of my reports to tk. This\n> > said, I would really appreciate it if PostgreSQL would support Access97\n> as a\n> > front end. I am sure many will follow the same path if the we make\n> Postgres\n> > an easy and effective alternative to SQL Server.\n> > \n> > I would have a preference to have NULL = a_expr supported. Access97 has\n> > problems without it. Of course, I am will to insert this change in my\n> local\n> > source tree. \n> \n> Maybe we only need to have it working where we have already added it,\n> and not all over the place. We can release what we have, which hits %95\n> of the NULL = col uses, and wait and see if people have problems trying\n> to do this non-standard thing someplace else.\n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 10 May 1999 14:32:46 -0500",
"msg_from": "Michael J Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] NULL = col"
},
{
"msg_contents": "> Works for me.\n\nYeah, yeah, don't panic... ;)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 11 May 1999 03:25:34 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] NULL = col"
}
] |
[
{
"msg_contents": "I have reconfigured my system so I automatically connect to IRC and\nAOL's Instant messenger when I start Netscape, so people will see me\nmore active in these areas.\n\nIn IRC, I am on channel #postgresql(see FAQ for info), and in AOL/IM, I\nam bmomjian. (All my neighbors are on AOL. What can I say?)\n\nAlso, I have fixed my webcam, so you can use my home page listed below\nto see it again. I have moved the camera upstairs to look out the\nwindow.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n",
"msg_date": "Mon, 10 May 1999 17:31:15 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "IRC, webcam, IM"
},
{
"msg_contents": "On Mon, 10 May 1999, Bruce Momjian wrote:\n\n> In IRC, I am on channel #postgresql(see FAQ for info), and in AOL/IM, I\n> am bmomjian. (All my neighbors are on AOL. What can I say?)\n\n\tNote here that we are all on EFNet...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 11 May 1999 01:20:13 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] IRC, webcam, IM"
}
] |
[
{
"msg_contents": "I have reconfigured my system so I automatically connect to IRC and\nAOL's Instant messenger when I start Netscape, so people will see me\nmore active in these areas.\n\nIn IRC, I am on channel #postgresql(see FAQ for info), and in AOL/IM, I\nam bmomjian. (All my neighbors are on AOL. What can I say?)\n\nAlso, I have fixed my webcam, so you can use my home page listed below\nto see it again. I have moved the camera upstairs to look out the\nwindow.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 10 May 1999 17:31:15 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "IRC, webcam, IM"
}
] |
[
{
"msg_contents": "\nHere is the newest list.\n\n---------------------------------------------------------------------------\n\nCan not insert/update oid\nDefault of '' causes crash in some cases\nEXPLAIN SELECT 1 UNION SELECT 2 crashes, rewrite system?\nshift/reduce conflict in grammar, SELECT ... FOR [UPDATE|CURSOR]\nmove UNION stuff into rewrite files\nCLUSTER failure if vacuum has not been performed in a while\nDo we want pg_dump -z to be the default?\nImprove Subplan list handling\nAllow Subplans to use efficient joins(hash, merge) with upper variable\nImprove NULL parameter passing into functions\nTable with an element of type inet, will show \"0.0.0.0/0\" as \"00/0\"\nWhen creating a table with either type inet or type cidr as a primary,unique\n key, the \"198.68.123.0/24\" and \"198.68.123.0/27\" are considered equal\nstore binary-compatible type information in the system somewhere \ncreate table \"AA\" ( x int4 , y serial ); insert into \"AA\" (x) values (1); fails\nadd ability to add comments to system tables using table/colname combination\nimprove handling of oids >2 gigs by making unsigned int usage consistent\n\toidin, oidout, pg_atoi\nAllow ESCAPE '\\' at the end of LIKE for ANSI compliance, or rewrite the\n\tLIKE handling by rewriting the user string with the supplied ESCAPE\nVacuum of tables >2 gigs - NOTICE: Can't truncate multi-segments relation\nMake Serial its own type?\nAdd support for & operator\nFix leak for expressions?, aggregates?\nRemove ERROR: check_primary_key: even number of arguments should be specified\nmake oid use oidin/oidout not int4in/int4out in pg_type.h\nmissing optimizer selectivities for date, etc.\nprocess const=const parts of OR clause in separate pass\nimprove LIMIT processing by using index to limit rows processed\nint8 indexing needs work?\nAllow \"col AS name\" to use name in WHERE clause? Is this ANSI? 
\n\tWorks in GROUP BY\nUpdate reltuples from COPY command\nhash on int2/char only looks a first bytes, and big-endian machines hash poorly\nDocs for INSERT INTO/SELECT INTO fix\nSome CASE() statements involving two tables crash\nTrigger regression test fails\nCREATE FUNCTION fails\nnodeResults.c and parse_clause.c give compiler warnings\nMarkup sql.sgml, Stefan's intro to SQL\nMarkup cvs.sgml, cvs and cvsup howto\nAdd figures to sql.sgml and arch-dev.sgml, both from Stefan\nInclude Jose's date/time history in User's Guide (neat!)\nGenerate Admin, User, Programmer hardcopy postscript\nmove LIKE index optimization handling to the optimizer?\nDROP TABLE/RENAME TABLE doesn't remove extended files, *.1, *.2\nCREATE VIEW ignores DISTINCT?\nORDER BY mixed with DISTINCT causes duplicates\nselect * from test where test in (select * from test) fails with strange error\nMVCC locking, deadlock, priorities?\nMake sure pg_internal.init generation can't cause unreliability\nSELECT ... WHERE col ~ '(foo|bar)' works, but CHECK on table always fails\nGROUP BY expression?\nCREATE TABLE t1 (a int4, b int4); CREATE VIEW v1 AS SELECT b, count(b)\n\tFROM t1 GROUP BY b; SELECT count FROM v1; fails\nCREATE INDEX zman_index ON test (date_trunc( 'day', zman ) datetime_ops) fails\n\tindex can't store constant parameters, allow SQL function indexes?\npg_dump of groups problem\nselect 1; select 2 fails when sent not via psql, semicolon problem\ncreate operator *= (leftarg=_varchar, rightarg=varchar, \n\tprocedure=array_varchareq); fails, varchar is reserved word, quotes work\nhave hashjoins use portals, not fixed-size memory\npg_dump -o -D does not work, and can not work currently, generate error?\nproblem with ALTER TABLE on inherited\npg_dump does not preserver NUMERIC precision, psql \\d should show precision\nDROP TABLE leaves INDEX file descriptor open\npg_interal.init, check for reliability on rebuild\nALTER TABLE ADD COLUMN to inherited table put column in wrong place\ndumping out 
sequences should not be counted in pg_dump display\nGROUP BY can reference columns not in target list\nresno's, sublevelsup corrupt when reaching rewrite system\ncrypt_loadpwdfile() is mixing and (mis)matching memory allocation\n protocols, trying to use pfree() to release pwd_cache vector from realloc()\n3 = sum(x) in rewrite system is a problem\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 10 May 1999 18:02:06 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "New 6.5 Open Items list"
}
] |
[
{
"msg_contents": "\nSorry. Here is the new list.\n\n\n---------------------------------------------------------------------------\n\nDefault of '' causes crash in some cases\nshift/reduce conflict in grammar, SELECT ... FOR [UPDATE|CURSOR]\nmove UNION stuff into rewrite files\nCLUSTER failure if vacuum has not been performed in a while\nDo we want pg_dump -z to be the default?\nImprove Subplan list handling\nAllow Subplans to use efficient joins(hash, merge) with upper variable\nImprove NULL parameter passing into functions\nTable with an element of type inet, will show \"0.0.0.0/0\" as \"00/0\"\nWhen creating a table with either type inet or type cidr as a primary,unique\n key, the \"198.68.123.0/24\" and \"198.68.123.0/27\" are considered equal\nstore binary-compatible type information in the system somewhere \ncreate table \"AA\" ( x int4 , y serial ); insert into \"AA\" (x) values (1); fails\nadd ability to add comments to system tables using table/colname combination\nimprove handling of oids >2 gigs by making unsigned int usage consistent\n\toidin, oidout, pg_atoi\nAllow ESCAPE '\\' at the end of LIKE for ANSI compliance, or rewrite the\n\tLIKE handling by rewriting the user string with the supplied ESCAPE\nVacuum of tables >2 gigs - NOTICE: Can't truncate multi-segments relation\nMake Serial its own type?\nAdd support for & operator\nFix leak for expressions?, aggregates?\nRemove ERROR: check_primary_key: even number of arguments should be specified\nmake oid use oidin/oidout not int4in/int4out in pg_type.h\nprocess const=const parts of OR clause in separate pass\nimprove LIMIT processing by using index to limit rows processed\nint8 indexing needs work?\nAllow \"col AS name\" to use name in WHERE clause? Is this ANSI? 
\n\tWorks in GROUP BY\nUpdate reltuples from COPY command\nDocs for INSERT INTO/SELECT INTO fix\nTrigger regression test fails\nnodeResults.c and parse_clause.c give compiler warnings\nMarkup sql.sgml, Stefan's intro to SQL\nMarkup cvs.sgml, cvs and cvsup howto\nAdd figures to sql.sgml and arch-dev.sgml, both from Stefan\nInclude Jose's date/time history in User's Guide (neat!)\nGenerate Admin, User, Programmer hardcopy postscript\nmove LIKE index optimization handling to the optimizer?\nDROP TABLE/RENAME TABLE doesn't remove extended files, *.1, *.2\nCREATE VIEW ignores DISTINCT?\nORDER BY mixed with DISTINCT causes duplicates\nselect * from test where test in (select * from test) fails with strange error\nMVCC locking, deadlock, priorities?\nMake sure pg_internal.init generation can't cause unreliability\nSELECT ... WHERE col ~ '(foo|bar)' works, but CHECK on table always fails\nCREATE TABLE t1 (a int4, b int4); CREATE VIEW v1 AS SELECT b, count(b)\n\tFROM t1 GROUP BY b; SELECT count FROM v1; fails\nCREATE INDEX zman_index ON test (date_trunc( 'day', zman ) datetime_ops) fails\n\tindex can't store constant parameters, allow SQL function indexes?\npg_dump of groups problem\nselect 1; select 2 fails when sent not via psql, semicolon problem\ncreate operator *= (leftarg=_varchar, rightarg=varchar, \n\tprocedure=array_varchareq); fails, varchar is reserved word, quotes work\nhave hashjoins use portals, not fixed-size memory\npg_dump -o -D does not work, and can not work currently, generate error?\npg_dump does not preserver NUMERIC precision, psql \\d should show precision\nDROP TABLE leaves INDEX file descriptor open\nALTER TABLE ADD COLUMN to inherited table put column in wrong place\ndumping out sequences should not be counted in pg_dump display\nresno's, sublevelsup corrupt when reaching rewrite system\ncrypt_loadpwdfile() is mixing and (mis)matching memory allocation\n protocols, trying to use pfree() to release pwd_cache vector from realloc()\n3 = sum(x) in 
rewrite system is a problem\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 10 May 1999 18:03:13 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Newer list"
}
] |
[
{
"msg_contents": "Well, since all you guys have all the info on optimizing PostgreSQL,\nsomeone might like to submit their wealth of knowledge to tunelinux.com\n\nThey have a PostgreSQL section, but it's still empty! Even simple things\nwould probably be very useful.\n\nTaral\n\n",
"msg_date": "Mon, 10 May 1999 19:58:58 -0500 (CDT)",
"msg_from": "Taral <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimization info"
},
{
"msg_contents": "> Well, since all you guys have all the info on optimizing PostgreSQL,\n> someone might like to submit their wealth of knowledge to tunelinux.com\n> \n> They have a PostgreSQL section, but it's still empty! Even simple things\n> would probably be very useful.\n> \n> Taral\n> \n> \n> \n\nCan you point them to our FAQ, item 2.11 currently. This is updated as\nwe learn new things, and should be easy to maintain.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 10 May 1999 21:33:55 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Optimization info"
},
{
"msg_contents": "On Mon, 10 May 1999, Bruce Momjian wrote:\n\n> Can you point them to our FAQ, item 2.11 currently. This is updated as\n> we learn new things, and should be easy to maintain.\n\nThe FAQ is fscked up. It seems that the web server is sending text/plain\ninstead of text/html...\n\nTaral\n\n",
"msg_date": "Mon, 10 May 1999 20:52:28 -0500 (CDT)",
"msg_from": "Taral <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Optimization info"
},
{
"msg_contents": "> On Mon, 10 May 1999, Bruce Momjian wrote:\n> \n> > Can you point them to our FAQ, item 2.11 currently. This is updated as\n> > we learn new things, and should be easy to maintain.\n> \n> The FAQ is fscked up. It seems that the web server is sending text/plain\n> instead of text/html...\n> \n> Taral\n> \n> \n\n\nI just checked, and the english FAQ is fine on the main server, and one\nof the mirrors I tested.\n\n\thttp://www.postgresql.org/docs/faq-english.shtml\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 10 May 1999 22:43:23 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Optimization info"
},
{
"msg_contents": "On Mon, 10 May 1999, Bruce Momjian wrote:\n\n> I just checked, and the english FAQ is fine on the main server, and one\n> of the mirrors I tested.\n> \n> \thttp://www.postgresql.org/docs/faq-english.shtml\n\nNetscape must have glitched. Sorry.\n\nTaral\n\n",
"msg_date": "Mon, 10 May 1999 21:44:24 -0500 (CDT)",
"msg_from": "Taral <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Optimization info"
}
] |
[
{
"msg_contents": "> I have reconfigured my system so I automatically connect to IRC and\n> AOL's Instant messenger when I start Netscape, so people will see me\n> more active in these areas.\n> \n> In IRC, I am on channel #postgresql(see FAQ for info), and in AOL/IM, I\n> am bmomjian. (All my neighbors are on AOL. What can I say?)\n> \n> Also, I have fixed my webcam, so you can use my home page listed below\n> to see it again. I have moved the camera upstairs to look out the\n> window.\n\nLet me add one more thing. I typically did not use IRC because it was a\npain to check the window every so often to see if anything was\nhappening, and I often forgot to even start it up.\n\nI am now using zircon for IRC and tik for AOL/IM. Both programs can be\nminimized on startup of netscape, so if nothing happens, I don't even\nsee they are running.\n\nIf someone says something, the window de-iconifies and displays itself. \nThis makes it much more convenient for me. Both use tcl 8.0.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 10 May 1999 21:41:35 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: IRC, webcam, IM"
},
{
"msg_contents": "> Let me add one more thing. I typically did not use IRC because it was a\n> pain to check the window every so often to see if anything was\n> happening, and I often forgot to even start it up.\n> I am now using zircon for IRC and tik for AOL/IM. Both programs can be\n> minimized on startup of netscape, so if nothing happens, I don't even\n> see they are running.\n\nI found and built zircon-1.18; what version are you running? I see\nscrappy on there but not yourself. Can you remind me which IRC server\nyou had recommended in the past?\n\nbtw, zircon looks pretty neat.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Thu, 13 May 1999 13:52:21 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: IRC, webcam, IM"
},
{
"msg_contents": "On Thu, 13 May 1999, Thomas Lockhart wrote:\n\n> > Let me add one more thing. I typically did not use IRC because it was a\n> > pain to check the window every so often to see if anything was\n> > happening, and I often forgot to even start it up.\n> > I am now using zircon for IRC and tik for AOL/IM. Both programs can be\n> > minimized on startup of netscape, so if nothing happens, I don't even\n> > see they are running.\n> \n> I found and built zircon-1.18; what version are you running? I see\n> scrappy on there but not yourself. Can you remind me which IRC server\n> you had recommended in the past?\n> \n> btw, zircon looks pretty neat.\n\nI usually use one of irc.phoenix.net or irc.rift.com ... servers tend to\nhave \"per domain\" restrictions on who can connect...I believe\nirc.phoenix.net is one that has none except maybe number of connections\nfrom a domain...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 13 May 1999 11:32:07 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: IRC, webcam, IM"
},
{
"msg_contents": "> > Let me add one more thing. I typically did not use IRC because it was a\n> > pain to check the window every so often to see if anything was\n> > happening, and I often forgot to even start it up.\n> > I am now using zircon for IRC and tik for AOL/IM. Both programs can be\n> > minimized on startup of netscape, so if nothing happens, I don't even\n> > see they are running.\n> \n> I found and built zircon-1.18; what version are you running? I see\n> scrappy on there but not yourself. Can you remind me which IRC server\n> you had recommended in the past?\n\nI am on irc.ais.net, but that shouldn't matter. All sites should see\neach other.\n\nI am on the same one as Marc, and am on now. I am not on all the time\nlike Scrappy, because I have a dialup connection to the Internet, and I\nam not at my computer all the time, so why have zircon/netscape running.\nWhen I am on, you will see me. I am sure we will run into each other\nin the next day or so.\n\n> \n> btw, zircon looks pretty neat.\n\nI thought so. No checking to see if someone said something, and each\ngroup is in its own window. Nice.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 13 May 1999 12:06:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: IRC, webcam, IM"
}
] |
[
{
"msg_contents": "Here is the overview:\n\n I have not seen good program for converting postgres table into DBF file,\n and reverse. That is why I wrote this pg2xbase utility. This is my first\n program written in C++, then there could be some lacks.\n\n\n> On Mon, 10 May 1999, Bruce Momjian wrote:\n> \n> > \n> > Is this address on our web site. It should be.\n> \n> If I had an inkling as to what it is/does ...\n> \n> Vince.\n> \n> > \n> > \n> > \n> > > > > Is there a dbase3 <=> postgresql converter available at all? I found\n> > > > > dbf2sql v2.2 which converts to (but not from) Postgres95, with notes such as\n> > > > > \n> > > > > -p primary:\n> > > > > Select the primary key. You have to give the exact\n> > > > > field-name. (Only for dbf2msql, Postgres95 doesn't have a primary key)\n> > > > > \n> > > > > which of course PostgreSQL does. Anyone know of something more recent?\n> > > > \n> > > > I heared about some project called \"xbase\". Try to search FreshMeat for\n> > > > \"xbase\", \"dbf postgres\" or such...\n> > > > \n> > > http://w3.man.torun.pl/~makler/prog/pg2xbase.html\n> > > this address is worth of checking - as I see - there are here utilities\n> > > for convertion from and to postgres.\n> > > \tHope this helps\n> > > \tRem\n> > > -------------------------------------------------------------------*------------\n> > > Remigiusz Sokolowski e-mail: [email protected] * *\t\t\n> > > -----------------------------------------------------------------*****----------\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 10 May 1999 22:56:20 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "web site addition"
},
{
"msg_contents": "\nOn 11-May-99 Bruce Momjian wrote:\n> Here is the overview:\n> \n> I have not seen good program for converting postgres table into DBF file,\n> and reverse. That is why I wrote this pg2xbase utility. This is my first\n> program written in C++, then there could be some lacks.\n\nThe main problem of such conversion is numerouse DBF format versions. \nIt different mainly in memo fields and floating numbers.\n\nSample conversion can easy be done by PHP or PERL, for advanced one\nit's better to use ODBC under MS Win. \n\n---\nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n* there will come soft rains ...\n",
"msg_date": "Tue, 11 May 1999 09:20:45 +0400 (MSD)",
"msg_from": "Dmitry Samersoff <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] web site addition"
},
{
"msg_contents": "\nOn 11-May-99 Bruce Momjian wrote:\n> Here is the overview:\n> \n> I have not seen good program for converting postgres table into DBF file,\n> and reverse. That is why I wrote this pg2xbase utility. This is my first\n> program written in C++, then there could be some lacks.\n\nHmmmm. I'm wondering. How many others here have written little utilities\nto perform other one time or occasional tasks? We can put them all on one\nweb page.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n",
"msg_date": "Tue, 11 May 1999 15:03:10 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] web site addition"
},
{
"msg_contents": "On 11-May-99 Vince Vielhaber wrote:\n> \n> On 11-May-99 Bruce Momjian wrote:\n>> Here is the overview:\n>> \n>> I have not seen good program for converting postgres table into DBF file,\n>> and reverse. That is why I wrote this pg2xbase utility. This is my first\n>> program written in C++, then there could be some lacks.\n> \n> Hmmmm. I'm wondering. How many others here have written little utilities\n> to perform other one time or occasional tasks? We can put them all on one\n> web page.\n\nThis is too tiny utilities to make life easy ;-))\n\nPS:\n I was completlly off-road due setting up our voice->ip gateway\nAre new web site is going on?\n\n---\nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n* there will come soft rains ...",
"msg_date": "Fri, 21 May 1999 16:55:57 +0400 (MSD)",
"msg_from": "Dmitry Samersoff <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [HACKERS] web site addition"
}
] |
[
{
"msg_contents": "subscribe\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, and other goodies at\n http://donb.photo.net\n",
"msg_date": "Mon, 10 May 1999 20:22:06 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": true,
"msg_subject": ""
}
] |
[
{
"msg_contents": "Very funny :-)\n\n\t-----Original Message-----\n\tFrom:\tThomas Lockhart [SMTP:[email protected]]\n\tSent:\tMonday, May 10, 1999 9:26 PM\n\tTo:\tMichael J Davis\n\tCc:\t'Bruce Momjian'; PostgreSQL-development\n\tSubject:\tRe: [HACKERS] NULL = col\n\n\t> Works for me.\n\n\tYeah, yeah, don't panic... ;)\n\n\t - Thomas\n\n\t-- \n\tThomas Lockhart\t\t\t\[email protected]\n\tSouth Pasadena, California\n",
"msg_date": "Mon, 10 May 1999 22:55:41 -0500",
"msg_from": "Michael J Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [HACKERS] NULL = col"
}
] |
[
{
"msg_contents": "I found this on a web page, and thought it was interesting. Should I\nthrow it into tools or our web page?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n\n\n\n\n\n\nSQL For Multiple DBMS\n\n\n\n\n\n\n\nSQL for Multiple DBMS\r\nBy Rob Kraft\r\nAudience:This document is for people developing applications that access data on different Database Management Systems (DBMS).When developing such applications, you will usually NOT want to take advantage of the enhanced features of a specific DBMS (such as stored procedures), because such features are difficult to maintain across multiple DBMS.Your goal is generally to minimize the customized work you will have to do for each DBMS.You want to minimize the customization of your database administration as well as minimize the customization of the SQL you write.\r\nBackground:I created this document after learning first hand how much variation there is in SQL across DBMS while developing an application.\r\nYour help:I intend this document to never be complete.I hope that some readers will send me corrections where you see that they are needed.Also please send alternate solutions for the problematic areas.If you would like to include issues with additional DBMS, please send that information to me as well.\r\nDBMS:The Database Management Systems currently covered in this document include:\nMicrosoft Access 97 (I believe all information applies to Access 2.0 and Access 95 as well).\nMicrosoft SQL Server 6.5 (I believe all information applies to 4.2, 6.0, and 7.0 as well, except where noted).\nOracle 8.0.4 (I believe all information applies to 7.3 versions as well).\nDB2 5.2\nInformix 7.2\r\nAbout authors:I, Rob Kraft, am the starting author of this document.My hope is to have many of you send contributions and become co-authors of this document.I work 
for Advanced Technology Solutions in Leawood, Kansas. This document is on my personal web site at http://ourworld.compuserve.com/homepages/robkraft/professnl/sqldb.doc. My co-worker Jackie Rager contributed the information about DB2 and Informix. We hope that you benefit from our learning curve.\r\nFreeware: We don't care what you do with this information as long as you give credit where credit is due, especially if you intend to make a profit from this information.\r\nDate: This version of the document is March 23, 1999.\r\nContacts: You can contact the primary author of this document at [email protected].\n\nBy Rob Kraft | Page 1 of unknown # of pages | 05/10/99\n\nTable of Contents\n\nString Delimiter\r\nHow to handle single quotes in your data\r\nPlacing quotes around numeric data\r\nCase on Column and Table Names\r\nCase in SQL keywords\r\n*Case sensitivity in data\r\n*Wildcard characters for LIKE keyword\r\n-Syntax for Inner Join\r\n*Syntax for Left Join\r\n-AS clause\r\n*Concatenation operand\r\nSyntax of Delete keyword\r\n*Substring functions\r\n*Trim functions\r\nDistinct clause - eliminate duplicate rows\r\nSupport for automatic counters, identity columns, autonumbers\r\nSpecial character support in column names\r\n*Convert to string function\r\n*Dealing with Dates and Times\r\n*Retrieving the current date\r\nConstraints can be used to enforce referential integrity\r\nCascade Deletes\r\nReferencing table names that have owners\r\nSymbol for Not Greater Than and Not Equal To\r\nSymbols for NULL Testing\r\nDatatype comparisons\r\nDBMS Limitations\r\nMiscellaneous, but IMPORTANT notes\n\n* Denotes that the problem requires a different solution for different DBMSs AND requires writing different front-end SQL.\r\n- Denotes that my recommendation deviates from ANSI standards.\r\n\n\nString 
Delimiter\r\nThe string delimiter is used in SQL to let the DBMS know the beginning and end of a value that contains string, text, or character data. The name 'Kraft' is an example of such data.\r\nExample: SELECT * FROM CUSTOMERS WHERE LNAME = 'KRAFT'\r\nRecommendation: Use SINGLE QUOTES as your string delimiter.\r\nANSI SQL-92: Single quotes are the standard.\r\nAccess 97: Supports single quotes or double quotes.\r\nMS SQL Server 6.5: Supports single quotes or double quotes (set driver option for double quotes).\r\nOracle 8.0.4: Supports single quotes only.\r\nDB2 5.2: Supports single quotes only.\r\nInformix 7.2: Supports single quotes or double quotes.\r\nAccording to ANSI standards, double quotes are used to delineate column names. This allows column names with unusual characters, such as spaces, to be identified.\n\n\r\nHow to handle single quotes in your data\r\n\r\nIf you accept that a single quote is your string delimiter, you should determine how you will handle single quotes contained within your user data. The city name Lee's Summit is an example of such data.\r\nExample: SELECT * FROM CUSTOMERS WHERE CITY = 'LEE''S SUMMIT'\r\nRecommendation: Use TWO SINGLE QUOTES to tell the DBMS to look for one single quote.\r\nANSI SQL-92: Using two single quotes is the standard.\r\nThis method of identifying single quotes in data is accepted by all of these DBMS.\n\n\r\nPlacing quotes around numeric data\r\n\r\nSome DBMS allow you to place quotes around numeric data as well as string data. This is not the case for most DBMS.\n\r\nExample: SELECT * FROM CUSTOMERS WHERE ID = '15'\r\nRecommendation: Do NOT put quotes around numeric data.\r\nANSI SQL-92: Quotes are NOT allowed around numeric data.\r\nAccess 97: Quotes ARE allowed around numeric data.\r\nMS SQL Server 6.5: Quotes are NOT allowed around numeric data.\r\nOracle 8.0.4: Quotes are NOT allowed around numeric data.\r\nDB2 5.2: Quotes are NOT allowed around numeric data.\r\nInformix 7.2: Quotes ARE allowed around numeric 
data.\n\n\r\nCase on Column and Table Names\r\n\r\nAre the table names and column names of the DBMS case sensitive? What should you do when writing SQL to access the tables and columns? It is obviously more work for programmers if they have to know the case of the table and column names when writing their SQL. Therefore I recommend that programmers always use upper case when writing SQL, and for case-sensitive databases, the database administrator needs only to make sure upper case is used when creating all the tables.\r\nExample: CREATE TABLE CUSTOMERS (ID…)\r\n\r\nRecommendation: Use UPPER CASE when writing SQL. Use UPPER CASE when creating tables on case-sensitive DBMS.\n\r\nANSI SQL-92: ?\r\nAccess 97: Names are NOT case sensitive.\r\nMS SQL Server 6.5: Names are NOT case sensitive.\r\nOracle 8.0.4: Names ARE case sensitive if the column names are enclosed in double quotes. Note that many ODBC drivers place double quotes around column names by default. You may need to download the latest Oracle ODBC driver.\r\nDB2 5.2: Names are NOT case sensitive.\r\nInformix 7.2: Names are NOT case sensitive.\n\n\r\nCase in SQL keywords\r\n\r\nWhat case should you use when typing the SQL keywords of SELECT, WHERE, FROM, etc.? I am not aware of the case of the SQL statements being relevant, but I personally use all uppercase.\r\nExample: SELECT * FROM CUSTOMERS WHERE LNAME = 'KRAFT'\r\nRecommendation: Use upper case for SQL keywords.\r\nANSI SQL-92: ?\r\nCase is irrelevant to all the DBMS covered here. \n\n\r\n*Case sensitivity in data\r\n\r\nIs the data stored and accessed in the DBMS case sensitive? Do the results of \"Select * from customers where lname = 'Kraft'\" return the same results as \"Select * from customers where lname = 'KRAFT'\"? 
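[Editor's note, not part of the original document: the case-folding workaround this section describes, comparing UPPER(column) against an upper-cased literal, can be sketched as below. It is a minimal illustration using Python's built-in sqlite3 module as a stand-in for a case-sensitive DBMS such as Oracle; the table and column names are invented for the example.]

```python
# Editor's sketch (hypothetical table/column names): exact comparison
# versus UPPER() folding on a case-sensitive engine. SQLite compares
# text case-sensitively by default, like Oracle, so it serves as a
# stand-in here. Parameter binding (?) sidesteps the single-quote
# escaping issues discussed earlier in this document.
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE customers (lname TEXT)')
conn.executemany('INSERT INTO customers VALUES (?)',
                 [('Kraft',), ('KRAFT',), ('Smith',)])

# Exact comparison matches only the row stored in upper case.
exact = conn.execute('SELECT COUNT(*) FROM customers WHERE lname = ?',
                     ('KRAFT',)).fetchone()[0]

# Folding the column with UPPER() matches both spellings, at the
# likely cost of losing index use on the column.
folded = conn.execute('SELECT COUNT(*) FROM customers WHERE UPPER(lname) = ?',
                      ('KRAFT',)).fetchone()[0]
print(exact, folded)  # 1 2
```

The trade-offs of this approach (extra upper-cased columns, lost index utilization) are exactly the ones the section lists among its workaround options.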
\r\nExample:SELECT * FROM CUSTOMERS WHERE LNAME = 'Kraft'\nSELECT * FROM CUSTOMERS WHERE LNAME = 'KRAFT'\r\nRecommendation:Be aware of the differences!!!\r\nANSI SQL-92:?\r\nAccess 97The result sets are the same.\r\nMS SQL Server 6.5The result sets are the same (unless a case-sensitive sort order is used by the DBMS).\r\nOracle 8.0.4The result sets are NOT the same.\r\nDB2 5.2The result sets are NOT the same.\r\nInformix 7.2The result sets are NOT the same.\r\nProgrammatically, the easiest solution is to store all data in upper case.Unfortunately, many users do not want all data to be displayed in upper case.This problem primarily affects where clauses.If you want to store mixed case data, here are some options for getting the case-INSENSITIVE results from your where clauses:\n Create an extra column in the table for each column that your user might search on.Store the uppercase version of the data in the extra column.This requires larger tables and also a different table structure for DBMS that do not support case-insensitivity.\n Use a function to convert the columns to upper case for comparison purposes.For example, in Oracle \"Select lname, fname, state from customers where UPPER(lname) = 'KRAFT'\".The drawbacks of this approach include:1) your SQL is different for different DBMS, 2) you must figure out when to apply the UPPER function to your SQL, and 3) you lose performance and perhaps index utilization by converting the column value.\r\nAs I mentioned in the opening, if you can share a better solution, I would LOVE to hear about it.\n\n\r\n*Wildcard characters for LIKE keyword\r\n\r\n\r\nThe LIKE keyword is used in conjunction with wildcard characters that may vary across DBMS.You may want to retrieve all customers with a last name that begins with a 'K', any value for the second character, has a third character of 'A', and any value for remaining characters.\r\nExample:SELECT * FROM CUSTOMERS WHERE LNAME LIKE = 'K_A%'\r\nRecommendation:Use % for multiple 
characters, and _ for single characters.\r\nANSI SQL-92:% for multiple characters, and _ for single characters is the standard.\r\n\r\nAccess 97* is used for multiple characters, and ? is used for a single character.However, if you connect to the database using MS OLE DB (or ADO), and you have applied MDAC 2.1 (released 2/99), it will used % and _ instead.\r\n\r\nMS SQL Server 6.5% for multiple characters, and _ for single characters.\r\nOracle 8.0.4% for multiple characters, and _ for single characters.\r\nDB2 5.2% for multiple characters, and _ for single characters.\r\nInformix 7.2% for multiple characters, and _ for single characters.(The MATCHES clause used * and _).\n\n\r\n-Syntax for Inner Join\r\n\r\n\r\nAn inner join is the most common method of joining two tables.An inner join between Customers and Orders would return a result set containing all the customers that had orders, but it would not include customers that had no orders or any orders that did not have a customer (see left join below for that).\r\nExample:SELECT * FROM CUSTOMERS, ORDERS WHERE CUSTOMERS.ID = ORDERS.CUSTID\r\nRecommendation:Do NOT use ANSI standard join syntax unless you will only be using ANSI compliant DBMS.\n\r\nANSI SQL-92:Including the keyword INNER JOIN is the standard.SELECT * FROM CUSTOMERS INNER JOIN ORDERS ON CUSTOMERS.ID = ORDERS.CUSTID\r\n\r\nAccess 97Supports INNER JOIN or join in WHERE clause.\r\nMS SQL Server 6.5Supports INNER JOIN or join in WHERE clause.\r\nOracle 8.0.4Supports WHERE clause join only.\r\nDB2 5.2Supports INNER JOIN or join in WHERE clause.\r\nInformix 7.2Supports INNER JOIN or join in WHERE clause. 
(check on the syntax!)\r\nAny DBMS will support joining tables based on a condition in the where clause, but there are two drawbacks to doing this:1) Spelling out the INNER JOIN syntax is more readable because you can easily distinguish the join clause from the parts of the where clause used to select which rows to return, and 2) Some DBMS provide incorrect results when joining in the where clause (particularly for queries involving null comparisons) because the DBMS does not know to perform the join criteria prior to the where criteria.\n\n\r\n*Syntax for Left Join\r\n\r\n\r\nA left join (or right join) retrieves ALL the rows from one table even if there is no match in the other table.A left join between Customers and Orders would return a result set containing ALL the customers, even those without orders (the order fields returned would be null), but it would not include any orders that did not have a customer.\r\nExample:SELECT * FROM CUSTOMERS LEFT JOIN ORDERS ON CUSTOMERS.ID = ORDERS.CUSTID\r\nRecommendation:Never use a RIGHT JOIN - use a LEFT JOIN instead.\r\n\nUse ANSI LEFT JOIN syntax.\r\nANSI SQL-92:Including the keyword LEFT JOIN is the standard.\r\n\r\nAccess 97Supports ONLY the ANSI notation for LEFT JOINs.\r\n\r\nMS SQL Server 6.5Supports the ANSI notation.Also has its own notation (Select * from Customers, Orders where customers.ID *= orders.custid).(Notice that *= means left join, =* means right join).\r\nOracle 8.0.4Does NOT support ANSI LEFT JOIN.Has its own notation (Select * from Customers, Orders where customers.ID (+) = orders.custid).(Notice that the (+) follows the table name to be fully joined).\r\n\r\nDB2 5.2Supports ONLY the ANSI notation for LEFT JOINs.\r\nInformix 7.2?\r\nA LEFT JOIN means that you will take ALL the rows from the first table listed (the one on the left) and only matching rows from the second table listed (the one on the right).The reason you don't need a RIGHT JOIN is that you can simple switch the order of the tables in 
your query to always make it a LEFT JOIN.\r\nIf you will only be using ODBC to access your DBMS, then you may be able to use ODBC extended SQL syntax.This applies particularly if you are going against just SQL Server and Oracle.The ODBC syntax allows you to write the SQL in a single format in your program.That format is:\r\nSELECT * FROM {oj CUSTOMERS LEFT OUTER JOIN ORDERS ON CUSTOMERS.ID = ORDERS.CUSTID}\r\nODBC will then translate the left outer join into the correct format for the DBMS. \n\n\r\n-AS clause\r\n\r\n\r\nThe AS clause is often used in two places, for substituting column names and for substituting table names.While the AS clause is supported by all DBMS here for column names, it is not for table names.\r\nExample:SELECT LNAME AS LASTNAME FROM CUSTOMERS C1 WHERE C1.ID = 51 \r\nRecommendation:Do NOT use the AS clause for table aliases, DO use the AS clause for column aliases\n\r\nANSI SQL-92:The standard is to use the AS clause for BOTH a column alias and a table alias.\r\n\r\nAccess 97AS clause is accepted for both columns and tables, not required for tables.\r\nMS SQL Server 6.5AS clause is accepted for both columns and tables, not required for tables.\r\nOracle 8.0.4AS clause is ONLY accepted for columns, not for table aliases.\r\nDB2 5.2AS clause is accepted for both columns and tables, not required for tables.\r\nInformix 7.2AS clause is ONLY accepted for columns, not for table aliases.\n\n\r\n*Concatenation operand\r\n\r\n\r\nYou may want to combine the data in multiple table columns to a single column in your resultset.This is done by concatenating the columns in the SQL.Unfortunately all DBMS may not support the same concatenation mechanism, therefore I recommend concatenating the columns in the calling program after receiving the resultset whenever possible.\r\nExample:SELECT LNAME || ', ' || FNAME FROM CUSTOMERS WHERE ID = 41\r\nRecommendation:Avoid concatenation in SQL.\n\r\nANSI SQL-92:|| (two straight bars) is the ANSI standard 
concatenation character.\r\n\r\nAccess 97Does NOT support the ANSI standard.Use & or + instead.\r\nMS SQL Server 6.5Does NOT support the ANSI standard.Use + instead.\r\nOracle 8.0.4Supports the ANSI standard.\r\nDB2 5.2Supports the ANSI standard or the keyword CONCAT.\r\nInformix 7.2Supports the ANSI standard.\n\n\r\nSyntax of Delete keyword\r\n\r\n\r\nSome DBMS allow a column list on a delete command, although it is irrelevant.\r\nExample:DELETE FROM CUSTOMERS WHERE LNAME = 'KRAFT'\r\nRecommendation:Do not list * or columns in the delete command\r\nANSI SQL-92:DELETE FROM is the ANSI standard.\r\nAccess 97Supports ANSI Standard.Also allows \"DELETE * FROM …\"\r\nMS SQL Server 6.5Supports ANSI Standard only.\r\nOracle 8.0.4Supports ANSI Standard only.\r\nDB2 5.2?.\r\nInformix 7.2?.\n\n\r\n*Substring functions\r\n\r\n\r\nMost DBMS offer a function to allow you to extract a range of characters from a string.Unfortunately the command for doing this varies across DBMS.All the functions here take 3 parameters: starting point, number of characters, column name.\r\nExample:SELECT substring(4,2,SSN) FROM CUSTOMERS WHERE ID = 41\r\nRecommendation:Avoid obtaining substrings in SQL.\n\r\nANSI SQL-92:SUBSTRING.\r\n\r\nAccess 97Use MID function.\r\nMS SQL Server 6.5Use SUBSTRING function.\r\nOracle 8.0.4Use SUBSTR function.\r\nDB2 5.2Use SUBSTR function.\r\nInformix 7.2Use T1[n,m]?\n\n\r\n*Trim functions\r\n\r\n\r\nMost DBMS offer a function to allow you to trim spaces from a string.Unfortunately the command for doing this varies across DBMS.\r\nExample:SELECT trim(LNAME) FROM CUSTOMERS WHERE ID = 41\r\nRecommendation:Avoid trimming strings in SQL.\n\r\nANSI SQL-92:TRIM.\r\n\r\nAccess 97Use TRIM, LTRIM, or RTRIM.\r\n\r\nMS SQL Server 6.5Use LTRIM and RTRIM.In 6.5, empty strings are returned as a single space.In 7.0, empty strings will be returned as empty strings (no single space).\r\n\r\nOracle 8.0.4Use LTRIM and RTRIM.\r\nDB2 5.2Use LTRIM and RTRIM.\r\nInformix 7.2Use TRIM 
(lead, trail, both).\n\n\r\nDistinct clause - eliminate duplicate rows\r\n\r\n\r\nMost DBMS offer a function to allow you to eliminate duplicate rows from the resultset.This may cause the data to be sorted on some DBMS.\r\nExample:SELECT DISTINCT LNAME FROM CUSTOMERS\r\nRecommendation:Use DISTINCT to eliminate duplicate rows.\n\r\nANSI SQL-92:DISTINCT is the ANSI standard.\r\n\r\nAccess 97DISTINCT is supported.An alias is DISTINCTROW.\r\n\r\nMS SQL Server 6.5DISTINCT is supported.\r\n\r\nOracle 8.0.4DISTINCT is supported.\r\nDB2 5.2DISTINCT is supported.\r\nInformix 7.2DISTINCT is supported.An alias is UNIQUE.\n\n\r\nSupport for automatic counters, identity columns, autonumbers\r\n\r\n\r\nAn automatic counter is an ID generated by the DBMS when a row is inserted.The column name of the counter is generally not allowed on the insert, and the distinct value placed in the counter column is maintained by the DBMS.Not all DBMS support a counter that is as easy to use as that provided with Microsoft Access, but the concept can be simulated.\r\nExample:INSERT INTO CUSTOMERS VALUES()\r\n\r\nRecommendation:Use counters but know how to simulate them in each DBMS so that no changes are required to your programs that make calls to the DBMS.\r\nANSI SQL-92:COUNTER().\r\n\n\r\nAccess 97Supported - Autonumber.Begins at 1 and increments by 1.Access 2000 will support the option to begin at any integral value and increment or decrement by any integral value.\r\nMS SQL Server 6.5Supported - Identity column.Start at any integral value and increment or decrement by any integral value.\r\nOracle 8.0.4Not supported.Oracle provides an independent counter object called a SEQUENCE.You must create a SEQUENCE for each counter, then create an insert trigger for each table to call the SEQUENCE and apply the next value to the column in the table.\r\nDB2 5.2Not supported.You will need to create your own table to track the next number to be assigned.Then create an insert trigger for each table to 
get the next value from your counter table, and update the counter table.\r\nInformix 7.2Supported - Identity column.Start at any integral value and increment or decrement by any integral value.\r\n\n\n\r\nSpecial character support in column names\r\n\r\n\r\nEach DBMS supports non-alphanumeric characters in the names of tables and columns.You will want to select names that are supported across all DBMSs, and this limits your special character options.\r\nExample:CREATE TABLE CUSTOMERS (F_NAME VARCHAR2(20), PHONE# CHAR(15))\r\n\r\nRecommendation:Only use the underscore and alphanumerics in table and column names.Start all table and column names with an alphabetic letter.\r\nANSI SQL-92:?\r\n\n\r\nAccess 97Any character except: .!`[]\r\nMS SQL Server 6.5Only allows: _ $ # \r\nOracle 8.0.4\r\nDB2 5.2\r\nInformix 7.2\n\n\n\r\n*Convert to string function\r\n\r\n\r\nMost DBMS offer a function to allow you to convert a number to a string.Unfortunately the command for doing this varies across DBMS.\r\nExample:SELECT STR(BLDG_NO) & ADDRESS AS FULLADDRESS FROM CUSTOMERS \r\nRecommendation:Avoid convert to string functions in SQL.\n\r\nANSI SQL-92:CAST.\r\n\r\nAccess 97Use STR().\r\n\r\nMS SQL Server 6.5Use STR() or CONVERT().\r\n\r\nOracle 8.0.4Use TO_CHAR()\r\nDB2 5.2\r\nInformix 7.2\n\n\r\n*Dealing with Dates and Times\r\n\r\n\r\nThis is one of the most complex areas to manage.I suggest you resolve how this will be handled first in your development.The question is, how do I let the DBMS know I am providing a date value, and how to I retrieve rows matching a particular date value.Suppose you want to retrieve all rows modified on 2/27/1999 before 8:00.While the query does not sound unreasonable, you may find it very challenging on different servers.For this first case, let's assume the dates and times are stored in different columns.\r\nExample:SELECT * FROM CUSTOMERS WHERE MOD_DT = '2/27/1999' AND MOD_TM < '8:00 AM' \r\nRecommendation:Write a function in your calling 
program to format the date fields.\r\n\r\nExample: strSQL = \"SELECT * FROM CUSTOMERS WHERE MOD_DT = \" & fnFormatDate(dtBegin, \"Oracle\")\r\nANSI SQL-92: ?\r\n\r\nAccess 97: Use pound signs (#) to denote date and time data.\r\nWHERE MOD_DT = #2/27/1999# AND MOD_TM < #8:00 AM#\nAccess understands many date formats: mm/dd/yy, mm-dd-yy, yy-dd-mm, mm.dd.yy, etc.\r\nMS SQL Server 6.5: Use single quotes (') to denote date and time data.\r\nWHERE MOD_DT = '2/27/1999' AND MOD_TM < '8:00 AM'\nSQL Server understands many date formats: mm/dd/yy, mm-dd-yy, yy-dd-mm, mm.dd.yy, etc.\r\nOracle 8.0.4: By default, Oracle only recognizes dd-mmm-yy. You can change the default format to something else, such as mm/dd/yyyy, by issuing \"ALTER SESSION SET NLS_DATE_FORMAT = 'MM/DD/YYYY';\"\nHowever, times stored in date fields expect the same format and therefore must be converted by the SQL.\nUse single quotes (') to denote date and time data.\nWHERE MOD_DT = '2/27/1999' AND TO_DATE(TO_CHAR(MOD_TM, 'HH:MI:SS PM'), 'HH:MI:SS PM') < '8:00 AM'\r\nDB2 5.2: Use single quotes (') to denote date and time data.\r\nWHERE MOD_DT = '2/27/1999' AND MOD_TM < '8:00 AM'\nDB2 understands many date formats: mm/dd/yy, mm-dd-yy, yy-dd-mm, mm.dd.yy, etc.\r\nInformix 7.2: Use single quotes (') to denote date and time data.\r\nWHERE MOD_DT = '2/27/1999' AND MOD_TM < '8:00 AM'\nInformix understands many date formats: mm/dd/yy, mm-dd-yy, yy-dd-mm, mm.dd.yy, etc.\r\n\r\nThis is the area I had the most difficulty with for Oracle; I would greatly love to hear about some better approaches.\r\nFor any of these servers, if dates and times are stored within the same datetime column, another complexity usually arises. In trying to obtain all rows modified on 2/27/1999, you may be tempted to code WHERE MOD_DTTM = '2/27/1999'. However, since no time was specified, the DBMS assumes you also mean where time is equal to 12:00am, and your result set is empty. For this scenario, I recommend this: WHERE MOD_DTTM >= '2/27/1999' AND MOD_DTTM < 
'2/28/1999'.Notice that in dealing with dates and times, it is almost a certainty that you will need to code your front end (but hopefully your middle tier) differently for each DBMS.\r\nIf you will only be using ODBC to access your DBMS, then you may be able to use ODBC extended SQL syntax.This applies particularly if you are going against just SQL Server and Oracle.The ODBC syntax allows you to write the SQL in a single format in your program.That format is:\r\nSELECT * FROM CUSTOMERS WHERE MOD_DT < {D '1999-02-27'}\r\nODBC will then translate the date into the correct format for the DBMS. \n\n\n\n\n*Retrieving the current date\n\n\r\nMost DBMS offer a function to retrieve the current date.Unfortunately the command for doing this varies across DBMS.\r\nExample:SELECT CURRENT_DATE() \r\nRecommendation:Retrieve the current date from the network OS instead of the DBMS.\n\r\nANSI SQL-92:?\r\n\r\nAccess 97SELECT DATE() FROM TABLE WHERE 1 > 0.You must provide the name of any existing table.\r\n\r\nMS SQL Server 6.5SELECT GETDATE()\r\n\r\nOracle 8.0.4SELECT SYSDATE FROM DUAL\r\nDB2 5.2?\r\nInformix 7.2SELECT DATE(CURRENT)\n\n\r\nConstraints can be used to enforce referential integrity\r\n\r\n\r\nMost DBMS support ANSI standard constraints to manage Primary Key/Foreign Key relationships between tables (referential integrity).This form of integrity is usually more robust and less overhead than any others.\n\r\nExample:ALTER TABLE ORDERS ADD CONSTRAINT FK1 FOREIGN KEY (CUSTID) REFERENCES CUSTOMERS (ID)\r\n\r\nRecommendation:Use PK and FK to enforce referential integrity.\n\r\nANSI SQL-92:Supported - syntax as shown above is very similar across DBMS.\r\n\r\nAccess 97Supported\r\n\r\nMS SQL Server 6.5Supported\r\n\r\nOracle 8.0.4Supported\r\nDB2 5.2Supported\r\nInformix 7.2Supported\n\n\r\nCascade Deletes\r\n\r\n\r\nSome DBMS allow your database relationships to cascade deletes that occur.For example, if a customer is deleted, and cascade delete is enable for the customer to 
orders relationship, then the orders for that customer will also be automatically deleted. If cascade delete is not enabled, the delete for the customer will fail if orders for the customer exist. Cascading of a delete is a business rule and should probably be handled in the business objects; however, DBMS do such a good job of cascading deletes that it makes good sense to use the DBMS to perform the action. The drawback is that different approaches are required on each DBMS to create a cascade delete, but fortunately it does not affect the SQL in your calling programs. Many DBMS allow you to create a CASCADE DELETE via a CONSTRAINT or a TRIGGER. In every case I would recommend a CONSTRAINT over a TRIGGER. They are easier to maintain and are more efficient.\n\r\nExample: DELETE FROM ORDERS WHERE CUSTID = 41\r\nDELETE FROM CUSTOMERS WHERE ID = 41\r\n\r\nRecommendation: Use business objects to enforce cascade deletes.\n\r\nANSI SQL-92: The CASCADE DELETE constraint is defined by ANSI.\r\n\r\nAccess 97: Cascade delete can be implemented by constraints - a property of the relationship definition.\r\n\r\nMS SQL Server 6.5: Cascade delete can only be implemented by triggers. This is true in SQL Server 7.0 as well. When applying a cascade delete trigger you will have to remove the PK/FK constraint, or else the PK/FK constraint will block the delete before the trigger fires. If you remove the PK/FK constraints, you will need to add INSERT and UPDATE triggers to enforce referential integrity.\r\nOracle 8.0.4: Cascade delete can be implemented by constraints - a property of the relationship definition. It can also be implemented by triggers.\r\n\r\nDB2 5.2: ?\r\nInformix 7.2: ?\n\n\r\nReferencing table names that have owners\r\n\r\n\r\nOn many DBMS the same table name can be created by multiple people because the DBMS distinguishes each table by the table owner name. This can complicate the writing of SQL if you have to account for the owner of the table name. You definitely want to have all your 
tables owned by the same account to make it unnecessary to adjust your SQL.\n\r\nExample: SELECT * FROM ROB.CUSTOMERS WHERE LNAME = 'KRAFT'\r\n\r\nRecommendation: Do not specify the table OWNER name for every column/table listed in the SQL.\n\r\nANSI SQL-92: not applicable\r\n\r\nAccess 97: Table owner names are not applicable.\r\n\r\nMS SQL Server 6.5: Make dbo the owner of all tables.\r\nOracle 8.0.4: Have a single account create all the tables. Create a SYNONYM for accessing the tables without specifying the owner account name.\r\n\r\nDB2 5.2: ?\r\nInformix 7.2: ?\n\n\r\nSymbol for Not Greater Than and Not Equal To\r\n\r\n\r\nSome DBMS support multiple ways of specifying \"Not Greater Than\" or \"Not Equal To\". Fortunately the ANSI standard works on all the DBMS mentioned here.\n\r\nExample: SELECT * FROM CUSTOMERS WHERE ID <= 41 AND ID <> 15\r\n\r\nRecommendation: Use <= for \"Not Greater Than\" and <> for \"Not Equal To\".\n\r\nANSI SQL-92: Use <= for \"Not Greater Than\" and <> for \"Not Equal To\".\r\n\r\nAccess 97: <= and <>\r\n\r\nMS SQL Server 6.5: <= and <>. Also supports !> and !=\r\nOracle 8.0.4: <= and <>. Also supports !=\r\n\r\nDB2 5.2: <= and <>\r\nInformix 7.2: <= and <>. Also supports !=\n\n\r\nSymbols for NULL Testing\r\n\r\n\r\nSome DBMS support multiple ways of specifying NULL or NOT NULL for comparisons. Fortunately the ANSI standard works on all the DBMS mentioned here.\n\r\nExample: SELECT * FROM CUSTOMERS WHERE FNAME IS NULL\r\n\r\nRecommendation: Use IS NULL and IS NOT NULL\n\r\nANSI SQL-92: IS NULL and IS NOT NULL\r\n\r\nAccess 97: IS NULL and IS NOT NULL \r\nMS SQL Server 6.5: IS NULL and IS NOT NULL. Also supports = NULL and <> NULL.\r\n\r\nOracle 8.0.4: IS NULL and IS NOT NULL\r\n\r\nDB2 5.2: \r\nInformix 7.2: IS NULL and IS NOT NULL\n\n\n\r\nDatatype comparisons\r\n\r\n\r\nYou also need to be cautious in choosing column datatypes. When writing applications that will run on multiple DBMS you need to select datatypes that are supported across all the DBMS. Here are some of the 
datatypes I would recommend.\n\nDatatypes | Access 97 | Microsoft SQL Server 6.5 | Oracle 8.04 | DB2 7.2 | Informix 7.2\nInteger (-2 billion to +2 billion) | Long Integer | int | Number(10,0) | Integer | Integer\nSmallInt (-32000 to +32000) | Integer | smallint | Number(5,0) | Smallint | Smallint\nVarChar (Text data like names) | Text | varchar | varchar2 | Varchar | Varchar\nBit (1 or 0) | Yes/No | bit | Number(1,0) | ? | ?\nDates and Times | Date/Time | datetime | Date | Timestamp | Datetime\nFloating Point Numbers | Double | float or decimal | Double Precision | Double | Float or decimal(p)\nCurrency | Currency | money | Number(9,2) | Decimal(n,2) | money\nDefault values for bit? | Default of Yes or No (but use 1 or 0) | Default of 1 or 0 | Default of 1 or 0 | Default of 1 or 0 | Default of 1 or 0\nBLOBs/Memos/Large Datatypes | Memo | text | Long | Blob/clob/dbclob | Blob/Text\nBLOB columns per table | Many per table | Many per table | 1 per table (at end of table???) | ? | ?\n\n\r\nNotes:\r\n\r\n1. For SQL Server, and perhaps other DBMS, bit datatypes cannot be NULL (though they can be NULL in SQL 7.0). I recommend making bit datatypes a required field in all DBMS, and applying a default value to the field in the DBMS.\r\n\r\n2. 
Oracle only allows one LONG field per table (MEMO, TEXT, BLOB). For this reason, and for performance reasons, I recommend creating a separate two-column table to store in one column the BLOB, and in the other column the ID used to retrieve it.\r\n\r\n\r\n\n\r\nDBMS Limitations\r\n\r\n\r\nYou also need to be aware of the limitations of each DBMS. Take particular note of the maximum length of column and table names, because if you use a 34 character table name on one DBMS, the name may be too long for the other DBMS. Along with that, try very hard to avoid any table or column names that could be a keyword on any DBMS.\n\nLimitations | Recommendation | Access 97 | Microsoft SQL Server 6.5 | Oracle 8.04 | DB2 7.2 | Informix 7.2\nColumn name lengths | Maximum of 14 | 64 | 30 | ? | 18 | 18\nNumber of fields in a table | | 255 | 250 (1024 in 7.0) | 1000 | 500 | \nNumber of characters in a record (excluding Memo and OLE Object fields) | | 2000 | 1962 (8060 in SQL 7.0) | | 4005 | \nTable size | | 1 gigabyte | 1 terabyte (1,000,000 TB in 7.0) | | 64 * 999 gigabytes | \nNumber of characters in a Text field | | 255 | 255 (8000 in SQL 7.0) | char = 2k, varchar = 4k | char=254 varchar=4k longvarchar=32k | 255\nNumber of characters in a Memo field | | 65,535 or 1 gigabyte | 2 gigabytes | 4 gigabytes | 2 gigabytes | 2 gigabytes\nNumber of indexes in a table | | 32 | 250 | | 32767 | \nNumber of fields in an index | | 10 | 16 | | 16 | \nLongest SQL Stmt | 2k - use insert followed by update for more | 64k | 128k | 64k | 32767 | \nMost columns in select list | | 255 | 4096 | 250? | 500 | \n\n\r\nMiscellaneous, but IMPORTANT 
notes\r\n\n\r\n\r\n1. The result of a concatenation with a NULL value may vary across DBMS. In some the result will return NULL for the entire value; in others it will return the non-null concatenated values. This behavior changed for SQL Server between versions 6.5 and 7.0.\r\n\r\n2. The driver you use to access the DBMS may alter the SQL before it gets there. This is particularly true if you pass the SQL through ODBC.\r\n\r\n3. The SQL syntax for Crystal Reports and other reporting engines may be drastically different from what you send directly from your program to the DBMS. You should evaluate other DBMS interfaces, such as reporting packages, that you will use when designing your application.\r\n\r\n4. Microsoft's new data access approach OLE/DB (ADO) may help or hurt your efforts. You will have to see what form of SQL the OLE/DB interface expects for each DBMS. An important note for Microsoft Access developers is that the newest version of Microsoft's data access approach (MDAC 2.1) changes the LIKE wildcards from * and ? to the ANSI standard % and _.\r\n\r\n5. Using linked tables in Access is one mechanism for avoiding having to write different SQL for different DBMS. However, contrary to what you may read, this is not always a fast approach. For some types of queries it is faster than native connections to the back-end SQL Server or Oracle server, but for others, particularly joins, it is intolerably slow. To test this, create links to two tables on your back-end server that are related and have a lot of rows. Write an SQL statement that joins the tables and returns all rows. Run the query and issue a MoveLast command. Go to lunch. Come back and check the results.\n\n\n\n\r\nBy Rob Kraft - Page 3 of unknown # of pages - 05/10/99\n\n\n\r\nDocument converted from Word 8 by \r\nMSWordView (mswordview 0.5.9)\r\nMSWordView written by Caolan McNamara",
"msg_date": "Tue, 11 May 1999 01:10:03 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "database comparisons"
}
] |
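The cascade-delete recommendation in the thread above can be sketched with a constraint-based foreign key. SQLite is used here purely as a self-contained stand-in (it is not one of the DBMS the article compares, and it requires an explicit pragma to enforce foreign keys), with table and column names following the CUSTOMERS/ORDERS example from the text:

```python
import sqlite3

# SQLite stands in for the compared DBMS; its ANSI-style
# REFERENCES ... ON DELETE CASCADE clause illustrates the
# constraint-based cascade the article recommends over triggers.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite needs this opt-in
conn.execute("CREATE TABLE CUSTOMERS (ID INTEGER PRIMARY KEY, LNAME TEXT)")
conn.execute(
    "CREATE TABLE ORDERS ("
    " ORDERID INTEGER PRIMARY KEY,"
    " CUSTID INTEGER REFERENCES CUSTOMERS (ID) ON DELETE CASCADE)"
)
conn.execute("INSERT INTO CUSTOMERS VALUES (41, 'KRAFT')")
conn.execute("INSERT INTO ORDERS VALUES (1, 41)")
conn.execute("INSERT INTO ORDERS VALUES (2, 41)")

# Deleting the customer cascades to the two orders automatically,
# so no separate DELETE FROM ORDERS statement is needed.
conn.execute("DELETE FROM CUSTOMERS WHERE ID = 41")
remaining = conn.execute("SELECT COUNT(*) FROM ORDERS").fetchone()[0]
print(remaining)
```

Without the ON DELETE CASCADE clause the same DELETE would instead fail with a foreign-key violation while the orders exist, which is the non-cascading behavior the article describes.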
[
{
"msg_contents": "I am reading comp.databases.informix, and someone is asking for a tool\nlike pgaccess for Informix:\n\n\tIs there a utility for Informix similar to\n\tPGaccess? (Used on PostGreSQL)\n\nMan, that's market penetration, when people are asking for our tools on\ncommercial databases.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 11 May 1999 01:30:55 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgaccess and Informix"
}
] |
[
{
"msg_contents": "Sereval days I can't compile 6.5 cvs sources under Linux, \nI got sporadical errors so I suspected it's compiler/hardware\nproblem. I'm using egcs 1.12 release.\nI moved to another Linux box, download cvs again, used gcc 2.7.2\nand still has a problem:\n\nmake[2]: Entering directory `/usr2/u/postgres/cvs/pgsql/src/backend/postmaster'\ngcc -I../../include -I../../backend -O2 -m486 -Wall -Wmissing-prototypes -I.. -c postmaster.c -o postmaster.o\npostmaster.c: In function `ServerLoop':\npostmaster.c:665: too few arguments to function `gettimeofday'\npostmaster.c:704: too few arguments to function `gettimeofday'\npostmaster.c:663: warning: unused variable `tz'\npostmaster.c: In function `DoBackend':\npostmaster.c:1503: too few arguments to function `gettimeofday'\npostmaster.c:1456: warning: unused variable `tz'\nmake[2]: *** [postmaster.o] Error 1\n\nHas anyone successfully compiled latest cvs sources under Linux or these\nproblems are just mine ?\n\n\tRegards,\n\n\t\tOleg\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Tue, 11 May 1999 10:45:24 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "problem compiling 6.5 cvs (Linux, gcc 2.7.2, egcs 1.12)"
},
{
"msg_contents": "On Tue, 11 May 1999, Oleg Bartunov wrote:\n> Sereval days I can't compile 6.5 cvs sources under Linux, \n> I got sporadical errors so I suspected it's compiler/hardware\n> problem. I'm using egcs 1.12 release.\n\nI have had no problems with redhat 5.2(gcc 2.7.2.3, glibc 2.0)\nBut with redhat 6.0(egcs 1.1.2, glibc 2.1) I'm having one problem.\npsql doesn't compile :)\npsql.c:152: initializer element is not constant\nwhich is \nstatic FILE * cur_cmd_source = stdin;\n\nI don't believe I have ever seen that error, and I don't know why it has\nproblems with that line.\n\nOther than that it works great here.\n\nOle Gjerde\n\n\n",
"msg_date": "Tue, 11 May 1999 04:24:50 -0500 (CDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] problem compiling 6.5 cvs (Linux, gcc 2.7.2, egcs\n\t1.12)"
},
{
"msg_contents": "On Tue, 11 May 1999, Oleg Bartunov wrote:\n> Something is weird for me. After posting my problems with compiling 6.5 cvs\n> I compiled the same cvs source at home using egcs-1.12 release.\n> Everything was ok and regression tests passed as usual with some of\n> them failed (int2,int4, triggers etc). At home I run Linux 2.2.7 while at\n> work I use Linux 2.0.36. Could this be a problem ?\n\nShouldn't be. However, have been using 2.2.x for a while, so I'm not\nsure. It unlikely tho, since I'm sure many people are still running\n2.0.36..\n\nAre you sure /usr/include/linux is a symlink to\n/usr/src/linux/include/linux?\n\nWhat glibc version are you running and when did you update your CVS\nsource last?\n\n----\nI did find a solution to the psql.c compiling problem with glibc 2.1.\n\nIf you change the\nstatic FILE * cur_cmd_source = stdin;\nto\nstatic FILE * cur_cmd_source;\n\nand then add\ncur_cmd_source = stdin;\nto main() it seems to both compile and work fine. Maybe I'm too tired to\nfigure out that static is making a difference in this case :)\n\nOle Gjerde\n\n\n",
"msg_date": "Tue, 11 May 1999 05:01:14 -0500 (CDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] problem compiling 6.5 cvs (Linux, gcc 2.7.2, egcs\n\t1.12)"
},
{
"msg_contents": "Something is weird for me. After posting my problems with compiling 6.5 cvs\nI compiled the same cvs source at home using egcs-1.12 release.\nEverything was ok and regression tests passed as usual with some of\nthem failed (int2,int4, triggers etc). At home I run Linux 2.2.7 while at\nwork I use Linux 2.0.36. Could this be a problem ?\n\n\tRegards,\n\n\t\tOleg\n\nOn Tue, 11 May 1999 [email protected] wrote:\n\n> Date: Tue, 11 May 1999 04:24:50 -0500 (CDT)\n> From: [email protected]\n> To: Oleg Bartunov <[email protected]>\n> Cc: [email protected]\n> Subject: Re: [HACKERS] problem compiling 6.5 cvs (Linux, gcc 2.7.2, egcs 1.12)\n> \n> On Tue, 11 May 1999, Oleg Bartunov wrote:\n> > Sereval days I can't compile 6.5 cvs sources under Linux, \n> > I got sporadical errors so I suspected it's compiler/hardware\n> > problem. I'm using egcs 1.12 release.\n> \n> I have had no problems with redhat 5.2(gcc 2.7.2.3, glibc 2.0)\n> But with redhat 6.0(egcs 1.1.2, glibc 2.1) I'm having one problem.\n> psql doesn't compile :)\n> psql.c:152: initializer element is not constant\n> which is \n> static FILE * cur_cmd_source = stdin;\n> \n> I don't believe I have ever seen that error, and I don't know why it has\n> problems with that line.\n> \n> Other than that it works great here.\n> \n> Ole Gjerde\n> \n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Tue, 11 May 1999 14:01:40 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] problem compiling 6.5 cvs (Linux, gcc 2.7.2,\n\tegcs 1.12)"
},
{
"msg_contents": "[email protected] writes:\n> But with redhat 6.0(egcs 1.1.2, glibc 2.1) I'm having one problem.\n> psql doesn't compile :)\n> psql.c:152: initializer element is not constant\n> which is \n> static FILE * cur_cmd_source = stdin;\n\n> I don't believe I have ever seen that error, and I don't know why it has\n> problems with that line.\n\nYou have a broken compiler IMHO --- on any reasonable system, stdin\nshould be a load-time constant address, and load-time constants are\nrequired by the standard to be acceptable initializers for statics.\nEither stdin is defined in a very peculiar way, or the compiler is\nincapable of handling non-compile-time-constant initializers. In\neither case it's gonna fail on a lot more things than just Postgres.\n\nHowever, it's easy enough to work around it (as you noted in your\nfollowup). I'll commit the change.\n\n\nRe Oleg's original complaint: I'm not seeing any problem with a\nCVS fileset that I pulled afresh on Saturday morning. I think\nhe somehow got a corrupted copy...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 11 May 1999 10:35:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] problem compiling 6.5 cvs (Linux, gcc 2.7.2,\n egcs 1.12) "
},
{
"msg_contents": "On Tue, 11 May 1999, Tom Lane wrote:\n\n> Date: Tue, 11 May 1999 10:35:55 -0400\n> From: Tom Lane <[email protected]>\n> To: [email protected]\n> Cc: Oleg Bartunov <[email protected]>, [email protected]\n> Subject: Re: [HACKERS] problem compiling 6.5 cvs (Linux, gcc 2.7.2, egcs 1.12) \n> \n> [email protected] writes:\n> > But with redhat 6.0(egcs 1.1.2, glibc 2.1) I'm having one problem.\n> > psql doesn't compile :)\n> > psql.c:152: initializer element is not constant\n> > which is \n> > static FILE * cur_cmd_source = stdin;\n> \n> > I don't believe I have ever seen that error, and I don't know why it has\n> > problems with that line.\n> \n> You have a broken compiler IMHO --- on any reasonable system, stdin\n> should be a load-time constant address, and load-time constants are\n> required by the standard to be acceptable initializers for statics.\n> Either stdin is defined in a very peculiar way, or the compiler is\n> incapable of handling non-compile-time-constant initializers. In\n> either case it's gonna fail on a lot more things than just Postgres.\n> \n> However, it's easy enough to work around it (as you noted in your\n> followup). I'll commit the change.\n> \n\nI have no problem with that specific case, but I had a problem \nwith compiling latest cvs which arises from wrong configure suggestion - \nI had to make change in config.h\n#define HAVE_GETTIMEOFDAY_2_ARGS 1\nto compile cvs sources. This happens only recently because \nI've never seen this problem.\n\n> \n> Re Oleg's original complaint: I'm not seeing any problem with a\n> CVS fileset that I pulled afresh on Saturday morning. I think\n> he somehow got a corrupted copy...\n\nI too, at home :-) At work I still have sporadical errors in compiling\non one of my machine (Linux 2.0.36, libc5 5.0.46, egcs 1.12 -release).\nOn another machine (Linux 2.0.36, gcc 2.7.2.3, ibc5 5.0.46) after \n trivial change in config.h (see above) I got fully workable postgres :-)\nAll sources were identical. 
I did resync of cvs. So, actually \nI have weird problem only on one machine and I'll see what were changed\nin my setup.\n\n> \n> \t\t\tregards, tom lane\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Tue, 11 May 1999 19:31:19 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] problem compiling 6.5 cvs (Linux, gcc 2.7.2,\n egcs 1.12) "
},
{
"msg_contents": "Oleg Bartunov <[email protected]> writes:\n> I have no problem with that specific case, but I had a problem \n> with compiling latest cvs which arises from wrong configure suggestion - \n> I had to make change in config.h\n> #define HAVE_GETTIMEOFDAY_2_ARGS 1\n> to compile cvs sources. This happens only recently because \n> I've never seen this problem.\n\nHmm, that's annoying isn't it? Would you please look at configure.in\nand see if you can find out why it's making the wrong guess on your\nsystem?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 11 May 1999 13:53:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] problem compiling 6.5 cvs (Linux, gcc 2.7.2,\n egcs 1.12) "
}
] |
[
{
"msg_contents": "\nGabriele Neukam schrieb in Nachricht <[email protected]>...\n>On Sat, 8 May 1999 22:58:01 +0200, \"GreatFreak\" <[email protected]>\n>wrote:\n>\n>>\n>>\n>Wo liegt eigentlich .ru?\n>\nRUssia\n\n",
"msg_date": "Tue, 11 May 1999 13:21:56 +0200",
"msg_from": "\"Heinz Ekker\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Frage!"
}
] |
[
{
"msg_contents": "Another cool cd's with software noticed in\nhttp://www.bcity.com/cdromsoftware\n\n\n",
"msg_date": "Tue, 11 May 1999 15:49:14 +0400",
"msg_from": "\"Larry Cage\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "nice software shopping"
}
] |
[
{
"msg_contents": "\n> I have applied a new refint from Massimo. Can people check that and\n> make the needed fixes to it and trigger.sql?\n> \nSince you seem to want this functionality, could you alter\ncheck_primary_key, \nso that it chooses a default action of \"dependent\", and not force us to\nspecify an action ?\n\nThanx\nAndreas\n",
"msg_date": "Tue, 11 May 1999 14:08:37 +0200",
"msg_from": "ZEUGSWETTER Andreas IZ5 <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: [HACKERS] misc and triggers regression tests failed on 6.\n\t5bet a1"
},
{
"msg_contents": "> \n> > I have applied a new refint from Massimo. Can people check that and\n> > make the needed fixes to it and trigger.sql?\n> > \n> Since you seem to want this functionality, could you alter\n> check_primary_key, \n> so that it chooses a default action of \"dependent\", and not force us to\n> specify an action ?\n\nI applied it because it was supplied to me. I have no opinion on way or\nthe other. If you want to reverse this all out, feel free, or add the\ndependent option.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 11 May 1999 11:07:37 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: [HACKERS] misc and triggers regression tests failed on 6.\n\t5bet a1"
}
] |
[
{
"msg_contents": "[email protected] writes:\n> I tried to create the table below using psql, but it bombed out\n> with a message about loosing the backend, though the backend was\n> still running nicely. It seems to be a problem with the long\n> field name of the serial (and primary key) column.\n\nYou didn't say which version you are using, but 6.5-current returns a\nmore helpful error message:\n\nERROR: CREATE TABLE/SERIAL implicit sequence name must be less than 32 characters\n Sum of lengths of 'globalafvigelse' and 'globalafvigelse' must be less than 27\n\nThis is forced by the naming conventions for the underlying sequence and\nindex objects, which look like \"TABLE_FIELD_seq\" and so forth.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 11 May 1999 10:01:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [INTERFACES] Bug in psql? "
},
{
"msg_contents": "> [email protected] writes:\n> > I tried to create the table below using psql, but it bombed out\n> > with a message about loosing the backend, though the backend was\n> > still running nicely. It seems to be a problem with the long\n> > field name of the serial (and primary key) column.\n> \n> You didn't say which version you are using, but 6.5-current returns a\n> more helpful error message:\n> \n> ERROR: CREATE TABLE/SERIAL implicit sequence name must be less than 32 characters\n> Sum of lengths of 'globalafvigelse' and 'globalafvigelse' must be less than 27\n\n\nHmm, this is rather user unfriendly (but at least an accurate error\nmessage.) It's also not compatible, I think, with other RDBMS that allow\n'serial' types, is it? Any problem with truncating the field name?\nI.e. are there are places in the code that build this sequence name,\nrather than looking it up by oid or some such? If not, shorten it, I say!\n\nWell, at least, add it to the TODO list for testing - see if anything\nbreaks if we just hack it off at 27 chars. 
Same goes for all the implicit\nindicies, I guess.\n\nHmm, this raises another point: problem with serial in 6.4.2 with MixedCase\ntable of field names (wrapped for your email viewing pleasure):\n\ntest=> create table \"TestTable\" (\"Field\" serial primary key, some text);\nNOTICE: CREATE TABLE will create implicit sequence TestTable_Field_seq\nfor SERIAL column TestTable.Field\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index\nTestTable_pkey for table TestTable\nCREATE\ntest=> insert into \"TestTable\" (some) values ('test text');\nERROR: testtable_field_seq.nextval: sequence does not exist\ntest=> \\ds\n\nDatabase = test\n +------------------+----------------------------------+----------+\n | Owner | Relation | Type |\n +------------------+----------------------------------+----------+\n | reedstrm | TestTable_Field_seq | sequence |\n +------------------+----------------------------------+----------+\ntest=> \n\nAnybody test this on 6.5? \n\nI seem to remember it being reported many months ago in another context\n- ah yes, the problem was using a functionname as a defualt which had\nmixed case in it. In that case, the standard quoting didn't seem to\nwork, either. I think it was resolved. Anyone remember?\n\nRoss (a.k.a. Mister MixedCase)\n\nP.S. the mixed case mess comes from prototyping in MS-Access, and transfering\nto PostGreSQL. Given the number of Access Q.s that've been turning up, I bet\nwe see a lot of this.\n\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Tue, 11 May 1999 11:52:33 -0500 (CDT)",
"msg_from": "[email protected] (Ross J. Reedstrom)",
"msg_from_op": false,
"msg_subject": "Re: [INTERFACES] Bug in psql?"
}
] |
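The 32-character limit discussed in the thread above comes from the implicit sequence name "<table>_<field>_seq". The Python sketch below illustrates that naming rule, assuming the 31-character identifier limit of PostgreSQL 6.5; it is an illustration of the arithmetic behind the error message, not the backend's actual code:

```python
# Sketch of the rule behind the error message quoted in the thread:
# the implicit sequence is named "<table>_<field>_seq", identifiers
# are capped at 31 characters, so the two name lengths must sum to
# less than 27 (31 minus the 4 extra characters "_" + "_seq" + 1).
MAX_IDENT = 31  # assumed NAMEDATALEN-1 for PostgreSQL 6.5

def serial_sequence_name(table: str, field: str) -> str:
    name = f"{table}_{field}_seq"
    if len(name) > MAX_IDENT:
        raise ValueError(
            f"sum of lengths of '{table}' and '{field}' "
            f"must be less than {MAX_IDENT - 4}"
        )
    return name

ok = serial_sequence_name("t1", "f1")           # fits easily
try:
    # 15 + 15 characters: name would be 35 characters, over the cap
    serial_sequence_name("globalafvigelse", "globalafvigelse")
    too_long_rejected = False
except ValueError:
    too_long_rejected = True
print(ok, too_long_rejected)
```

This is also why the truncation idea raised in the thread would need care: shortening the derived name at creation time only works if every later lookup derives the name the same way.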
[
{
"msg_contents": "With current sources:\n\nregression=> create table t1 ( f1 serial primary key );\nNOTICE: CREATE TABLE will create implicit sequence t1_f1_seq for SERIAL column t1.f1\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index t1_pkey for table t1\nCREATE\n\nOK so far ...\n\nregression=> create table t2 ( f1 serial,\nregression-> primary key (f1) );\nNOTICE: CREATE TABLE will create implicit sequence t2_f1_seq for SERIAL column t2.f1\nNOTICE: CREATE TABLE/UNIQUE will create implicit index t2_f1_key for table t2\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index t2_pkey for table t2\nCREATE\n\nAnd, indeed, it's made two separate indexes on t2's f1 field. This is\na bug, no?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 11 May 1999 10:05:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "SERIAL + PRIMARY KEY = redundant indexes"
},
{
"msg_contents": "> regression=> create table t2 ( f1 serial, primary key (f1) );\n> NOTICE: CREATE TABLE will create implicit sequence t2_f1_seq for SERIAL column t2.f1\n> NOTICE: CREATE TABLE/UNIQUE will create implicit index t2_f1_key for table t2\n> NOTICE: CREATE TABLE/PRIMARY KEY will create implicit index t2_pkey for table t2\n> CREATE\n> And, indeed, it's made two separate indexes on t2's f1 field. This is\n> a bug, no?\n\nSi. I'll look at it.\n\n - Tom\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 11 May 1999 15:40:38 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SERIAL + PRIMARY KEY = redundant indexes"
},
{
"msg_contents": "fwiw, there is this comment in the code:\n\n * Note that this code does not currently look for all possible\nredundant cases\n * and either ignore or stop with warning. The create might fail\nlater when\n * names for indices turn out to be duplicated, or a user might have\nspecified\n * extra useless indices which might hurt performance. - thomas\n1997-12-08\n\nBut I should (probably) be able to fix this particular case.\n\n - Tom\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Tue, 11 May 1999 16:10:53 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SERIAL + PRIMARY KEY = redundant indexes"
}
] |
[
{
"msg_contents": " While looking at all these parsetrees I wonder why the hell\n the GroupClause contains a complete copy of the TLE at all?\n The planner depends on finding a corresponding entry in the\n targetlist which should contain the same expression. At least\n it needs an equal junk TLE. For the query\n\n SELECT a, b FROM t1 GROUP BY b + 1;\n\n the parser in fact creates 3 TLE's where the last one is a\n junk result named \"resjunk\" for the \"b + 1\" expression and\n the GroupClause contains a totally equal TLE.\n\n Could someone explain that please?\n\n Wouldn't it be better to have another field (resgroupno e.g.)\n in the resdom which the GroupClause can reference? Then\n changing the resno's or even replacing the entire expression\n wouldn't hurt because make_subplanTargetList() could match\n them this way and the expressions for the subplans can be\n pulled out directly from the targetlist. And it would save\n processing the group clauses in the rewriting because they\n cannot contain Var nodes anymore and the entire list can be\n ignored.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Tue, 11 May 1999 19:07:31 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "More on GROUP BY"
},
{
"msg_contents": "> While looking at all these parsetrees I wonder why the hell\n> the GroupClause contains a complete copy of the TLE at all?\n> The planner depends on finding a corresponding entry in the\n> targetlist which should contain the same expression. At least\n> it needs an equal junk TLE. For the query\n> \n> SELECT a, b FROM t1 GROUP BY b + 1;\n> \n> the parser in fact creates 3 TLE's where the last one is a\n> junk result named \"resjunk\" for the \"b + 1\" expression and\n> the GroupClause contains a totally equal TLE.\n> \n> Could someone explain that please?\n> \n> Wouldn't it be better to have another field (resgroupno e.g.)\n> in the resdom which the GroupClause can reference? Then\n> changing the resno's or even replacing the entire expression\n> wouldn't hurt because make_subplanTargetList() could match\n> them this way and the expressions for the subplans can be\n> pulled out directly from the targetlist. And it would save\n> processing the group clauses in the rewriting because they\n> cannot contain Var nodes anymore and the entire list can be\n> ignored.\n\nI think I can comment on this. Aggregates had the similar problem. It\nwas so long ago, I don't remember the solution, but it was a pain to\nkeep the aggs up-to-date with the target list and varno changes. If you\nthink a redesign will fix the problem, go ahead.\n\nI think the old problem may have been that the old Aggreg's kept\npointers to matching target list entries, so there was the aggregate in\nthe target list, and another separate list of aggregates in the Query\nstructure. I think I removed the second copy, and just generated it in\nthe executor, where it was needed.\n\nPlease see parser/parse_agg.c for a description of how count(*) is\nhandled differently than other aggregates.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 11 May 1999 14:01:31 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] More on GROUP BY"
},
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n> While looking at all these parsetrees I wonder why the hell\n> the GroupClause contains a complete copy of the TLE at all?\n> The planner depends on finding a corresponding entry in the\n> targetlist which should contain the same expression. At least\n> it needs an equal junk TLE. For the query\n> SELECT a, b FROM t1 GROUP BY b + 1;\n> the parser in fact creates 3 TLE's where the last one is a\n> junk result named \"resjunk\" for the \"b + 1\" expression and\n> the GroupClause contains a totally equal TLE.\n\n> Could someone explain that please?\n\nAll true, but so what? It wastes a few bytes of memory during\nplanning, I suppose...\n\n> Wouldn't it be better to have another field (resgroupno e.g.)\n> in the resdom which the GroupClause can reference? Then\n> changing the resno's or even replacing the entire expression\n> wouldn't hurt because make_subplanTargetList() could match\n> them this way and the expressions for the subplans can be\n> pulled out directly from the targetlist. And it would save\n> processing the group clauses in the rewriting because they\n> cannot contain Var nodes anymore and the entire list can be\n> ignored.\n\nI think I like better the idea of leaving the representation alone,\nbut using equal() on the exprs to match groupclause items to targetlist\nentries. That way, manipulation of the targetlist can't accidentally\ncause the grouplist to look like it contains something different than\nwhat it should have. It doesn't bother me that the planner can fail\nif it is unable to match a group item to a targetlist item --- that's\na good crosscheck that nothing's gone wrong. (But matching on just\nthe resno is unsafe, as you said before.)\n\nI think it's true that using a TLE for each grouplist item is a waste of\nspace, and that representing the grouplist as simply a list of expr's\nwould be good enough. 
But pulling out the TLE decoration seems like\nit's not an appropriate use of time at this stage of the release cycle.\nI'd say hold off till after 6.5, then fold it in with the parsetree\nredesign that you keep muttering we need (I agree!).\n\nBTW, you keep using the term \"RTE\" which I'm not familiar with ---\nI assume it's just referring to the parse tree nodes?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 11 May 1999 14:30:20 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] More on GROUP BY "
},
{
"msg_contents": "> > Wouldn't it be better to have another field (resgroupno e.g.)\n> > in the resdom which the GroupClause can reference? Then\n> > changing the resno's or even replacing the entire expression\n> > wouldn't hurt because make_subplanTargetList() could match\n> > them this way and the expressions for the subplans can be\n> > pulled out directly from the targetlist. And it would save\n> > processing the group clauses in the rewriting because they\n> > cannot contain Var nodes anymore and the entire list can be\n> > ignored.\n> \n> I think I like better the idea of leaving the representation alone,\n> but using equal() on the exprs to match groupclause items to targetlist\n> entries. That way, manipulation of the targetlist can't accidentally\n> cause the grouplist to look like it contains something different than\n> what it should have. It doesn't bother me that the planner can fail\n> if it is unable to match a group item to a targetlist item --- that's\n> a good crosscheck that nothing's gone wrong. (But matching on just\n> the resno is unsafe, as you said before.)\n\nThe real problem is that it is hard to keep all these items synchronized\nas they pass around through the stages. I looked at the aggregates, and\nit looks like that has a separate copy too, so it may not be that bad. \nWe may just be missing a pass that makes appropriate changes in GROUP\nclauses, but I am not sure.\n\n\n> I think it's true that using a TLE for each grouplist item is a waste of\n> space, and that representing the grouplist as simply a list of expr's\n> would be good enough. But pulling out the TLE decoration seems like\n> it's not an appropriate use of time at this stage of the release cycle.\n> I'd say hold off till after 6.5, then fold it in with the parsetree\n> redesign that you keep muttering we need (I agree!).\n\nBasically, it is my fault that I am bringing up the issue so late. 
If I\nhad done the Open Items list earlier, we would not be so close to the\nfinal.\n\nJan is currently researching it, and we have the regression tests and\nthree weeks. He usually does a fine job of fixing things. Jan, find\nout what you think is required to get this working, and if it is not too\nbad, maybe we should go ahead.\n\nWhat does the Aggreg do? Does it has similar duplication?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 11 May 1999 15:01:10 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] More on GROUP BY"
},
{
"msg_contents": "Tom Lane wrote:\n\n> All true, but so what? It wastes a few bytes of memory during\n> planning, I suppose...\n\n Not only memory, rewriting takes time too. If the GROUP BY\n clause has view columns, the rewriter has to change all the\n expressions twice, one time in the targetlist, another time\n in the group clause. Wasted efford if they should still be\n equal after rewriting.\n\n> I think it's true that using a TLE for each grouplist item is a waste of\n> space, and that representing the grouplist as simply a list of expr's\n> would be good enough. But pulling out the TLE decoration seems like\n> it's not an appropriate use of time at this stage of the release cycle.\n> I'd say hold off till after 6.5, then fold it in with the parsetree\n> redesign that you keep muttering we need (I agree!).\n\n I'm sure since I began to look at it that it's not such a\n good idea to make those changes at this stage. But in fact\n some of them have to be done to make GROUP BY in views more\n robust.\n\n>\n> BTW, you keep using the term \"RTE\" which I'm not familiar with ---\n> I assume it's just referring to the parse tree nodes?\n\n RTE == RangeTableEntry. The relations where the data in the\n querytree is coming from. The varno in Var nodes is the index\n into that table so the varno plus the varattno identify one\n relations attribute.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Wed, 12 May 1999 12:22:03 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] More on GROUP BY"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n> The real problem is that it is hard to keep all these items synchronized\n> as they pass around through the stages. I looked at the aggregates, and\n> it looks like that has a separate copy too, so it may not be that bad.\n> We may just be missing a pass that makes appropriate changes in GROUP\n> clauses, but I am not sure.\n\n Exactly the probability of missing some changes somewhere is\n why I would like to get rid of the field entry in\n GroupClause. Having to keep duplicate items synced isn't the\n spirit of a relational database. It's like using triggers to\n keep two tables in sync where a simple reference and a view\n would do a better job.\n\n> > I think it's true that using a TLE for each grouplist item is a waste of\n> > space, and that representing the grouplist as simply a list of expr's\n> > would be good enough. But pulling out the TLE decoration seems like\n> > it's not an appropriate use of time at this stage of the release cycle.\n> > I'd say hold off till after 6.5, then fold it in with the parsetree\n> > redesign that you keep muttering we need (I agree!).\n>\n> Basically, it is my fault that I am bringing up the issue so late. If I\n> had done the Open Items list earlier, we would not be so close to the\n> final.\n\n And my fault to spend too much time playing around with a\n raytracer. Developers should develop, publishers should\n publish.\n\n>\n> Jan is currently researching it, and we have the regression tests and\n> three weeks. He usually does a fine job of fixing things.\n\n Thanks for the compliment :-)\n\n> Jan, find\n> out what you think is required to get this working, and if it is not too\n> bad, maybe we should go ahead.\n>\n> What does the Aggreg do? Does it has similar duplication?\n\n Not AFAICS. Aggregates can only appear in the targetlist by\n default. and the nodes below them are the expression to\n aggregate over. 
If an aggregate should appear in the WHERE\n clause it must be placed into a proper subselect (the rule\n system already tries to do so if an aggregate column is used\n in the WHERE of a view select, but fails sometimes).\n\n I'll go ahead now in little steps.\n\n 1. Get rid of the TLE copy in GroupClause.\n\n 2. Move the targetlist expansion into the rule system.\n\n 3. Rewrite the subquery creation for aggregates in the WHERE\n clause to take view grouping into account.\n\n 4. Allow qualifications against aggregates to be given as\n \"AggCol op Value\" by swapping the expression and using\n the negator operator (if one exists).\n\n As you said, we have three weeks. Let's see what one Wieck\n can do in this time :-)\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Wed, 12 May 1999 13:13:08 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] More on GROUP BY"
},
{
"msg_contents": "I wrote:\n\n> I'll go ahead now in little steps.\n>\n> 1. Get rid of the TLE copy in GroupClause.\n\n Done. GroupClause now identifies the TLE by a number which is\n placed into the Resdom by parser/rewriter.\n\n New initdb required because of modifications in node\n print/read functions.\n\n>\n> 2. Move the targetlist expansion into the rule system.\n\n Not required anymore AFAICS. Instead I modified the\n targetlist preprocessing in the planner so that junk\n attributes used by group clauses get added again to the\n expanded list.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n",
"msg_date": "Wed, 12 May 1999 17:16:11 +0200 (MET DST)",
"msg_from": "[email protected] (Jan Wieck)",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] More on GROUP BY"
}
] |
[
{
"msg_contents": "Attached is an installation FAQ for SCO UnixWare and OpenServer related\nissues. It should be placed in the doc/ directory along with the other\nplatform-specific FAQs. In addition, it would be nice to have it on the\n\"Documentation and FAQs\" web page, in the \"Platform-specific FAQs\" list,\nalong with Linux, Irix, and HP-UX.\n\nAndrew Merrill\nThe Computer Classroom, Inc., a SCO Authorized Education Center",
"msg_date": "Tue, 11 May 1999 10:19:03 -0700",
"msg_from": "Andrew Merrill <[email protected]>",
"msg_from_op": true,
"msg_subject": "new SCO installation FAQ"
}
] |
[
{
"msg_contents": "Hi,\n\nI have found a bug that is very easy to reprouduce here:\n\nThe table is created with this code:\n\ncreate table DIAGNOSE (ART varchar(1) NOT NULL, AUFENTHALTNR int NOT NULL, \nCREATION_DATE datetime, CREATION_USER varchar(30), DAUERDIAGNOSENR int NOT \nNULL, MODIFICATION_DATE datetime, MODIFICATION_USER varchar(30), NR int NOT \nNULL, OPNR int, TEXT varchar(250) NOT NULL, VERWEILDAUER int, ZEIT datetime);\n\nCREATE UNIQUE INDEX DIAGNOSE_NR ON DIAGNOSE (NR);\n\nThen create a file that contains 100 000 times this line and save it as \n/tmp/DBTEST.sql:\n\nINSERT INTO DIAGNOSE (CREATION_DATE, CREATION_USER, VERWEILDAUER, ART, \nTEXT, AUFENTHALTNR, NR, OPNR, DAUERDIAGNOSENR, MODIFICATION_USER, \nMODIFICATION_DATE, ZEIT) VALUES ('Fri May 07 21:10:52 1999 CEST', 'USER', \nNULL, 'X', 'Noch nicht eingegeben', 71108, 316707, NULL, 0, 'USER', 'Fri May \n07 21:10:52 1999 CEST', 'Fri May 07 21:10:52 1999 CEST');\n\n\nThen do\n\npsql -q -f /tmp/DBTEST.sql mydb\n\nWait.\n\nThe error messages are ok. But not the closing of the backend!\n\nIs this bug known? Are there any patches out? I use postgresql-6.4.2 on NetBSD.\n\nNOTE: I have only subscribed [email protected]\n\n---\n _ _\n _(_)(_)_ David Wetzel, Turbocat's Development,\n(_) __ (_) Buchhorster Strasse, D-16567 Muehlenbeck/Berlin, FRG,\n _/ \\_ Fax +49 33056 82835 NeXTmail [email protected]\n (______) http://www.turbocat.de/\n DEVELOPMENT * CONSULTING * ADMINISTRATION\n",
"msg_date": "Wed, 12 May 99 02:37:16 +0200",
"msg_from": "David Wetzel <[email protected]>",
"msg_from_op": true,
"msg_subject": "backend closed the channel unexpectedly"
},
{
"msg_contents": "David Wetzel <[email protected]> writes:\n> I have found a bug that is very easy to reprouduce here:\n\n> [ backend crash after many attempts to insert same data into a\n> uniquely-indexed table ]\n\nThis may be the same bug that was discussed earlier today on the hackers\nlist: the backend fails to release temporary memory after an error, so\nif you make it generate enough error messages during one session you can\neventually run out of memory. Watch the process with 'top' or some such\nto see if it gets up to your system's limit on process size just before\nfailing.\n\nI'm hoping to look into the cause of the memory leak shortly, but\nno guarantees about how soon it might be fixed ;-).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 12 May 1999 19:15:24 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] backend closed the channel unexpectedly "
},
{
"msg_contents": "> From: Tom Lane <[email protected]>\n> Sender: [email protected]\n>\n> David Wetzel <[email protected]> writes:\n> > I have found a bug that is very easy to reprouduce here:\n>\n> > [ backend crash after many attempts to insert same data into a\n> > uniquely-indexed table ]\n>\n> This may be the same bug that was discussed earlier today on the hackers\n\nI cc'ed the mail to [email protected], [email protected] \nBut I only subscribe to [email protected].\n\n(...)\n> I'm hoping to look into the cause of the memory leak shortly, but\n> no guarantees about how soon it might be fixed ;-).\n>\n> \t\t\tregards, tom lane\n\nGreat. What we did as bugfix in our importer program is to close and reopen \nthe connection to the backend each 1000 rows. This is very ugly but seems to \nwork until someone with the backend-knowlege has fixed the bug.\n\nFirst we thought that we made a mistake in our EOAdaptor*) for PSQL and \ndebugged that beast for many hours...\n\nDave\n\n*) For EOF and WebObjects from Apple/NeXT see www.turbocat.de\n _ _\n _(_)(_)_ David Wetzel, Turbocat's Development,\n(_) __ (_) Buchhorster Strasse, D-16567 Muehlenbeck/Berlin, FRG,\n _/ \\_ Fax +49 33056 82835 NeXTmail [email protected]\n (______) http://www.turbocat.de/\n DEVELOPMENT * CONSULTING * ADMINISTRATION\n",
"msg_date": "Thu, 13 May 99 01:41:29 +0200",
"msg_from": "David Wetzel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PORTS] Re: [HACKERS] backend closed the channel unexpectedly "
}
] |
[
{
"msg_contents": "I have re-sorted the list:\n\n---------------------------------------------------------------------------\n\n\nDefault of '' causes crash in some cases\nshift/reduce conflict in grammar, SELECT ... FOR [UPDATE|CURSOR]\ncreate table \"AA\" ( x int4 , y serial ); insert into \"AA\" (x) values (1); fails\nSELECT 1; SELECT 2 fails when sent not via psql, semicolon problem\nSELECT * FROM test WHERE test IN (SELECT * FROM test) fails with strange error\nCREATE OPERATOR *= (leftarg=_varchar, rightarg=varchar, \n\tprocedure=array_varchareq); fails, varchar is reserved word, quotes work\nCLUSTER failure if vacuum has not been performed in a while\nImprove Subplan list handling\nAllow Subplans to use efficient joins(hash, merge) with upper variable\nImprove NULL parameter passing into functions\nTable with an element of type inet, will show \"0.0.0.0/0\" as \"00/0\"\nWhen creating a table with either type inet or type cidr as a primary,unique\n key, the \"198.68.123.0/24\" and \"198.68.123.0/27\" are considered equal\nAllow ESCAPE '\\' at the end of LIKE for ANSI compliance, or rewrite the\n\tLIKE handling by rewriting the user string with the supplied ESCAPE\nFix leak for expressions?, aggregates?\nRemove ERROR: check_primary_key: even number of arguments should be specified\nImprove LIMIT processing by using index to limit rows processed\nAllow \"col AS name\" to use name in WHERE clause? Is this ANSI? \n\tWorks in GROUP BY\nUpdate reltuples from COPY command\nTrigger regression test fails\nnodeResults.c and parse_clause.c give compiler warnings\nMove LIKE index optimization handling to the optimizer?\nMVCC locking, deadlock, priorities?\nMake sure pg_internal.init generation can't cause unreliability\nSELECT ... 
WHERE col ~ '(foo|bar)' works, but CHECK on table always fails\nCREATE INDEX zman_index ON test (date_trunc( 'day', zman ) datetime_ops) fails\n\tindex can't store constant parameters, allow SQL function indexes?\nHave hashjoins use portals, not fixed-size memory\nDROP TABLE leaves INDEX file descriptor open\nALTER TABLE ADD COLUMN to inherited table put column in wrong place\nresno's, sublevelsup corrupt when reaching rewrite system\ncrypt_loadpwdfile() is mixing and (mis)matching memory allocation\n protocols, trying to use pfree() to release pwd_cache vector from realloc()\n3 = sum(x) in rewrite system is a problem\n\nDo we want pg_dump -z to be the default?\npg_dump of groups fails\npg_dump -o -D does not work, and can not work currently, generate error?\npg_dump does not preserver NUMERIC precision, psql \\d should show precision\ndumping out sequences should not be counted in pg_dump display\n\nCREATE VIEW ignores DISTINCT?\nORDER BY mixed with DISTINCT causes duplicates\nCREATE TABLE t1 (a int4, b int4); CREATE VIEW v1 AS SELECT b, count(b)\n\tFROM t1 GROUP BY b; SELECT count FROM v1; fails\n\nDROP TABLE/RENAME TABLE doesn't remove extended files, *.1, *.2\nMulti-segment indexes?\nVacuum of tables >2 gigs - NOTICE: Can't truncate multi-segments relation\n\nWrite up CASE(), COALESCE(), IFNULL()\nAdd Vadim's isolation level syntax to gram.y, preproc.y\nMarkup sql.sgml, Stefan's intro to SQL\nMarkup cvs.sgml, cvs and cvsup howto\nAdd figures to sql.sgml and arch-dev.sgml, both from Stefan\nInclude Jose's date/time history in User's Guide (neat!)\nGenerate Admin, User, Programmer hardcopy postscript\n\n\nMake Serial its own type?\nAdd support for & operator\nstore binary-compatible type information in the system somewhere \nadd ability to add comments to system tables using table/colname combination\nprocess const=const parts of OR clause in separate pass\nmake oid use oidin/oidout not int4in/int4out in pg_type.h, make oid use\n\tunsigned int more reliably, 
pg_atoi()\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 11 May 1999 22:32:05 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Current Open Items"
}
] |
[
{
"msg_contents": "> CREATE [{ GLOBAL | LOCAL } TEMPORARY ] TABLE class_name\n\npostgres=> create local temporary table tt (i int);\nCREATE\npostgres=> create global temporary table tg (i int);\nERROR: GLOBAL TEMPORARY TABLE is not currently supported\npostgres=> create temporary table tn (i int);\nCREATE\n\nI just adjusted the source tree to include this, along with internal\nchanges to integrate Vadim's transaction level and locking syntax.\nThis will probably require a full \"make clean install\" since some\nkeywords were added.\n\n - Tom\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 12 May 1999 07:36:35 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] CREATE TEMP TABLE"
}
] |
[
{
"msg_contents": "> \n> > > I have applied a new refint from Massimo. Can people check that and\n> > > make the needed fixes to it and trigger.sql?\n> > > \n> > Since you seem to want this functionality, could you alter\n> > check_primary_key, \n> > so that it chooses a default action of \"dependent\", and not force us to\n> > specify an action ?\n> \n> I applied it because it was supplied to me. I have no opinion on way or\n> the other. If you want to reverse this all out, feel free, or add the\n> dependent option.\n> \nI am sorry, I misinterpreted the still failing trigger regression test. The\noffending code\nhas been removed, the action is now always dependent :-)\n\nI suggest the following patch, to finally make trigger regression happy\nagain:\n\n <<refint1.patch>> \nAfter that you can remove the following from TODO:\nRemove ERROR: check_primary_key: even number of arguments should be\nspecified\nTrigger regression test fails\n\nAndreas",
"msg_date": "Wed, 12 May 1999 10:50:07 +0200",
"msg_from": "ZEUGSWETTER Andreas IZ5 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] misc and triggers regression tests failed on 6. 5be t\n a1"
},
{
"msg_contents": "\nApplied.\n\n\n> > \n> > > > I have applied a new refint from Massimo. Can people check that and\n> > > > make the needed fixes to it and trigger.sql?\n> > > > \n> > > Since you seem to want this functionality, could you alter\n> > > check_primary_key, \n> > > so that it chooses a default action of \"dependent\", and not force us to\n> > > specify an action ?\n> > \n> > I applied it because it was supplied to me. I have no opinion on way or\n> > the other. If you want to reverse this all out, feel free, or add the\n> > dependent option.\n> > \n> I am sorry, I misinterpreted the still failing trigger regression test. The\n> offending code\n> has been removed, the action is now always dependent :-)\n> \n> I suggest the following patch, to finally make trigger regression happy\n> again:\n> \n> <<refint1.patch>> \n> After that you can remove the following from TODO:\n> Remove ERROR: check_primary_key: even number of arguments should be\n> specified\n> Trigger regression test fails\n> \n> Andreas\n\n[Attachment, skipping...]\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 12 May 1999 08:47:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] misc and triggers regression tests failed on 6. 5be t\n a1"
}
] |
[
{
"msg_contents": "Is this a known problem?\n\nCREATE TABLE test (\nplt int2 PRIMARY KEY,\nstate CHAR(5) NOT NULL DEFAULT 'new',\nused boolean NOT NULL DEFAULT 'f',\nid int4\n);\n\nINSERT INTO test (plt, id) VALUES (1, 1);\nINSERT INTO test (plt, id) VALUES (2, 2);\nINSERT INTO test (plt, id) VALUES (3, 3);\n\nSELECT * FROM test;\n\nplt|state|used|id\n---+-----+----+--\n 1|new |f | 1\n 2|new |f | 2\n 3|new |f | 3\n(3 rows)\n\nUPDATE test SET state = 'diff' WHERE plt = 1;\nSELECT * FROM test;\n\nplt|state|used| id\n---+-----+----+-----\n 2|new |f | 2\n 3|new |f | 3\n 1|diff |t |26144\n(3 rows)\n\n???\n--\nTatsuo Ishii\n",
"msg_date": "Wed, 12 May 1999 18:13:26 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "defalut value"
},
{
"msg_contents": "> Is this a known problem?\n> \n> CREATE TABLE test (\n> plt int2 PRIMARY KEY,\n> state CHAR(5) NOT NULL DEFAULT 'new',\n> used boolean NOT NULL DEFAULT 'f',\n> id int4\n> );\n> \n> INSERT INTO test (plt, id) VALUES (1, 1);\n> INSERT INTO test (plt, id) VALUES (2, 2);\n> INSERT INTO test (plt, id) VALUES (3, 3);\n> \n> SELECT * FROM test;\n> \n> plt|state|used|id\n> ---+-----+----+--\n> 1|new |f | 1\n> 2|new |f | 2\n> 3|new |f | 3\n> (3 rows)\n> \n> UPDATE test SET state = 'diff' WHERE plt = 1;\n> SELECT * FROM test;\n> \n> plt|state|used| id\n> ---+-----+----+-----\n> 2|new |f | 2\n> 3|new |f | 3\n> 1|diff |t |26144\n> (3 rows)\n> \n> ???\n\nThis is scary, but not unexpected. I have a bug report in my mailbox\nthat describes a similar problem with default. I am sure it is the same\ncause. Somewhere, default is broken, and it is on the Open Items list. \nI believe it is an improper default length field or rounding of length. \nI looked at it once, but could not find the cause.\n\nReport is below.\n\n---------------------------------------------------------------------------\n\n\nHi,\n\nI found a bug in 6.4.2 which seems to be\nrelated to the char(n) type and shows up\nif one assigns a zero-length default value.\n\nHere is an example:\n\n\ntest=> create table t1 (\ntest-> str1 char(2) default '', <---- note this one\ntest-> str2 text default '',\ntest-> str3 text default ''\ntest-> );\nCREATE\n\ntest=> insert into t1 values ('aa', 'string2', 'string3');\nINSERT 91278 1\ntest=> insert into t1 (str3) values ('string3');\nINSERT 91279 1\ntest=>test=> select * from t1;\nBackend message type 0x44 arrived while idle\nBackend message type 0x44 arrived while idle\nWe have lost the connection to the backend, so further processing is\nimpossible. 
Terminating.\n\nIf the table is created as\n\ncreate table t1 (\n str1 char(2) default ' ',\n str2 text default '',\n str3 text default ''\n);\n\nthe crash doesn't happen.\n\nRegards\nErich\n\n\n\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 12 May 1999 08:59:52 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] defalut value"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> 1|new |f | 1\n>> \n>> UPDATE test SET state = 'diff' WHERE plt = 1;\n>>\n>> 1|diff |t |26144\n>> \n>> ???\n\n> This is scary, but not unexpected. I have a bug report in my mailbox\n> that describes a similar problem with default. I am sure it is the\n> same cause. Somewhere, default is broken, and it is on the Open Items\n> list.\n\nBut the value looks correct at the first SELECT, so how could it be the\nfault of the DEFAULT clause? Seems you are right though, because I can\nreproduce the error given the stated table definition --- but not when\nthere's no defaults.\n\nInteresting data point: my value for the trashed ID field comes out as\n543555584 = 0x20660000, versus Tatsuo's 26144 = 0x00006620. My HP box\nis big-endian hardware, and I'm guessing that Tatsuo is using something\nlittle-endian. The data looks like it is the 'f' and space that would\nbe at the end of the \"state\" field. How did this get over into the \"id\"\nfield, especially without corrupting \"used\" in between?\n\nEven more interesting: if I declare the state field as\n\tstate CHAR(5) NOT NULL DEFAULT 'new ',\nall else the same, there's no error.\n\n> I believe it is an improper default length field or rounding of length. \n\nI think somehow, somewhere, the size of the default value is getting\nused instead of the size of the field itself. Weird. Is it specific\nto char(n), perhaps? That might help explain how the bug got past\nthe regression tests.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 12 May 1999 10:59:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] defalut value "
},
{
"msg_contents": "> I think somehow, somewhere, the size of the default value is getting\n> used instead of the size of the field itself. Weird. Is it specific\n> to char(n), perhaps? That might help explain how the bug got past\n> the regression tests.\n\nI'm pretty sure it is specific to char(n), and it is due to the 4 byte\ndifference in length between the string and the storage. When we fix\nthis one we will also fix the \"c char(2) default ''\" problem too.\n\nIn fact, I could have sworn I already had looked at it. Oh well.\n\n - Tom\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Wed, 12 May 1999 15:30:00 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] defalut value"
},
{
"msg_contents": "\nThis is now fixed in 6.5.\n\n\n> Is this a known problem?\n> \n> CREATE TABLE test (\n> plt int2 PRIMARY KEY,\n> state CHAR(5) NOT NULL DEFAULT 'new',\n> used boolean NOT NULL DEFAULT 'f',\n> id int4\n> );\n> \n> INSERT INTO test (plt, id) VALUES (1, 1);\n> INSERT INTO test (plt, id) VALUES (2, 2);\n> INSERT INTO test (plt, id) VALUES (3, 3);\n> \n> SELECT * FROM test;\n> \n> plt|state|used|id\n> ---+-----+----+--\n> 1|new |f | 1\n> 2|new |f | 2\n> 3|new |f | 3\n> (3 rows)\n> \n> UPDATE test SET state = 'diff' WHERE plt = 1;\n> SELECT * FROM test;\n> \n> plt|state|used| id\n> ---+-----+----+-----\n> 2|new |f | 2\n> 3|new |f | 3\n> 1|diff |t |26144\n> (3 rows)\n> \n> ???\n> --\n> Tatsuo Ishii\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 6 Jul 1999 23:21:03 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] defalut value"
}
] |